IDC IT Security Roadshow 2015, Istanbul

I’m going to be presenting “Start Thinking Like a Bad Guy” at the IDC IT Security Roadshow in Istanbul, Turkey on February 17.

The average cost of cybercrime for an organisation has escalated to $12.7 million a year, compared to $3.8 million in 2010, while the time to recover from a data breach has increased from 14 days in 2010 to 48 days in 2014.  While the security industry remains over-invested in products and technology, and under-invested in people and processes, hackers are spending more money and sharing information. Find out how to safeguard your business by changing the way you invest in and think about security – from the perspective of the criminals targeting you.

http://idc-cema.com/eng/events/59616-idc-it-security-roadshow-2015

SOC Mistake #6: You Don’t Focus on the Big Picture

This is a mistake we see a lot in Security Operations Centres whose SIEM Use Cases have been built using a bottom-up approach.  I discussed this in my post SOC Mistake #7: On Use Cases, You Model Your Defences, Not Your Attackers, where SIEM Use Cases are arrived at by looking at what event sources are easy to obtain or are already available, rather than what is needed to maximise the efficiency and effectiveness of a SIEM’s capability to detect attacks against your key line-of-business infrastructure.

In Security Operations Centres built this way, rules often cannot be written with enough granularity to give the analyst the context to determine whether something is a false positive or a real attack without significant digging around.  To cope with the event volume, the Security Operations Centre has to spend significantly on hiring additional Level 1 Analysts to perform event triage.  Another problem with bottom-up rules is that they can be extremely tricky to tune: they usually rely on simple correlations across two or three log sources with simple logic, and tuning them for one scenario can detune the detection capability for another.

In contrast, a top-down approach should provide multiple opportunities to detect the attack along the attack chain, so if one component of the staged rule is misfiring into the SIEM it is possible to tune anywhere along the staged rules that make up the attack chain.  With this approach you can start to tune out false positives (alarms where there is no real event) without introducing excessive false negatives (missing a real event).  The business impact and threat assessment you undertook as part of your Use Case Workshop should drive the tolerable level of false negatives: you compare the operational cost of the additional staffing needed to handle the false positives you can’t tune out without introducing false negatives, against the likelihood and impact of the event if you miss it.  Of course, you can’t make these kinds of judgements if you haven’t taken a top-down approach.  I’ll talk more in a later post about collecting and analysing metrics that tell you when to stop tuning, scrap a Use Case and start again with a different approach to it.
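To illustrate that trade-off, here is a minimal sketch of the comparison, with entirely made-up figures, between the staffing cost of keeping a noisy rule and the expected annual loss if the real event is missed:

```python
# Hedged sketch: compare the cost of keeping a noisy rule (extra triage staffing)
# against the expected annual loss of missing the real event.
# All figures are illustrative assumptions, not data from any real SOC.

FP_PER_DAY = 40             # false positives the noisy rule generates daily
MINUTES_PER_TRIAGE = 10     # analyst time to close one false positive
ANALYST_COST_PER_HOUR = 45  # fully loaded hourly cost of a Level 1 analyst

annual_triage_cost = FP_PER_DAY * 365 * (MINUTES_PER_TRIAGE / 60) * ANALYST_COST_PER_HOUR

LIKELIHOOD_PER_YEAR = 0.2   # estimated chance of the real attack occurring this year
IMPACT_IF_MISSED = 800_000  # estimated business impact if the attack is missed entirely

expected_loss_if_tuned_out = LIKELIHOOD_PER_YEAR * IMPACT_IF_MISSED

print(f"Annual triage cost of keeping the rule: {annual_triage_cost:,.0f}")
print(f"Expected annual loss if tuned out:      {expected_loss_if_tuned_out:,.0f}")
if annual_triage_cost < expected_loss_if_tuned_out:
    print("Keep the noisy rule and staff the triage queue")
else:
    print("Tune the rule out (or redesign the use case) and accept the residual risk")
```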

Think Big

Going back to the “Big Picture”, in these bottom-up Security Operations Centres you often see SIEM events in the triage console that resemble the raw events from the event sources, i.e. they don’t resemble ‘actionable intelligence’.  In fact, in one multinational company that paid a significant amount for their SIEM infrastructure we saw the SIEM platform receiving events from only a single device type, from a single manufacturer – they might as well have just sat the Analyst in front of the management console of that device and forgotten about the SIEM altogether.  What constitutes ‘actionable intelligence’ will differ depending on which SIEM vendor’s marketing glossies you’re reading, but to me it is enough information for a Level 1 Analyst to conduct initial triage without having to use a large number of investigatory tools: to triage false positives, determine the likely impact of the event on the organisation and determine the level of skill and possible motivations of the attacker.

A Use Case built using the top-down approach will provide this information.  The process of building these kinds of Use Cases involves modelling the vulnerabilities, threats and controls in the people, processes, applications, data, networks, compute and storage for each line-of-business.  Armed with information about where in the attack chain the attack has been detected, and all of the event information up to the point of detection (or beyond, if the rule also triggered a higher level of proactive monitoring, such as full packet capture, keystroke logging, or even redirecting the attacker to a tarpit to gather further information on their intent, tools, techniques and procedures), the analyst conducting the triage can, at a glance, make initial determinations about the impact, capability and scope of the attack.

The SIEM platform, ideally, should provide integrated tools for further analysis, such as the retrieval and visualisation of related historical logs to look for anomalies, correlations, affinity groups and context; as well as the ability to look up source IPs, packet captures or executables against threat intelligence sources – and beyond that, to query the configuration management or identity management servers to understand the use of, and recent configuration changes to, the machines involved, as well as the rights of the users involved.  In fact, in HP ArcSight this data can be automatically brought in to enrich the event before it is even opened by the Level 1 Analyst, making them more operationally efficient.
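To illustrate the kind of enrichment I mean, here is a minimal sketch of a home-grown pipeline; the function names, feeds and fields are assumptions for the example, not the ArcSight API:

```python
# Hedged sketch of event enrichment before an analyst sees the alert.
# The lookup functions and data sources are hypothetical placeholders.

def enrich_event(event: dict) -> dict:
    """Attach asset, identity and threat-intel context to a raw SIEM event."""
    src_ip = event.get("source_ip")
    host = event.get("destination_host")
    user = event.get("user")

    event["asset_context"] = cmdb_lookup(host)           # owner, line-of-business, recent changes
    event["identity_context"] = idm_lookup(user)          # roles, privileges, department
    event["threat_intel"] = threat_intel_lookup(src_ip)   # reputation, known campaigns
    return event

# Stubbed lookups so the sketch runs standalone; a real SOC would call the
# CMDB, identity management and threat intelligence services here instead.
def cmdb_lookup(host):        return {"owner": "unknown", "lob": "unknown", "recent_changes": []}
def idm_lookup(user):         return {"roles": [], "privileged": False}
def threat_intel_lookup(ip):  return {"reputation": "unknown", "campaigns": []}

if __name__ == "__main__":
    alert = {"source_ip": "203.0.113.7", "destination_host": "hr-app-01", "user": "jsmith"}
    print(enrich_event(alert))
```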

So what is the “Big Picture”?  The answer is understanding the Who? What? When? How? and, most difficult, Why? of the attack.  Faced with a deluge of rule fires that require significant effort to investigate, a large proportion of which end up being false positives they “never seem to be able to tune out”, when something that looks like a real attack is found the Analyst will often run around with their hair on fire.  Often they’ll escalate without answering these basic questions, and when a C-level exec has been dragged out of bed they’ll ask relevant questions that the SOC frequently can’t answer – Who? What? When? How? and Why?

Before an incident is declared, the function of a Security Operations Centre in a large organisation is to answer those questions – to prepare, to detect and to investigate.  They should be able to prioritise the incidents to be dealt with by understanding the capability of the adversary, the impact of the incident and the scope of the systems involved.  This is the information they should be passing to the incident responders to allow them to contain and eradicate; finally, IT operations works with the SOC to eliminate the vulnerabilities or apply additional controls, and then recover (increasing logging to detect whether the machine is attacked again).  During the whole process the Security Operations Centre should be working iteratively with the incident response team and IT operations.

Bad Use Cases that provide no context for the attack, poor integration of intrusion detection tools and a lack of knowledge of the context of systems and users, coupled with a lack of analytical skills among Analysts, result in a focus on individual events rather than the scope and impact of a potential incident or breach.

One story we frequently tell is of a SOC where the team hadn’t reached out to the IT Operations department to win their buy-in for obtaining logs.  Due to the adversarial relationship between IT Ops and the infosec department, the infosec team relied on the logs they could obtain easily, i.e. the ones from the systems they had ownership of – namely intrusion detection, firewall and anti-virus.  Now, everyone who works in information security and reads this blog knows just how effective these technologies are in 2015, so nothing was triggering the correlations on the SIEM platform (the customer had also just deployed the default content from the vendor, without tuning it to their available resources).  Funnily enough, the SIEM didn’t detect the large, very public breach that the customer then suffered.

Questions were asked by the CEO about why he wasn’t notified, and then why the SIEM product they’d spent so much money on had “failed”.  At least as a result of the incident the information security team got carte blanche access to whatever logs they wanted – great, right?  Well, no.  The small SOC then on-boarded every single log source they could lay their hands on using a bottom-up approach.  The result was chaos – masses of events bleeding into the console providing no answers to the contextual questions, and, overcompensating for not notifying the CEO of the original incident, the SOC team called him out-of-hours over a dozen times in one month over incidents they had panicked about because they hadn’t been able to truly understand what was happening.

SOC Mistake #7: On Use Cases, You Model Your Defences, Not Your Attackers

Security Information & Event Management platforms remain one of the most mis-sold, and mis-bought, information security products on the market.  There are several reasons for this:

  • People are hard to recruit, expensive to train and even more difficult to retain once you’ve trained them – as a result many SOC managers want to do as much as they can with technology in the belief that they’ll need less staff;
  • Many SOC Managers come from a technical background, where their focus has been on configuring a hardware or software platform – whether it’s an Intrusion Detection/Prevention System, data leak prevention, firewall or anti-virus solution.  As a result they have a natural technology bias;
  • SIEM is a competitive marketplace, with logging solutions moving up and entering the enterprise space.  While many of these solutions lack a solid correlation capability, their mantra is simplicity, ease of use and a low cost of initial implementation.  Many enterprise SIEM vendors’ reaction has been to oversimplify the reality of running a SIEM effectively in order to compete with these new challengers; and
  • Customers have budget to spend, but they often don’t want to undertake the fundamental information security activities required to be successful with a SIEM platform – identifying critical assets and modelling the threats against those assets; they just want to buy more blinky-boxes.  The benefits of investing in people and process improvements are hard to quantify to management, especially if you lack meaningful metrics and key performance indicators (more on that in another post), but if you have a physical server or appliance you can bring the management team down and show them ‘the box that goes ping’ (https://www.youtube.com/watch?v=arCITMfxvEc).

As a result, many of the Requests for Proposal we receive from our customers simply list a number of event sources and the Events Per Second (EPS) volume for each, along with a request for an architecture and a quote.  This bottom-up approach leads to Use Cases that model the IT infrastructure of your organisation, not the tools, techniques and procedures of your attackers.

Instead, a good security operations centre capability should start with the basics – what are you trying to protect, and against what?  This can normally be established by looking at your organisation’s risk assessment and threat models but, alas, many times these are either woefully inadequate or missing, which is why I normally choose to undertake a Use Case workshop.

Before I delve into the high-level details of what a Use Case workshop consists of, it’s worthwhile pointing out another benefit of conducting this workshop: stakeholder buy-in.


We normally see SIEM installations start with the systems that the information security department has control over – typically at least firewalls, anti-virus and Intrusion Detection/Prevention Systems.  While Intrusion Detection/Prevention Systems and anti-virus can provide some meaningful information, we live in a world where a significant proportion of data on the wire is encrypted, and unless you’re breaking out the encrypted traffic for inspection, the utility of Network-based Intrusion Detection/Prevention Systems has significantly diminished.  Likewise, motivated attackers are doing their research to find out which anti-malware solution you are using (maybe through the technologies listed on the profiles of your ex-employees on LinkedIn?) and maintain a list of which anti-malware solutions block which exploits by leveraging tools designed to be used by defenders, such as VirusTotal.

The reality is that to build good, solid use cases that stand the best chance of detecting the activities of malicious insiders and well-funded, skilled adversaries, you need more than firewalls, anti-virus and Intrusion Detection/Prevention Systems – you need logs from network devices, operating systems, applications, storage, identity & access management, Virtual Private Networks and even physical entry systems, and to get these logs you need the buy-in of your organisation’s Information Technology Operations group.  Collecting logs from these systems may have an operational impact on performance, while configuring and tuning the logging will certainly involve an element of manpower that will distract IT Ops from their day-to-day activities of keeping the business running.  It is no surprise, then, that when the security operations team turns up with demands for logs there may be resistance from IT Ops.  Involving the IT Operations team in the Use Case workshop is the single most important step you can take to bring the IT Ops and InfoSec departments together – the IT Operations team will get a more realistic understanding of the threats your organisation faces; may come up with innovative ideas for detection, response and automating workflow; and, most importantly, will feel a sense of ownership of the solution.

So you now know that IT Operations should have a seat at the table – but what is a Use Case Workshop?  The best place to start is: what is a Use Case?  There are differing definitions, but I normally use one heavily influenced by one of my old colleagues, Anton Chuvakin (now at Gartner): a use case is a description of how to detect a risk to your organisation and investigate it.  This is not simply a SIEM rule – it includes details of the risk you’re trying to mitigate; the log event sources required to detect that risk; the SIEM rule logic; and the workflow required to triage, prioritise and investigate the incident (plus contain, eradicate and recover, if incident response is also within the remit of the Security Operations Centre function).
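To make that definition concrete, here is a minimal sketch of how such a Use Case might be captured as a structured record; the field names are my own illustration of the elements listed above, not any vendor’s schema:

```python
# Hedged sketch: a Use Case captured as data, following the definition above.
# Field names and the example content are illustrative, not a SIEM product schema.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    risk_mitigated: str                                   # the business risk this use case addresses
    event_sources: list = field(default_factory=list)     # logs required to detect it
    rule_logic: str = ""                                   # the SIEM correlation logic
    triage_workflow: str = ""                               # how to triage, prioritise and investigate
    response_workflow: str = ""                             # contain/eradicate/recover, if in SOC remit

payroll_fraud = UseCase(
    name="Unauthorised change to payroll bank details",
    risk_mitigated="Fraudulent diversion of salary payments",
    event_sources=["HR application audit log", "identity management", "VPN", "database audit"],
    rule_logic="Bank detail change outside change window AND account with recent privilege change",
    triage_workflow="Confirm a change ticket exists; check requester identity and source IP context",
    response_workflow="Suspend account, revert change, notify payroll and incident response",
)
print(payroll_fraud.name)
```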

I normally start a Use Case Workshop by looking at the critical line-of-business applications in the organisation – what keeps the lights on and the money flooding in (or the troops, intelligence, communications and logistics directed at the mission objectives, in the case of military customers)?  Sometimes you can rely on previous risk assessments or the Business Impact Analysis from the business continuity plans; other times you need to start with the fundamentals, because although the organisation I’m working with may be about to embark on a journey to build a Security Operations Centre costing potentially millions of Euros a year, they don’t know what they’re protecting in the organisation.  It’s at this point I’ll normally involve the C-level in understanding the criticality of business units and lines-of-business at a high level, because the operational staff in each department naturally think that their business unit is the most critical, which ends in a ‘protect everything, equally’ mentality.  Once you’ve got some consensus on the prioritisation of what you’re trying to protect, you need to break these lines-of-business down into the technology (including data stores, networks, applications, encryption, access management and physical data centres), processes and people that make up that line-of-business.  Simply focusing on the technology – often the natural reaction of those in information security departments who’ve come up through the ranks as Intrusion Detection System or firewall administrators – will result in weak links in the protection of that line-of-business; you need to capture all the components required to keep it up and running.  The inclusion of the business units themselves, and the IT Operations department, here is critical.  Don’t expect them to be able to provide all of the needed information, however; it will require a skilled member of the information security department to guide the conversation.

One of the other advantages of starting with the line-of-business comes when you build management reports: by modelling the line-of-business you can align risk reporting with the core business functions, providing meaningful reporting on the impact of information security events on the ability of the organisation to conduct its business – the key to aligning information security with “the business”.


Once you know what makes up the end-to-end line-of-business, it’s time to start modelling the threats to it.  I typically start by looking at the threat actors themselves and asking what their motivations would be for attacking a particular line-of-business.  By addressing the attack motivation question you can prioritise building use cases that address the behaviours of those most likely to target that particular line-of-business.  There is an iterative process of looking at both motivation and means for each threat actor at this point:

Why would an insider attack an internal Human Resources system?  To get the telephone number of a co-worker he/she is infatuated with?  To get inside information ahead of pay-rise negotiations?  To give him/herself a pay rise?  To get the address of his manager so he can nail a dead cat to his door?  And why would a criminal gang look at accessing the Human Resources system?  To get the banking information of all your employees?  To look for the dates payroll is paid so they can target specific systems that have been compromised on that date?  Understanding who’s attacking you and what they are trying to do answers the Who? and Why? questions; the next important one to answer is How?

Then, looking at your line-of-business and all the components that form it, walk through different scenarios taking into consideration the intent and skill of the attacker.  Answering the Who? and Why? questions first stops you spending days designing a use case that is beyond the capability of that particular threat actor, or one where the pay-back wouldn’t be worth the effort for them to undertake – you’re trying to avoid being dragged down a metaphorical alley by “that guy” who, in business continuity planning meetings, wants to prepare a mitigation against alien invasion before a loss-of-power scenario.  Remember, at this stage you’re trying to prioritise Use Case development towards the most impactful and likely scenarios.  You can always leave the alien invasion scenario on the road map and address it in detail after you’ve covered the more pressing issues – you want to lay the foundational bricks at this stage and come back and mortar the cracks later; plus, many of the more complex and outlier use cases can normally leverage and reuse the content you’ve built for the more impactful and likely ones.


Modelling the How means analysing the attack methodology used by the adversary, typically across a kill chain or attack chain.  Many people have heard of the Lockheed-Martin Cyber Kill Chain, as it has been covered extensively since it was successfully used to thwart attacks by Advanced Persistent Threats, such as Operation Aurora.  Personally I don’t use the Lockheed-Martin kill chain, not just because they’ve patented it, but because it lacks granularity and needs to be broken down further to be useful – and the Weaponisation (sorry, “Weaponization”) stage typically takes place in a location where it cannot be observed, so it doesn’t lend itself to intrusion analysis.  The attack chain I use is based on 10 stages:

  1. Reconnaissance or suspicious traffic
  2. Delivery
  3. Exploitation/Policy Infraction
  4. Privilege Escalation
  5. Establish Persistence
  6. Command and Control
  7. Internal Reconnaissance
  8. Lateral Movement
  9. Data Extraction/Preparation
  10. Exfiltration

Not all scenarios will include all stages: simple policy infractions, such as browsing pornography from a work machine, can be identified at Stage 3, but if a user has escalated their privileges to work around a filter it may be detected at Stage 3 and Stage 4.  The point of the attack chain is to assist with modelling the end-to-end attack in a scenario and to identify which event sources could provide context at each stage.  I use the ISF Information Risk Assessment Methodology (IRAM) to provide the wider risk assessment framework – adversary characterisation, business impact and control selection.  The output of IRAM provides a good input into modelling the scenarios and event sources needed to establish the attack chain.
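To make that mapping exercise concrete, here is a minimal sketch of recording which event sources could give visibility at each stage and flagging the gaps; the pairings are illustrative examples, not a definitive coverage matrix:

```python
# Hedged sketch: map attack-chain stages to candidate event sources and flag
# stages with no visibility. The pairings are examples, not a complete matrix.
attack_chain_coverage = {
    "1. Reconnaissance":          ["firewall", "web server access logs", "IDS/IPS"],
    "2. Delivery":                ["email gateway", "web proxy", "IDS/IPS"],
    "3. Exploitation/Policy":     ["host anti-malware", "application logs", "web proxy"],
    "4. Privilege Escalation":    ["operating system audit logs", "identity management"],
    "5. Establish Persistence":   ["operating system audit logs", "configuration management"],
    "6. Command and Control":     ["DNS logs", "web proxy", "netflow"],
    "7. Internal Reconnaissance": ["netflow", "authentication logs", "IDS/IPS"],
    "8. Lateral Movement":        ["authentication logs", "VPN", "netflow"],
    "9. Data Extraction/Prep":    ["database audit logs", "file server audit logs"],
    "10. Exfiltration":           ["web proxy", "DNS logs", "data leak prevention"],
}

# Event sources already on-boarded into the SIEM (illustrative).
onboarded = {"firewall", "IDS/IPS", "host anti-malware", "web proxy"}

for stage, sources in attack_chain_coverage.items():
    if not onboarded.intersection(sources):
        print(f"No visibility at stage: {stage}")
```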

Now that you’ve established what event sources are available (from the control selection and system/application logging) and what actions the threat actor will take in each scenario, you’ve got the foundations for building your Use Cases.

Normally we create a Use Case matrix that captures all of the information you’ve gathered above, along with a Level of Effort estimate for implementing the Use Case.  This Level of Effort should include any effort needed to develop custom connectors to retrieve/receive, normalise and forward events (using HP ArcSight FlexConnectors, for instance).

This can then be used in conjunction with the risk assessment to do a cost/benefit analysis on the Level of Effort and select which Use Cases would provide the greatest value to the organisation’s detection and investigation capability.  At this point you can start to write the rules in the SIEM.
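As a rough illustration of that cost/benefit selection, here is a minimal sketch that ranks candidate Use Cases by estimated risk reduction per day of effort; the use cases, scores and figures are assumptions, not a prescribed scale:

```python
# Hedged sketch: rank candidate Use Cases by value per unit of implementation effort.
# The use cases, scores and effort estimates are illustrative assumptions.
use_cases = [
    # (name, risk_reduction_score 1-10, level_of_effort_days)
    ("Payroll bank detail tampering",      9, 15),
    ("Privilege escalation on HR systems", 7,  8),
    ("Pornography browsing policy breach", 2,  3),
    ("Data exfiltration over DNS",         8, 20),
]

ranked = sorted(use_cases, key=lambda uc: uc[1] / uc[2], reverse=True)
for name, risk, effort in ranked:
    print(f"{name}: value/effort = {risk / effort:.2f}")
```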

I’ll cover the effective triage and tuning of rules in another post at a later date.

SOC Mistake #8: You Don’t Speak the Language of Business, You Speak the Language of Security

This is by far one of the most common failings of Security Operations.  I’ve reviewed the maturity of several large global Security Operations Centres and they appear to be doing a reasonable job of the prediction, detection and investigation of information security incidents – but none of this is visible to the rest of the organisation who funds their operational budgets.

It is common to find someone who started life as an operational information security person, maybe originally a firewall or Intrusion Detection System administrator, whose career ultimately takes them to SOC Manager.  Their life has been steeped in the operational reports produced by technical controls such as firewalls, Intrusion Detection Systems and anti-virus solutions.  These reports are meaningful to them – although I’d argue about the contextual value you can gain from an individual control’s report – but count-based metrics such as ’37 unauthorised access attempts across the business’ or ‘300,000 blocked spam emails’ are pretty meaningless to senior management.

I’m reminded of Monty Python’s Spanish Inquisition scene, set in Jarrow in 1911, where Graham Chapman enters and says to the mill owner: “One on’t cross beams gone owt askew on treddle”, to which the mill owner, unaccustomed to both the regional dialect and the technical jargon, says “Pardon?“.

Graham Chapman’s character is looking to the mill owner for support and direction, but he’s presenting the problem in the operational language he understands.  If he’d said “A vital piece of the manufacturing equipment in our rail sleeper production has become mis-aligned halting production” instead of “I didn’t expect some kind of Spanish Inquisition” the problem would have got sorted and Cardinals Ximinez, Fang and Biggles would have never appeared.

It is the job of the Security Operations Centre, just like the rest of the information security function, to present meaningful decision-support management information around information risk to management, and it’s management’s responsibility to make risk decisions based on it.  If we never provide information in a language or format they can utilise, we’re always going to be seen as those strange people who live in the basement and occasionally come into the boardroom and start speaking something that sounds like Klingon to the C-level execs.

The other issue is that information security risk isn’t the only risk that companies need to consider, even if we do try to treat ourselves like a special little snowflake.  An organisation’s risk function has to balance liquidity risk, currency risk, supply chain risk, asset risk, competition risk, pricing risk and capital availability, to name but a few.  We’re often so wrapped up in our own little worlds, which are so important to what we do, that we vent when decisions don’t go “our way”, forgetting that the C-suite are running a company whose main business probably isn’t information security.

A typical SOC

A classic example happened to me.  I came into a budget meeting armed with a risk assessment and a budgeted control suite – the risk to the business was around 800K, the cost of controls was about 200K and the residual risk would have been around 200K.  “Job done,” the average quantitative risk wonk would say (it was a much more in-depth risk analysis than I am demonstrating here, for the sake of brevity), but the issue was that if that 200K was invested in three more sales people they’d bring in much more than 800K in revenue to the organisation, which at the time had a 98% customer retention rate and funded future growth and operational costs out of customer subscriptions.  When you took into account the balance of information security risk vs. opportunity risk, my project was a bad call.

So the presentation of risk in a language the business understands, and in a way that allows a normalised comparison with other forms of risk if you operate an Enterprise Risk Management framework, is one of the key success criteria for good security operations.

So what does good management information look like?  Well, financial metrics are a good place to start.  Everyone in the C-suite understands pounds, shillings and pence (excuse my pre-metric example; dollars and cents to my US friends).  Creating financial metrics has long been a difficult proposition, but there are several ways to do it.  Myself, I tend to map my SIEM event categories onto the VERIS framework.  This then lets me use the average cost and time-to-resolve metrics from the Verizon Data Breach Investigations Report, which I still consider to be one of the best yardsticks of what is going on in the wider world, to show my organisation’s performance against the average.
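As a rough sketch of that approach, the mapping and cost figures below are placeholders rather than actual DBIR numbers, but they show the mechanics of turning incident counts into a financial metric:

```python
# Hedged sketch: translate SIEM incident counts into financial metrics by
# mapping internal categories onto VERIS-style action categories.
# The mapping and the average costs are placeholders, NOT figures from the DBIR.
siem_to_veris = {
    "malware_outbreak": "Malware",
    "credential_misuse": "Hacking",
    "staff_data_mishandling": "Misuse",
}

avg_cost_per_incident = {   # illustrative numbers only
    "Malware": 15_000,
    "Hacking": 40_000,
    "Misuse": 25_000,
}

monthly_incidents = {"malware_outbreak": 4, "credential_misuse": 1, "staff_data_mishandling": 2}

total = 0
for category, count in monthly_incidents.items():
    veris_action = siem_to_veris[category]
    cost = count * avg_cost_per_incident[veris_action]
    total += cost
    print(f"{veris_action}: {count} incidents, estimated exposure £{cost:,}")
print(f"Total estimated monthly exposure: £{total:,}")
```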

The other is that it must have context: providing count-based metrics for the whole organisation doesn’t impart any information about which line-of-business assets are involved and what the potential bottom-line impact is to the business.  “37 unauthorised access attempts across Acme Corp” says one thing; “34 on cardholder processing systems”, “2 on bank transfer systems” and “1 on the customer relationship management system”, all of which are buried deep inside your infrastructure behind several layers of controls, says quite another.  I’m going to talk more about this in a later blog posting, so I am going to park this here for a while.

Another aspect of context is granularity, and this normally requires input from the analysts and incident responders, along with some form of established taxonomy for the more granular categorisation of incidents.  For instance, saying you’ve blocked “34 malware infections” says one thing; saying “24 malware infections were stopped at the host level and were detected by the Intrusion Detection System” and that, of the other 10, “5 exhibited DNS behaviour showing they attempted to connect over port 443 to external systems” and “5 encrypted the hard disks of systems in our payroll department just before payday”, says quite another.

It’s not just about the granularity, it’s also about the curation: helping the execs understand what the impact to the business is; giving advice on what they could do about it; identifying who the likely perpetrator is, based on the tools, techniques and procedures they are using, or at least providing an indicator of their capability; understanding when this started, whether it is part of a campaign or a single attack, and whether it is still ongoing; explaining how this occurred, what vulnerabilities the attacker exploited and how it could be prevented from happening again; and the most difficult, and often most important, question: why did this attacker attack us?  What were they after?

Having management information that allows information security risk to have a seat at the boardroom table with the rest of the functions that handle risk is a starting point.  Providing context to the C-level execs, enabling them to make informed risk decisions, helps move security operations from a reactive function to a proactive one.  When this is coupled with the topics I’m going to talk about in my next couple of postings – providing line-of-business metrics and using threat intelligence – we’re moving from a Jarrow accent to Received Pronunciation – although there’s now’t wrong with a Yorkshire accent, as my fiancée is from Hull ;)

SOC Mistake #9: You don’t tier your SOC staff

Security Information and Event Management (SIEM) platforms are all about turning the mass of raw events that occur in your organisation’s infrastructure into intelligence that can be assessed by analysts and incident responders to identify and react to information security incidents.

SIEMs, despite what the vendors will tell you, are not magic.  It will take you months to tune your ruleset to eliminate the bulk of false positives, and you’re probably working against a moving target: an increasing number of event sources, as well as continually having to adjust the rules to detect the new threats you’re facing.

To ensure the maximum use of your highly skilled, highly trained analysts, it is common to tier your analysts into at least two layers.

The initial layer is solely responsible (at least to start with) for the triage of incoming events: that is, the identification of false positives and ensuring appropriate prioritisation and escalation.

In an effective SOC, however, these Level 1 analysts are not simply “click-monkeys”: as well as triaging false positives, they should be doing some form of initial assessment so they can evaluate the potential impact and scope of the incident.  They should also be performing some form of adversary characterisation by evaluating where in the attack chain the event was detected (detection further down the chain, such as at the command and control or lateral movement stage, may imply that the attacker has conducted significant reconnaissance and crafted a specific exploit to be undetectable by your host or network Intrusion Detection/Prevention System – which implies a motivated and fairly skilled adversary) and, from their initial investigation, ascertaining the potential impact to the business.

Often the SIEM will have some form of prioritisation algorithm based on a number of factors, but only a human analyst can take all of the context into consideration (What is the skill level of the attacker?  Does the attacker exhibit known behaviour in their Tools, Techniques and Procedures (TTPs) that can assist with attribution?  What is the apparent intent of the attacker – disruption, theft, espionage?  Is this a one-off event or part of a sustained campaign?  Does the attack demonstrate investment of a lot of time or funds – the use of zero days, for instance?  What systems are affected and what line-of-business do they support?).
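For illustration, here is a minimal sketch of folding a few of those contextual factors into a single priority score; the weights and factors are my own assumptions, not any SIEM vendor’s algorithm:

```python
# Hedged sketch: a simple contextual priority score for a triaged event.
# Weights and factors are illustrative; real SIEM prioritisation differs by product.
def priority_score(event: dict) -> int:
    score = 0
    score += {"low": 1, "medium": 3, "high": 5}[event["attacker_skill"]]
    score += 3 if event["part_of_campaign"] else 0
    score += 4 if event["zero_day_suspected"] else 0
    score += {"none": 0, "support": 2, "critical": 5}[event["line_of_business_criticality"]]
    return score

example = {
    "attacker_skill": "high",
    "part_of_campaign": True,
    "zero_day_suspected": False,
    "line_of_business_criticality": "critical",
}
print(priority_score(example))  # -> 13 on this illustrative scale
```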

Only the events the Level 1 analyst assesses as real are escalated to the next level of more skilled analysts for a deeper level of investigation.  You can create specialisations at the Level 2, or above, layers to allow workflows that direct events of a certain category to specific analysts, or groups of analysts.  Some organisations have as many as three or four tiers of analysts, becoming gradually more skilled and specialised as you move up the chain.

Any false positives discovered by the analysts can be routed to content authors who can further tune the SIEM rules to try and prevent the false positive from occurring in the future.

The focus should be on making this process as efficient and repeatable as possible, while allowing the collection of metrics to support continual improvement.  For instance, in HP ArcSight, we create ActiveLists for a ‘triage channel’ and for ‘content needs tuning’.  As we’re largely automating this workflow, we can collect metrics on operational Key Performance Indicators such as time-to-triage, time-to-investigate, number of false positives per use case category, number of events escalated per analyst and number of incorrectly categorised false positives per analyst.  These metrics, combined, can help you achieve the right balance of efficiency and effectiveness.
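As an illustration of the kind of metrics collection I mean, here is a minimal sketch that computes time-to-triage and false-positive rate per use case from a hypothetical triage log; the record format is invented for the example, not an ArcSight export:

```python
# Hedged sketch: compute time-to-triage and false-positive rate per use case
# from a hypothetical triage log. The record format is invented for illustration.
from datetime import datetime
from collections import defaultdict

triage_log = [
    {"use_case": "DNS exfiltration", "raised": "2015-02-01T09:00", "triaged": "2015-02-01T09:20", "false_positive": True},
    {"use_case": "DNS exfiltration", "raised": "2015-02-01T11:00", "triaged": "2015-02-01T11:05", "false_positive": False},
    {"use_case": "Privilege escalation", "raised": "2015-02-01T12:00", "triaged": "2015-02-01T12:45", "false_positive": True},
]

stats = defaultdict(lambda: {"minutes": [], "fp": 0, "total": 0})
for rec in triage_log:
    delta = datetime.fromisoformat(rec["triaged"]) - datetime.fromisoformat(rec["raised"])
    s = stats[rec["use_case"]]
    s["minutes"].append(delta.total_seconds() / 60)
    s["total"] += 1
    s["fp"] += rec["false_positive"]

for use_case, s in stats.items():
    avg = sum(s["minutes"]) / len(s["minutes"])
    print(f"{use_case}: avg time-to-triage {avg:.0f} min, FP rate {s['fp'] / s['total']:.0%}")
```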

We’ve evaluated dozens of Security Operations Centres where all of the analysts are highly trained and all operate at a single tier.  They all randomly pick the events they wish to work on off the console and do their typical ‘deep dive’ investigation.  This causes several problems:

  1. It’s hard to maintain both the broad spectrum of investigatory skills needed to triage all event types and the deep level of specialisation needed to do a full investigation;
  2. Analysts may prefer to investigate specific categories of events, meaning that some event types may remain in the triage channel for extended periods of time;
  3. Having your highly-skilled analysts conduct the initial triage of false-positives is a bad use of their time; and
  4. Often Security Operations Centres find it really difficult to produce meaningful metrics on the overall performance of the capability, or individual analysts.

Implementing at least a two-tier system of triage/prioritisation and investigation can dramatically increase the performance of your Security Operations Centres.

 

 

5G/SOC Presentation at HP PROTECT Washington DC

I’ll be presenting session BB3055 “5G/SOC: How the world’s most advanced SOCs are leading the way” on Tuesday 5th September at 17:50 at HP PROTECT in Washington DC – talk about a graveyard shift!

“If we’ve learned anything from all the media attention given to data breaches in the past few years, it’s that no matter who you are, someone out there wants to steal your critical data. The type of data varies, but everyone has something worth stealing. Today’s mature SOC teams are incorporating new technologies, sharing information, and expanding their focus outside of the enterprise to include the modeling of attacker activities and personas. We are now entering the fifth generation of security operations, or what we like to call the 5G SOC. Hear more about the 5G SOCs of today–which monitor more than ever before–and how they change the focus from simply monitoring systems to monitoring the actors perpetrating the attacks. Benefit from 5G SOCs looking beyond their enterprises’ borders and tracking activities in social media, changes in global politics, and shifts in attacker economics in order to discover threats and act on them.”

SOC Mistake #10: You confuse your SOC with your NOC

Network Operations Centres (NOCs) are responsible for the operational monitoring of infrastructure and services. Their function is to identify, investigate, prioritise and escalate/resolve issues that could, or do, affect performance or availability. A Security Operations Centre (SOC) shares much in common with a NOC: its function is to identify, investigate, prioritise and escalate/resolve issues that could, or do, affect the security of an organisation’s information assets.

It is no surprise then that I am frequently asked by customers looking to build a SOC “Why can’t we use our NOC for this function?”. I can understand the motivation behind this question: once you’ve stood up your Security Information & Event Management (SIEM) platform, identified your use cases, got the right event sources feeding events into the SIEM and got your SOC procedures nailed, the largest cost of running a SOC is typically headcount.

There are, however, a few reasons why a combined SOC and NOC isn’t always a good idea:

1. They serve different, often conflicting, masters.

Within organisations there is often a conflict between operations and information security teams – information security want to pull the plug on a compromised server that happens to be hosting a critical service; they want vulnerabilities patched as soon as patches are available, often without fully testing the impact on operations; they can’t understand why dealing with an incident isn’t always the top priority for the operations team. Likewise, operations often stand up new pieces of infrastructure without notifying the security team or going through change control; they may not fully harden platforms prior to deployment in order to “meet a tight deadline” (“we’ll come back and patch it later”); they may not apply critical patches through lack of a testing environment.

The NOC is measured and compensated for its ability to meet Service Level Agreements (SLAs) for network and application availability, Mean Time Between Failures and application response time. In contrast, SOCs are measured on how well they protect against malware, protect intellectual property and customer data, and ensure that corporate information assets aren’t misused. The business driver behind both of these is to manage business risk – in a NOC, for instance, the loss of revenue or compensation for breach of an SLA; in a SOC, regulatory fines or loss of customer confidence.

NOCs are about availability and performance, SOCs are about security. Even with the best intentions, having the team responsible for availability and performance make decisions about incident response and the application of controls that will, invariably, impact on the availability and performance of services (even if it is just through the diversion of human resources), is never going to work well.

NOCs and SOCs certainly should be in close co-ordination. One of the best ways of achieving this is to ensure the NOC has a view of the SIEM platform. I’ve seen SOCs react to “large-scale Distributed Denial of Service attacks” that turned out to be legitimate traffic after the launch of a new service, and I’ve seen subtle patterns detected by alert NOC analysts result in uncovering wide-scale penetrations within organisations. When it comes to actually responding to a confirmed incident, operations and information security must work hand-in-hand to investigate, contain, eradicate and recover from the attack with appropriate and proportionate responses. Working together collaboratively as part of an incident response team, a SOC and NOC help ensure the right balance.

A well-implemented collaboration strategy between a NOC and SOC should establish that the SOC’s function is to analyse security issues and recommend fixes, and that the NOC then analyses the impact of those fixes on the business, makes recommendations on whether to apply them, makes the appropriate approved changes and then documents those changes.

2. The skills needed in, and the responses required from, a NOC analyst and a SOC analyst are vastly different

NOC analysts require a proficiency in network, systems and application engineering, whereas SOC analysts require skills in security engineering. The tools and processes used for monitoring and investigating events also differ, as does the interpretation of the data they produce: a NOC analyst may interpret a device outage as an indicator of hardware failure, while a SOC analyst may interpret that same event as evidence of a compromised device. Likewise, using the example I gave above, high bandwidth utilisation will cause the NOC to take steps to ensure availability; in contrast, the SOC may first question the cause of the traffic spike, the reputation of its origin and correlations against other known attacks.

One of the biggest differences between a SOC and a NOC is that a SOC is looking for “intelligent adversaries” as opposed to naturally occurring system events such as network outages, system crashes and disk failures. While these naturally occurring system events can, in fact, be caused by the actions of “intelligent adversaries”, the NOC’s concern is the restoration of service quality as soon as possible – even if this involves the destruction of evidence that would allow the cause to be investigated.

3. Staff attrition is waaaaaay worse in a SOC

Level 1 SOC Analysts – those responsible for the triage of incoming events – burn out with alarming regularity. The average tenure of a Level 1 SOC Analyst is typically less than two years, and annual turnover can be as high as 20%. In contrast, the tenure and turnover of NOC staff is typically much better.

This attrition within a SOC needs to be planned for, with a suitable feeder pool of new candidates and an effective on-boarding training scheme to teach them about the use of the SIEM platform, the analytical skills needed to investigate incidents, and internal procedures. Developing a career progression plan for your analysts will also allow you to retain these valuable resources within your business, potentially moving them to security engineering or incident response positions.

Despite everything I’ve said above, it is possible to run an effective combined SOC/NOC, but it can take more effort, operational expense and better governance than running them as separate functions. The potential benefits lie in the introduction of a single point of contact for all security and operational issues, as well as the tight integration between those who discover and react to information security incidents and those who have to deploy and manage the mitigations post-event. Whether you choose to keep the functions separate or integrate them, it is important to understand the differences between them.