How to transform the security model for continuous improvement and better results


The incident model has dominated the security community from the start. It is at the heart of security products and training, shaping the way we talk about our work. The model gives us practical ways to discuss how we define an incident, how the team comes together to resolve it, and how to measure incident count and resolution time consistently. It’s rooted in history and works across disciplines: watching firefighters respond to a fire alarm reveals a similar approach.

But just as fire alarms do not prevent fires (they only limit damage and risk to life after a fire starts), a security model built around incidents will not prevent security incidents. We need something better.

Incident response across industries

In over 130 years of research and effort to improve industrial safety, one of the most ubiquitous results to emerge is the “x days since accident” sign, which has found a life in pop culture through media and memes.

The signs and memes reverse typical incident-reporting statistics in that they emphasize the positive case – a count of incident-free time – rather than the equally correct cumulative statistic of the total number of incidents over time. While many organizations have undoubtedly reported total incident counts, as well as the time it takes to recover from various incidents, the signs reflect a focus on improving safety outcomes and reducing the total number of incidents.

Shortcomings of the current incident response model

Despite a decade of efforts to “shift left,” incident response in the software security industry is grossly inadequate. A security model built around incidents will not prevent security incidents. Moreover, the standard model by which incidents are tracked is itself flawed. Here’s why:

  • The standard incident graphs do not indicate whether developer activity has increased or decreased.
  • The graphs do not indicate whether the incident rate has accelerated or slowed relative to the activity rate.
  • The charts do not show how many tracked incidents have been fixed or are in the process of being fixed.
  • Incident models do not show how quickly incidents are resolved, or the mean time to repair (MTTR).

And what about the incidents included in these charts that were false positives? Incident trend charts do not measure actual security outcomes.
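As a minimal sketch of what a more complete picture might look like, the missing variables above can be computed from basic incident records. All field names and sample data here are hypothetical, invented for illustration, and not drawn from any real security tool:

```python
from datetime import datetime

# Hypothetical incident records: detection/resolution timestamps plus a
# false-positive flag. These field names are illustrative only.
incidents = [
    {"detected": datetime(2022, 3, 1), "resolved": datetime(2022, 3, 3), "false_positive": False},
    {"detected": datetime(2022, 3, 5), "resolved": None, "false_positive": False},
    {"detected": datetime(2022, 3, 8), "resolved": datetime(2022, 3, 8), "false_positive": True},
]
commits_this_period = 450  # a crude proxy for developer activity

# Exclude false positives, which raw incident counts quietly include.
real = [i for i in incidents if not i["false_positive"]]
resolved = [i for i in real if i["resolved"] is not None]

# Incident rate normalized by activity, rather than a raw count.
incidents_per_1k_commits = len(real) / commits_this_period * 1000

# Mean time to repair (MTTR), computed over resolved incidents only.
repair_days = [(i["resolved"] - i["detected"]).days for i in resolved]
mttr_days = sum(repair_days) / len(repair_days) if repair_days else None

# Share of tracked incidents that have actually been fixed.
fix_rate = len(resolved) / len(real) if real else None
```

Even this toy example surfaces information the standard incident chart hides: how many reported incidents were real, how fast they were repaired, and how the incident rate tracks developer activity.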

By not including such variables, we are left with a largely incomplete picture of the effectiveness of a company’s current security measures. This is a problem because we need to see the big picture to understand where we can make improvements. While part of seeing the big picture involves tracking insightful metrics, it also requires adjusting the language we use.

How we define when an incident is material

While some tools refer to “incidents,” others call them “violations.” We could just as well call incidents “opportunities for improvement.” Labeling everything a “violation” turns common development activities into punishable offenses and ignores the likelihood of false positives. The label also implies that every violation must be eliminated, but that’s not always the case. And when software offers only negative feedback, it ignores a century of research on improving learning outcomes.

In most cases, detecting a security issue is relatively easy compared to fixing it. Research published in the ACM shows that how and when security issues are reported to developers can make a big difference to the outcome.

The researchers moved toward a “diff time” deployment, in which analyzers participate in code review as bots, making automatic comments when an engineer submits a code change. Issues reported at diff time saw a 70% fix rate, while a more traditional “offline” or “batch” review, in which bug lists are presented to engineers outside their workflow, saw a fix rate of 0%.

The ACM paper later notes that the poor fix rate for issues presented to developers in batches occurred despite a false positive rate of less than 5%.
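The diff-time idea described above can be sketched in a few lines: instead of emitting a batch report, a review bot surfaces only the analyzer findings that touch lines changed in the current diff. Everything here (the `Finding` type, the data, and the filtering helper) is a hypothetical illustration, not the API of any real review system:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One analyzer result: a hypothetical, minimal shape for illustration."""
    path: str
    line: int
    message: str

def diff_time_findings(findings, changed_lines):
    """Keep only findings on lines touched by the current diff.

    changed_lines maps file path -> set of line numbers in the diff.
    """
    return [f for f in findings if f.line in changed_lines.get(f.path, set())]

findings = [
    Finding("app.py", 10, "hardcoded credential"),
    Finding("app.py", 80, "weak hash"),  # pre-existing issue, untouched by this diff
]
changed = {"app.py": {8, 9, 10, 11}}

for f in diff_time_findings(findings, changed):
    # A real bot would call the code-review system's comment API here.
    print(f"{f.path}:{f.line}: {f.message}")
```

The design choice is the point: by commenting only on code the engineer is already looking at, the feedback arrives inside the workflow, which is what the research associates with the dramatically higher fix rate.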

Transform the security model for better results

There’s also the unfortunate assumption that developers don’t care about security, when that’s far from the truth. Developers are tasked with turning business goals into valuable results that drive growth, often with too little time or too few resources to do the job “right.” Developers know they’re shipping imperfect solutions, but they’ve also internalized the idea that “done is better than perfect.” For developers, “done” means something the customer can use and that (hopefully) drives business growth. While it’s painful to make the trade-offs necessary to meet deadlines, every effective developer also cares deeply about the accuracy, quality, and security of their code.

As professionals tasked with balancing business requirements and delivering value, developers are also passionate about the quality and effectiveness of the tools they use to achieve those goals. Unfortunately, most tools designed to solve security problems neither respect nor address the challenges developers face. To continuously improve security outcomes, security processes must work with developers, not against them.

After all, most people probably wouldn’t feel motivated if other departments were constantly criticizing their work or trying to implement the use of unfamiliar tools that prevented them from completing their work efficiently. But that’s not the only problem with the way most organizations think about security.

As security professionals, we must recognize the lesson here: even the most accurate code security detection system is worthless without a workflow that presents feedback to developers in a way that leads to fixes. Security outcomes are not improved by detection alone.

Security models should highlight ways to improve and bring about positive change. Often these changes are as simple as incorporating new ways of representing data into security models, all to better prepare security teams to continuously improve and achieve better results.

Casey Bisson, Product and Developer Relations Manager, BluBracket
