No Time for False Alarms

Impacts of false positives by security software


Most of us know what it's like to have our flow interrupted by a task that turns out to be fruitless. Devs experience this when they find out that the allegedly insecure source code they were urged to review is actually secure. But annoyance is only one negative impact of false positives: wasted time is lost money, and when alarms pour in constantly, the security team may end up neglecting real cybersecurity risks.

What are false positives in cybersecurity?

When software products or systems are tested for security vulnerabilities, be it by a tool or a hacker, ideally a report is created to inform of the findings. A false positive is an erroneous report of the presence of a security vulnerability: what the tool or the human perceived as a security issue in the part of the system it evaluated is actually not one.

As my editorial partner, Felipe Ruiz, put it briefly in an as-yet-unpublished document:

[…​] a false positive is when the doctor tells you, for example, that you have liver disease, when in fact you do not. Their perception of that disease is illusory, and they give you a diagnosis labeled "positive," which is false, i.e., a false positive.

A true positive, on the other hand, is a correct report of the existence of a vulnerability. Providing a low rate of false positives (FPs) and a high rate of true positives are main goals for application security solutions. Still, evidence shows that the performance of many of them, or at least of their tools, leaves much to be desired. For example, commercial tools achieved an average true positive rate of only 26% when finding vulnerabilities in a Java web application proposed by the Open Worldwide Application Security Project (OWASP).
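To make these two rates concrete, here is a minimal sketch of how they are computed from a tool's results against a known ground truth. The function name and the example counts are hypothetical, chosen only so the true positive rate matches the 26% figure above:

```python
# Toy scorer for a security tool's detection performance, assuming the
# ground truth (which findings are real) is known for every test case.
def detection_rates(true_pos, false_pos, true_neg, false_neg):
    """Return (true positive rate, false positive rate) as fractions."""
    tpr = true_pos / (true_pos + false_neg)   # share of real flaws the tool found
    fpr = false_pos / (false_pos + true_neg)  # share of safe code the tool flagged
    return tpr, fpr

# Hypothetical example: the tool flags 26 of 100 real vulnerabilities
# and wrongly flags 10 of 100 safe code sections.
tpr, fpr = detection_rates(true_pos=26, false_pos=10, true_neg=90, false_neg=74)
print(f"TPR: {tpr:.0%}, FPR: {fpr:.0%}")  # TPR: 26%, FPR: 10%
```

An ideal solution pushes the first number toward 100% while keeping the second near 0%.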

Some tools may be more accurate at finding some types of vulnerabilities than others, so they may be used together to complement each other. Still, organizations should stay away from piling up tools: a 2020 study showed that organizations using more than 50 tools ranked themselves lower in their ability to detect an attack (8% lower) and to respond to one (7% lower). That is apart from the headache of having to orchestrate all those tools.

Negative impacts of false positives

The main problem with false positives is that devs and the security team waste time and effort looking into supposed vulnerabilities. I started this blog post describing how that feels. In a 2018 survey of more than 1,000 developers in the U.S., the UK, France, Germany and Singapore, "changing priorities resulting in discarded code or time wasted" was rated 79/100 for its negative impact on the personal morale of devs, second only to "work overload" (81/100).

As for the time wasted inspecting false positives, it appears that devs have not been asked directly. A report from two years ago did ask 291 directors of firms using managed detection and response services in the U.S. Those in organizations of 500 to 1,499 employees said investigating a false positive took around 25 minutes, one minute more than investigating a true positive. Respondents at the largest companies were the most affected, spending about 32 minutes on each false positive. The pattern across firms was that roughly as much time was spent on false positives as on true positives.


And it's not like false positives are rare occurrences. In an international study last year, around 60% of IT professionals said they got over 500 cloud security alerts daily, and around half of the respondents said more than 40% of those alerts were false positives. I'm citing different studies in this section, but you get the gist: false positives are eating too big a slice of people's time.
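Combining the figures cited above gives a rough sense of the scale. This is an illustrative back-of-envelope estimate only; it assumes the 500-alerts-per-day, 40%-false-positive and 25-minutes-per-investigation numbers from the different studies can be applied to a single hypothetical organization:

```python
# Back-of-envelope estimate (illustrative assumption, not a study result):
# daily analyst time lost to false positives at a hypothetical organization.
alerts_per_day = 500          # cloud security alerts received daily
false_positive_share = 0.40   # share of alerts that are false positives
minutes_per_fp = 25           # minutes spent investigating each one

minutes_wasted = alerts_per_day * false_positive_share * minutes_per_fp
hours_wasted = minutes_wasted / 60
print(f"~{hours_wasted:.0f} analyst hours per day spent on false positives")
```

Even under these rough assumptions, that is on the order of 80+ analyst hours per day, i.e., roughly ten full-time analysts doing nothing but chasing false alarms.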

Then, how much money is going to waste here? No one seems to have put a figure on it yet. However, the annual cost of manual alert triage in the U.S. has been estimated at $3.3 billion, so an upsettingly large sum is likely dedicated to false alarms.

Organizations must keep investigating the alarms, though. The losses can be dramatic if they fail to detect and respond to an actual threat before criminals exploit the underlying vulnerability (a data breach now costs an average of $4.45 million). However, the alarms can be so many that organizations can't keep up. The aforementioned study found that organizations of 500 to 1,499 employees did not address 27% of the alerts they received, and larger companies reported a similar percentage. Alert volume must be reduced, and, specifically, false positives.

How to reduce the impacts of false positives

First off, organizations should look for security testing solutions that can provide evidence of a minimal false positive rate. Tool vendors, for their part, should improve the quality of their products' alerts. Indeed, interviews with security operations center (SOC) practitioners have helped identify what needs to change, as practitioners have found alerts to be "unreliable, difficult to interpret, and lacking in the context needed by analysts to filter FPs from genuine alarms." Therefore, the authors of that interview study suggest that alerts should be as follows:

  • Reliable: Come from methods that accurately find vulnerabilities and are continually refined.

  • Explainable: Offer comprehensible information as to why they raise an alarm.

  • Contextual: Take into account characteristics specific to the organization being assessed.

Fluid Attacks offers minimal false positive rates

Our Continuous Hacking involves accurate security testing and remediation recommendations to help companies secure their software products. Our tool achieved a true positive rate of 100% and a false positive rate of 0% in the OWASP Benchmark v1.2. Moreover, our flagship plan includes manual reviews by our hacking team, which not only finds vulnerabilities that tools can't detect but also examines the findings to discard any false positives.
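For context, the OWASP Benchmark condenses a tool's accuracy into a single score: the true positive rate minus the false positive rate, both as fractions. A quick sketch of that formula:

```python
# OWASP Benchmark score: true positive rate minus false positive rate.
# A perfect tool scores 1.0; a tool that flags real and safe code at the
# same rate scores 0.0, no better than random guessing.
def benchmark_score(tpr, fpr):
    return tpr - fpr

print(benchmark_score(tpr=1.00, fpr=0.00))  # perfect detection: 1.0
print(benchmark_score(tpr=0.26, fpr=0.26))  # no better than guessing: 0.0
```

Under this scoring, a 100% true positive rate with 0% false positives is the maximum possible result.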

Click here to sign up for a free trial of our tool.
