"We're drowning in security vulnerability reports — enough to make you want to hide under the covers!" These words could easily be ascribed to many members of organizations' development and security teams committed to cybersecurity. Quite understandably so. Even small projects can receive hundreds of reports requiring teams to spend enormous amounts of time and effort to understand and solve. As a consequence, it becomes normal for many vulnerabilities to remain unfixed. However, these vulnerabilities should be the ones that pose the least risk, not the others, as can sometimes happen. Hence comes the need to prioritize them.
Metrics such as the CVSS score (as well as the CVSSF by Fluid Attacks) and EPSS have emerged over the years to help order vulnerabilities by importance for management and remediation. They have helped teams avoid delays and use their resources efficiently. However, this is still not enough: high-priority security issues, judged by the risk exposure they represent, continue to get lost in the reporting noise.
This problem is especially evident with vulnerabilities in third-party software components. It is commonly estimated that around 80% of the source code of an average modern application consists of open-source packages or libraries developed by third parties. The number of vulnerabilities coming from these components can therefore be very high. In fact, the "use of software with known vulnerabilities" has been the type of security problem we have reported the most at Fluid Attacks in recent years (see State of Attacks 2024). Nonetheless, the mere presence of a vulnerable third-party component in an application does not always mean the application is vulnerable.
When an assessment such as SCA (software composition analysis) is carried out on an application, an SBOM (software bill of materials) is generated to list in detail not only the components explicitly declared as direct dependencies but also the sub-dependencies, or transitive dependencies, over which developers may have no control and of which they may even be unaware. Automated tools then correlate these dependency trees against massive vulnerability databases, and the risks linked to the third-party components used in the application, including vulnerabilities, malicious code, and licensing issues, are reported and prioritized. Usually, however, there is no analysis of how each vulnerable component is used in the application; some of them may not even be executed. This is where reachability analysis comes into play to help prioritize security issues.
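As an illustration, below is a minimal, hypothetical excerpt of an npm package-lock.json (the package name report-helper is made up). Here, lodash is a direct dependency, while minimist enters the tree only transitively through report-helper, exactly the kind of dependency developers can be unaware of:

```json
{
  "name": "example-app",
  "lockfileVersion": 3,
  "packages": {
    "": {
      "dependencies": {
        "lodash": "^4.17.20",
        "report-helper": "^1.0.0"
      }
    },
    "node_modules/lodash": { "version": "4.17.20" },
    "node_modules/report-helper": {
      "version": "1.0.0",
      "dependencies": { "minimist": "^1.2.5" }
    },
    "node_modules/minimist": { "version": "1.2.5" }
  }
}
```

An SCA tool would flag minimist 1.2.5 here (it is affected by a known prototype pollution issue) even though it never appears in the application's own manifest.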
Understanding reachability analysis
Reachability analysis is a software security assessment technique that determines whether attackers can exploit a vulnerability within a third-party component or container image used by a specific application. Taking a granular approach, reachability analysis pinpoints the exact parts of the components used by particular portions of an application's code. This involves tracing execution paths to identify whether the components' specific functions or code segments containing vulnerabilities are invoked within the application's source code and context. It is worth noting that a software package may have multiple vulnerabilities, yet often only a subset of its functions (some vulnerable, some not) is employed by the application in question. It has even been estimated that "only 10% to 20% of imported code is typically used by a specific application."
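To make this concrete, here is a minimal TypeScript sketch. It assumes an application pinned to lodash 4.17.20, a version affected by CVE-2021-23337 (command injection via the template function); because the application only ever calls chunk, that vulnerability is never reached from its code:

```typescript
// Hypothetical application module. lodash@4.17.20 is affected by
// CVE-2021-23337 (command injection via _.template), but this code
// never imports or calls the vulnerable function.
import chunk from "lodash/chunk";

export function paginate<T>(items: T[], pageSize: number): T[][] {
  // Only lodash's chunk() is ever invoked; template() is not on any
  // execution path, so the CVE is not reachable from here.
  return chunk(items, pageSize);
}
```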
By understanding how vulnerable functions, classes, modules, etc., are utilized, organizations can distinguish between theoretical and "real-world" risks, thereby reducing the noise associated with vulnerability reports. This means, for example, that a finding previously treated as a vulnerability affecting an application may turn out to be irrelevant in its context; continuing to report it as a risk would amount to a false positive. This precision is particularly valuable when dealing with complex software systems that rely on numerous third-party components. By prioritizing the vulnerabilities that are actually reachable, security teams can optimize their remediation efforts and reduce the overall attack surface.
Reachability analysis can extend beyond direct dependencies to encompass transitive dependencies. This means that the assessment delves into the intricate web of components or tree of dependencies that make up a software application, including those that are several layers deep. By considering these indirect relationships, organizations can identify vulnerabilities that might otherwise go unnoticed. This comprehensive approach is essential for modern applications relying on vast open-source component ecosystems.
Reachability analysis methods
Security teams typically employ scanners to perform reachability analysis. However, manual expert review can also provide valuable insights. There are two primary approaches to reachability analysis: static and dynamic analysis. Each method offers distinct advantages and challenges, and a combined approach is often considered the most effective way to identify and mitigate risks.
Static reachability analysis
Static analysis examines an application's codebase without executing it. By analyzing the source code, security teams can determine whether vulnerable libraries are loaded or invoked from specific parts of the application, building what are known as call graphs or similar representations. This method is valuable for early integration into the software development lifecycle (SDLC), enabling the identification of reachable vulnerabilities before they reach production.
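Conceptually, static reachability boils down to a graph traversal: starting from the application's entry points, follow the "may call" edges of the call graph and check whether any path leads to a known vulnerable function. The following TypeScript sketch, using a toy, hand-built call graph, illustrates the idea:

```typescript
// Minimal sketch of static reachability as graph traversal.
// Nodes are fully qualified function names; an edge A -> B means
// "A may call B".
type CallGraph = Map<string, string[]>;

function isReachable(
  graph: CallGraph,
  entryPoints: string[],
  vulnerableFn: string,
): boolean {
  const seen = new Set<string>(entryPoints);
  const queue = [...entryPoints];
  while (queue.length > 0) {
    const fn = queue.shift()!;
    if (fn === vulnerableFn) return true;
    for (const callee of graph.get(fn) ?? []) {
      if (!seen.has(callee)) {
        seen.add(callee);
        queue.push(callee);
      }
    }
  }
  return false;
}

// Toy graph: main() calls paginate(), which calls lodash.chunk.
// lodash.template exists in the package but is never called.
const graph: CallGraph = new Map([
  ["app.main", ["app.paginate"]],
  ["app.paginate", ["lodash.chunk"]],
]);

console.log(isReachable(graph, ["app.main"], "lodash.template")); // false
console.log(isReachable(graph, ["app.main"], "lodash.chunk"));    // true
```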
However, static analysis has limitations. It may miss calls or executions of vulnerable elements that only manifest at runtime, such as those triggered by user input or specific environmental conditions. Moreover, it may struggle to account for complex runtime behaviors influenced by factors like configuration settings and data fetched from external sources.
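Dynamic dispatch is a simple example of this limitation. In the hypothetical TypeScript snippet below, the lodash function to invoke is chosen from a string at runtime, so a static call graph cannot tell whether the vulnerable template function is ever among the possible targets:

```typescript
import _ from "lodash";

// The lodash function to run is picked from a string at runtime
// (e.g., from user input or configuration). A static call graph has
// no single concrete callee to resolve here.
function applyHelper(helperName: string, input: string): unknown {
  const helper = _[helperName as keyof typeof _];
  if (typeof helper !== "function") throw new Error("unknown helper");
  return (helper as (arg: string) => unknown)(input);
}

// applyHelper("template", attackerString) would reach the function
// affected by CVE-2021-23337, but only runtime observation (or a very
// conservative static model) would reveal it.
```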
Dynamic reachability analysis
Dynamic analysis focuses on the runtime behavior of an application. By evaluating the application as it executes, security teams can observe which vulnerable components are actively being used in specific environments and under particular conditions. This approach is effective at reducing false positives, as it can show that certain vulnerable code is never exercised in practice. For example, a vulnerable library might be loaded into an application while its vulnerable functions are never called during regular operation. Dynamic analysis can sometimes also detect behavioral anomalies, such as unauthorized file access or network connections.
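As a rough sketch of the idea (real dynamic analysis tools rely on far more robust instrumentation), the following TypeScript snippet wraps lodash in a Proxy that records which of its functions the application looks up while running; the observed set can then be compared against the functions known to be vulnerable:

```typescript
import _ from "lodash";

// Record each lodash member the application looks up at runtime.
// A Proxy on property access is a cheap approximation of call tracing.
const observed = new Set<string>();

const instrumentedLodash = new Proxy(_, {
  get(target, prop, receiver) {
    const value = Reflect.get(target, prop, receiver);
    if (typeof value === "function") observed.add(String(prop));
    return value;
  },
});

// Exercise the application (tests, staging traffic) through the
// instrumented module, then compare against known vulnerable functions.
instrumentedLodash.chunk([1, 2, 3, 4], 2);

console.log(observed.has("chunk"));    // true: used at runtime
console.log(observed.has("template")); // false: never observed
```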
However, dynamic analysis faces challenges such as limited coverage due to the difficulty of exploring all possible execution paths, the high computational cost of evaluating numerous user input combinations, the performance overhead introduced by analysis instrumentation, and the management of large volumes of generated data.
Combining static and dynamic analysis
A combined approach that leverages both static and dynamic analysis provides a more comprehensive understanding of an application's security posture. Static analysis can serve as a preliminary assessment to identify real and potential risks. Dynamic analysis can then validate some of these findings and leverage the runtime context of the application to detect additional risks. By combining these methods, security teams can more accurately prioritize remediation efforts and reduce the risk of vulnerability exploitation in cyberattacks.
Some challenges of reachability analysis
Reachability analysis faces significant challenges when applied to large-scale projects. As the size and complexity of a codebase increase, so do the demand for computational (and human) resources and the risk of errors or omissions. Balancing precision and performance becomes paramount: highly accurate analysis tools can be prohibitively expensive, especially for big projects, while cost-cutting measures that prioritize speed over accuracy can lead to higher rates of false positives and false negatives. Additionally, the presence of obfuscated or proprietary code within third-party dependencies can hinder analysis efforts.
Furthermore, the dynamic nature of software development introduces ongoing challenges for reachability analysis. As components are updated, the reachability of vulnerabilities can change, potentially creating new risks or invalidating previous findings. This underscores the need for continuous analysis and regular recalibration of analysis tools to accommodate evolving technologies and threats. It's important to note that a negative reachability result does not definitively mean a vulnerability is not exploitable. There is always a possibility that the analysis tool may have limitations or that the vulnerability may be reachable under specific, yet undetected, conditions.
Reachability analysis with Fluid Attacks
At Fluid Attacks, we aim to help you prioritize and focus your remediation efforts on the vulnerabilities that pose the most significant risk to your applications and infrastructure. While reachability analysis can be performed on its own, it should ideally be part of a broader ASPM (application security posture management) framework, including RBVM (risk-based vulnerability management) and SSCS (software supply chain security). That's how we do it at Fluid Attacks within our all-in-one solution, Continuous Hacking. We provide comprehensive vulnerability reporting, incorporating advanced reachability analysis to help you understand which vulnerabilities are truly exploitable in your specific context.
We integrate static reachability analysis directly into your SDLC, enabling early detection and remediation of vulnerabilities and preventing them from escalating into more serious and costly issues. By continuously monitoring your codebase, we can identify vulnerable components and their usage within your applications. This analysis is a core part of our SCA and SBOM functions, which provide a detailed view of your dependencies and associated risks. Our team of security researchers continuously reviews databases of vulnerability advisories in third-party components and, according to their specific contexts and conditions, develops custom rules for our tool to identify their use in your code automatically. This ensures that our analysis is accurate and up-to-date.
Our focus on reachability analysis centers on providing highly reliable results. We prioritize vulnerabilities that are definitely accessible within your application rather than offering ambiguous labels like "unreachable." This means that if a vulnerability is not marked as "reachable," it is not because we have dismissed the possibility of it being exploitable but rather because the evidence is not sufficiently strong to classify it as such. By avoiding labels like "unreachable," we prevent users from misinterpreting the results and creating a false sense of security.
Within our platform, we help you prioritize vulnerabilities based on a combination of factors, including reachability, exploitability, and the potential impacts on confidentiality, integrity, and availability. For instance, consider a scenario where a third-party component in your dependency list has vulnerabilities with high CVSS scores and high probabilities of exploitation (EPSS). While these factors indicate risky vulnerabilities, reachability analysis provides additional context: if the vulnerable functions within this component are actually being called in your application, those vulnerabilities are prioritized over similar ones that, despite having high CVSS and EPSS scores, cannot be directly exploited in your specific codebase. This prioritization ensures you focus on vulnerabilities that pose an immediate and actionable risk.
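As a purely illustrative sketch, and not Fluid Attacks' actual scoring model, the following TypeScript snippet shows how reachability can outweigh raw CVSS and EPSS values when ranking findings; the tenfold boost for reachable findings is an arbitrary assumption:

```typescript
// Toy prioritization: reachability outranks raw severity scores.
interface Finding {
  id: string;
  cvss: number;       // severity, 0-10
  epss: number;       // exploitation probability, 0-1
  reachable: boolean; // vulnerable function confirmed called in this codebase
}

function priority(f: Finding): number {
  const base = f.cvss * f.epss;
  return f.reachable ? base * 10 : base; // assumed tenfold boost when reachable
}

const findings: Finding[] = [
  { id: "CVE-in-unused-code", cvss: 9.8, epss: 0.9, reachable: false },
  { id: "CVE-in-called-code", cvss: 7.5, epss: 0.4, reachable: true },
];

findings.sort((a, b) => priority(b) - priority(a));
console.log(findings.map((f) => f.id));
// ["CVE-in-called-code", "CVE-in-unused-code"]: the reachable finding
// comes first despite its lower CVSS and EPSS values.
```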
This is how we show the results of the reachability analysis within our platform:
As an example, if we click on the first dependency (lodash), we can see its location in the versioning file that lists the dependencies (package-lock.json) and the vulnerability we identified as reachable:
We can see this vulnerability marked with a warning sign among the others associated with lodash when we click on the View details link on the right side of the main table:
When we follow the vulnerability link shown in the second screenshot, we can see which of our files calls the vulnerable, reachable lodash function and at which location, among other information:
This granularity level allows us to quickly identify and address the root cause of each security issue.
While we are proud of our current capabilities, we recognize that there is always room for improvement. Our roadmap includes expanding our reachability analysis to identify more CVEs and go beyond tracking them only in direct dependencies. By continuously investing in research and development, we aim to provide our customers with the most comprehensive and accurate security assessments.
If you would like to try our Continuous Hacking solution on the Essential plan for 21 days for free, please follow this link.
Note: I especially want to thank security developers Julián Gómez and Luis Saavedra for the information they provided, which was helpful in creating this blog post.