Addressing the Disconnect in External Attack Surface Awareness

By Jason Kaplan, Chief Executive Officer, SixMap

External vulnerability scans have become a staple in the cybersecurity toolkit of most organizations. Like a penetration test, external scans are designed to discover open ports and internet-exposed assets, including websites, servers, APIs, and other network endpoints, to help identify vulnerabilities and potential entry points in an organization’s external infrastructure.
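At its core, that discovery step is simply checking which ports on a host accept connections. Here is a minimal sketch of the idea in Python, using only the standard library; the hostname and port list are placeholders, and you should only probe systems you are authorized to test:

```python
import socket

# Placeholder target; substitute a host you are authorized to scan.
TARGET = "host.example.com"
COMMON_PORTS = [22, 80, 443, 8080]

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, DNS failure, etc.
        return False

for port in COMMON_PORTS:
    state = "open" if check_port(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port} is {state}")
```

Real scanners layer service fingerprinting, banner grabbing, and vulnerability checks on top of this basic reachability test, but the trade-offs discussed below all start here.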

The exploitation of vulnerabilities as the critical path to initiate a breach grew 180% over the previous year, according to the Verizon 2024 Data Breach Investigations Report (DBIR). That makes verifying the security posture of externally facing systems critically important, and external scanning is part of that equation. Scanning can uncover vulnerabilities such as injection flaws, cross-site scripting, broken authentication, unsecured APIs, and other common security misconfigurations that lead to these breaches. But while external vulnerability scans can provide a baseline level of security when used correctly, they can also create a false sense of safety when used ineffectively, which is far too often the case.

Many organizations struggle with the basics of building and maintaining an external vulnerability management program, and often see it fall apart over time. Everyone runs external scans, yet everyone hates them: they are notorious for creating more noise than intelligence, and more busywork than actionable remediation advice. What gives?

Challenges in Discovering External Assets 

Despite the near ubiquitous use of external scanners, the visibility they give us is limited. Most organizations struggle to know what external assets they own, much less maintain a comprehensive and up-to-date view of their external attack surface. Common challenges include:

  • The work is tedious: Using external scanning tools effectively requires a great deal of manual effort. Traditional tools often require manual entry of enterprise organizational data and its attribution to assets, domains, and IP addresses, which is not only time-consuming but also error-prone. Maintaining an accurate, up-to-date inventory of external assets becomes an overwhelming task. This is especially true for large organizations with complex IT environments and for companies active in M&A, such as those in the financial and biotech sectors.
  • It’s resource and time intensive: Continuously scanning for and cataloging every internet-facing asset requires significant labor and computing resources that often get deprioritized amid the daily onslaught of security alerts and incidents. There is also a trade-off between the scope of a scan and its load and impact on the network: the more comprehensive the scan, the bigger the trade-off.
  • It’s often a shot in the dark: The proliferation of shadow IT and orphaned infrastructure (unknown and unmanaged external assets) makes even knowing which assets you own to scan an immense challenge. Digital assets that are no longer actively managed or monitored can include outdated servers, forgotten cloud instances, and old test environments that were never decommissioned. Such assets are particularly dangerous because they often go unnoticed until malicious actors exploit them. (One common starting point for finding them is sketched after this list.)
  • Scans don’t happen often: Many organizations conduct scans infrequently, such as once a quarter or even less often, which leaves significant windows of exposure and long periods during which new vulnerabilities can emerge and remain undetected. These sporadic efforts are insufficient in today’s fast-paced threat environment.
  • Impact of scanning on production: Today’s scanning technologies force organizations to make a trade-off between network impact and the completeness/frequency of scans due to the brute-force methods used to discover network-facing assets. As a result, organizations often de-emphasize scan completeness and frequency, leaving them vulnerable.
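One practical way to shrink that shot-in-the-dark problem is to mine certificate transparency (CT) logs, which record every TLS certificate publicly issued for a domain and frequently surface forgotten subdomains. The sketch below queries crt.sh, a public CT search service; note that its JSON endpoint is unofficial and may rate-limit or change, and the domain is a placeholder:

```python
import json
import urllib.request

# Placeholder domain; substitute one you own or are authorized to assess.
DOMAIN = "example.com"
# %25 is a URL-encoded "%" wildcard, matching any subdomain of DOMAIN.
URL = f"https://crt.sh/?q=%25.{DOMAIN}&output=json"

with urllib.request.urlopen(URL, timeout=30) as resp:
    records = json.load(resp)

# Each certificate can cover several names; flatten and de-duplicate them.
hostnames = set()
for record in records:
    for name in record.get("name_value", "").splitlines():
        hostnames.add(name.strip().lower())

for host in sorted(hostnames):
    print(host)
```

Feeding a list like this into a reachability check such as the one shown earlier is a crude but effective first pass at an external asset inventory.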

As a result of these challenges, we end up with a patchwork of partial information, leaving gaping holes in our understanding of our attack surface. The market is full of solutions, yet no tool has addressed the issue comprehensively. This disconnect highlights a critical gap in the security posture of many organizations. The tools are there, but the processes and understanding required to leverage them effectively are often lacking.

Addressing the External Attack Surface Gap

To bridge this gap, organizations should adopt the following best practices:

  1. Regular Scanning: Conduct regular and comprehensive scans of the entire external attack surface. This should be done at least weekly, if not daily, to ensure new vulnerabilities are quickly identified.
  2. Automation: Leverage automated tools that can continuously discover and monitor all external assets. Automation reduces the manual effort required and ensures more consistent and accurate results.
  3. Prioritization: Use threat intelligence to prioritize remediation of identified vulnerabilities based on their risk level. This helps focus efforts on the most critical issues first (a simple scoring sketch follows this list).
  4. Policy and Governance: Establish strong policies and governance structures to ensure continuous monitoring and management of external assets. This includes setting up processes for regularly updating asset inventories and decommissioning outdated infrastructure.
  5. Continuous Monitoring: Implement continuous monitoring solutions that provide real-time visibility into the external attack surface. This allows for immediate detection and response to emerging threats.
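To make the prioritization step concrete, the sketch below weights a finding’s raw CVSS base score by two signals a threat-intelligence feed can supply: whether the CVE appears in a known-exploited catalog (such as CISA’s KEV) and whether the asset is internet-facing. The CVE IDs, asset names, and multipliers are all illustrative assumptions, not a recommended formula:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve: str               # placeholder CVE IDs below, for illustration only
    cvss: float            # CVSS base score, 0.0-10.0
    known_exploited: bool  # e.g., listed in CISA's KEV catalog
    internet_facing: bool

def risk_score(f: Finding) -> float:
    """Weight raw CVSS by exploitation evidence and exposure."""
    score = f.cvss
    if f.known_exploited:
        score *= 2.0   # actively exploited flaws jump the queue
    if f.internet_facing:
        score *= 1.5   # reachable from the internet
    return score

findings = [
    Finding("web-01", "CVE-2024-0001", 9.8, known_exploited=False, internet_facing=True),
    Finding("vpn-gw", "CVE-2024-0002", 7.5, known_exploited=True, internet_facing=True),
    Finding("db-02", "CVE-2024-0003", 8.1, known_exploited=False, internet_facing=False),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.asset:8} {f.cve}  priority {risk_score(f):.1f}")
```

Note how the actively exploited, medium-severity VPN flaw outranks the higher-CVSS but unexploited web finding, which is exactly the reordering threat intelligence is meant to produce.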

By recognizing the limitations of current approaches and adopting automated, process-driven solutions, organizations can bridge this critical gap. Regular scanning, strong processes, and continuous monitoring are key to staying ahead of emerging threats and ensuring a secure external attack surface.
