10 Reasons Why Websites STILL Get Hacked

1) Over 2 billion Internet-connected assets are listening on ports 80 and 443, each most likely containing some number of vulnerabilities. Do the math (a back-of-envelope sketch follows this list).

2) Most companies remain unaware of which websites they own, what those sites do, or who is responsible for them. Obviously, you can only scan and secure what you know you own, and for unknown assets it’s impossible to respond quickly to high-priority vulnerability reports.

3) New websites are deployed on the Internet faster than they are brought under vulnerability management, so the percentage of websites being scanned keeps shrinking.

4) DAST scanning technology hasn’t proven it can scale to the size of the Internet, or even to that of a large enterprise.

5) On average, only a tiny fraction (estimated ~20%) of a company’s websites is actively covered by website vulnerability management.

6) When vulnerabilities are identified, only roughly half are ever fixed, and doing so commonly takes six months. Companies remain resistant to scanning “more,” let alone “everything,” because developers can’t clear the current backlog.

7) The sheer volume of existing vulnerabilities makes fixing them all impractical and makes prioritization essential. Traffic-light risk models (red/yellow/green) are laughably unhelpful for allocating development resources because they don’t account for exploitation likelihood, asset value, financial impact, etc. (a scoring sketch follows this list).

8) Outsourcing vulnerability remediation has not gained traction because the security department doesn’t want to spend budget dollars on work it believes should be paid for by the engineering department.

9) DAST+WAF integration for “virtual patching” is still gaining acceptance, and it currently covers only a tiny fraction of all websites (a minimal sketch of the idea follows this list).

10) SAST overwhelmingly does not identify the vulnerabilities adversaries actually find and exploit; the DAST-SAST vulnerability overlap is only 1-5%. “Pushing left” in this way has so far had marginal benefit.
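
For reason #1, here is the back-of-envelope math in Python. The two-billion asset count is the figure cited above; the average number of vulnerabilities per asset is a purely hypothetical assumption used only to illustrate the scale.

```python
# Back-of-envelope sketch for reason #1: scale of the exposed attack surface.
# The ~2 billion asset count comes from the post; the per-asset average is a
# hypothetical assumption, not measured data.
listening_assets = 2_000_000_000     # assets listening on ports 80/443
avg_vulns_per_asset = 3              # illustrative guess only

total_exposed_vulns = listening_assets * avg_vulns_per_asset
print(f"Roughly {total_exposed_vulns:,} vulnerabilities reachable from the Internet")
# -> Roughly 6,000,000,000 vulnerabilities reachable from the Internet
```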
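
For reason #7, here is a minimal sketch of the kind of scoring a traffic-light model ignores: ranking findings by expected loss, i.e. exploitation likelihood times financial impact, weighted by asset value. The field names and sample numbers are assumptions for illustration, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    exploit_likelihood: float   # estimated probability of exploitation (0-1)
    asset_value_weight: float   # relative business value of the affected site (0-1)
    financial_impact: float     # estimated loss in dollars if exploited

    def expected_loss(self) -> float:
        # Expected loss = likelihood * dollar impact, weighted by asset value.
        return self.exploit_likelihood * self.financial_impact * self.asset_value_weight

# Hypothetical findings; the numbers are illustrative, not real data.
findings = [
    Finding("SQL injection on checkout",     0.60, 1.0, 2_000_000),
    Finding("XSS on marketing microsite",    0.40, 0.2,    50_000),
    Finding("Outdated TLS on internal tool", 0.05, 0.3,   100_000),
]

# Fix order driven by expected loss, not by a red/yellow/green label.
for f in sorted(findings, key=Finding.expected_loss, reverse=True):
    print(f"{f.name}: expected loss ${f.expected_loss():,.0f}")
```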
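
For reason #9, here is a toy illustration of what “virtual patching” means in practice: a WSGI middleware that blocks requests matching a known exploit pattern until the code-level fix ships. It is a stand-in for a DAST-fed WAF rule; the path, parameter name, and pattern are hypothetical.

```python
import re
from urllib.parse import parse_qs

# Hypothetical rule a DAST finding might feed to a WAF: block obvious SQL
# injection attempts against the vulnerable "id" parameter of /report until
# the application code itself is fixed.
VIRTUAL_PATCH = {
    "path": "/report",
    "param": "id",
    "pattern": re.compile(r"('|--|\bUNION\b|\bSELECT\b)", re.IGNORECASE),
}

def virtual_patch_middleware(app):
    """Wrap a WSGI app and reject requests that match the virtual patch rule."""
    def wrapper(environ, start_response):
        if environ.get("PATH_INFO") == VIRTUAL_PATCH["path"]:
            query = parse_qs(environ.get("QUERY_STRING", ""))
            for value in query.get(VIRTUAL_PATCH["param"], []):
                if VIRTUAL_PATCH["pattern"].search(value):
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Request blocked by virtual patch"]
        return app(environ, start_response)
    return wrapper

# Usage: wrap any WSGI application, e.g. app = virtual_patch_middleware(app)
```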

 

Post by Jeremiah Grossman

June 24, 2021