Effective patch management remains as much a challenge today as it was a decade ago. The missing foundational piece is asset inventory - I would argue the biggest security gap most organizations have. Even assuming a good inventory, challenges remain for Threat Intelligence (TI) to identify new vulnerabilities in the supply chain - including vendor products, open source libraries, software tool-chains, etc. There are frequent race conditions between what is known and the emerging unknown. Rapidly identifying those emerging unknowns - e.g. zero days - is a key part of a threat intelligence program.
Currently the one-stop shop for vulnerability information is MITRE’s Common Vulnerabilities and Exposures (CVE) program. There are a number of smart, dedicated people working on this tough problem space, but the program still has significant shortcomings, as called out recently by the US House Energy and Commerce Committee. Among the issues: it provides at once too much information on inconsequential vulnerabilities, causing fatigue, and too little information on those you do care about. This is compounded by a lack of structure or consistency in the reported information that is available. Perhaps most concerning are the significant delays between discovery and CVE release. For the most impactful vulnerabilities - those that provide remote code execution and are ‘worm-able’ (allow for self-propagation or mass scanning) - the announcement of the vulnerability kicks off a race between defenders seeking to identify and remediate their systems, and attackers trying to weaponize the vulnerability and then discover and exploit vulnerable systems. A group of us did a forecast for the August 2018 Struts vulnerability on how quickly it would be ‘seen in the wild’; it was a matter of days before attack code was on GitHub, with malicious scanners hitting honeypots shortly before that.
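The fatigue problem above is essentially a triage problem: most feeds must be split into the few entries worth immediate attention and the long tail of noise. A minimal sketch, assuming simplified NVD-style records with an `id` and a `cvss_base_score` field (the record shape and scores here are illustrative, not the actual NVD schema):

```python
def triage(cve_records, min_score=7.0):
    """Split a CVE feed into actionable and low-priority buckets by CVSS score."""
    actionable, noise = [], []
    for rec in cve_records:
        score = rec.get("cvss_base_score", 0.0)
        (actionable if score >= min_score else noise).append(rec["id"])
    return actionable, noise

feed = [
    {"id": "CVE-2018-11776", "cvss_base_score": 9.8},  # the 2018 Struts RCE (score illustrative)
    {"id": "CVE-2018-99999", "cvss_base_score": 3.1},  # hypothetical low-severity entry
]
urgent, deferred = triage(feed)
```

A score threshold alone is a blunt instrument - it still misses the context (exposure, exploitability) discussed later - but it illustrates the kind of filtering that has to happen before a human ever sees the feed.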
While CVEs attempt to document all vulnerabilities across all products, it is necessary to supplement the CVE feeds with information directly from your vendors to get more timely details, and hopefully patches. Automation is key, both for speed in the race against attackers and to scale your security and IT people. Ideally automated updates can eliminate the more manual aspects of this process, except where regression testing is required to ensure the security fix doesn’t impact reliability. This vendor management challenge is at least tractable. The gap that remains is those assets that do not have a capable vendor pushing vulnerability information to you - open source libraries (Struts, for instance). Scaling up bespoke monitoring solutions for every element of your complex software (and hardware) supply chain is a daunting challenge. Current solutions rely on pushing lightly filtered feeds of information to human experts who have domain knowledge of what is in the software ecosystem and can make informed trade-offs based on the risk of the different elements you could monitor, but there is a lot of room for innovation and improvement here.
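Where inventory is good, the join between an advisory and your estate can be automated. A sketch, assuming a toy inventory of hosts mapped to (library, version) pairs; the host names and the advisory shape are hypothetical, and real matching would need proper version-range comparison rather than an exact-match set:

```python
def affected_assets(inventory, advisory):
    """Return hosts running a library version listed as vulnerable in an advisory."""
    vuln_lib = advisory["library"]
    vuln_versions = set(advisory["vulnerable_versions"])
    return sorted(
        host
        for host, libs in inventory.items()
        for lib, version in libs
        if lib == vuln_lib and version in vuln_versions
    )

inventory = {
    "billing-api": [("struts2-core", "2.3.34")],
    "web-frontend": [("struts2-core", "2.5.17")],  # on a patched release
}
# Illustrative advisory; version lists are assumptions for the example.
advisory = {"library": "struts2-core", "vulnerable_versions": ["2.3.34", "2.5.16"]}
hits = affected_assets(inventory, advisory)
```

The hard part, as the paragraph above notes, is not this join - it is getting a complete, current `inventory` and a structured `advisory` for every element of the supply chain in the first place.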
Another area where TI can be valuable is in prioritizing remediation. Of course the goal is to patch immediately, but the reality is there can be various delays for testing and redeployment. The response time (SLA) expected from application teams can be a defined compliance-based term of days, or, as at Netflix, it can be based on what is best for our customers as determined by the engineering teams. To give them the proper context it is important to define the severity of the issue: is the vulnerability accessible from the internet (a complicated question in a micro-service architecture)? Is the system in a vulnerable configuration (it may have the library, but is the vulnerable function actually executed)? And - the question TI can answer - is the vulnerability being actively exploited ‘in the wild’? Seeing live-fire examples of the vulnerability of course increases the sense of urgency and can bump the priority of a remediation effort even if the teams are confident the vulnerability is not present and not internet-facing - just in case.
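Those three questions can be combined into a simple ranking signal. A toy sketch - the weights and function names are illustrative assumptions, not any real scoring system - where active exploitation (the TI input) outweighs exposure and configuration:

```python
def remediation_priority(internet_facing, vulnerable_config, exploited_in_wild):
    """Rank a finding; active exploitation dominates, exposure and config add urgency."""
    score = 0
    score += 4 if exploited_in_wild else 0  # TI signal: live attacks outweigh the rest
    score += 2 if internet_facing else 0    # reachable from the internet
    score += 1 if vulnerable_config else 0  # vulnerable code path actually executed
    return score
```

With these weights, a vulnerability confirmed as exploited in the wild outranks one that is merely internet-facing and misconfigured - matching the ‘just in case’ instinct described above.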
Finally, there is always the dark horse vulnerability that shows up as unstructured data through private sharing groups - often an embargoed heads-up that something is coming, providing little actionable information. These are still key pieces of threat intelligence that need to be captured, enriched, and shared with the right people. As these channels are often based on personal relationships and trust groups, it is hard to automate this area.