Future of IOCs

Thank you for joining us for our series on Threat Intelligence. This is the first of our content posts, linked from our introductory post.

The one constant in the information security threat landscape is that it is always changing. Yet too often, when the term ‘threat intelligence’ is used, it refers to static Indicators of Compromise (IOCs). Are these traditional indicators still relevant or useful today? How about in the near future? Most importantly, how adept is your Threat Intelligence (TI) program at dealing with these constant shifts in the landscape? Can your current TI detect a black swan event, or a black swan hatching?

IOCs or “Oh, I see”

While there is still a great deal of commodity malware that can be found using file hashes, DNS names, and IP addresses, many modern attacker techniques render these indicators unusable. For example, malware can be repacked, changing the resulting hash; IP addresses are largely communal and interchangeable in today’s virtual hosting environments; and otherwise legitimate domains can be hijacked (or, in the case of sites with user-generated content, simply repurposed) to serve as malicious command-and-control infrastructure.
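
To make the fragility concrete, here is a minimal Python sketch (the payload bytes and the appended padding are purely illustrative) showing how even a one-byte repack produces a completely different hash and breaks a hash-based IOC match:

```python
import hashlib

# Illustrative only: stand-in bytes for a malware sample and a trivially
# "repacked" variant; a single appended byte changes the entire digest.
original = b"example malicious payload"
repacked = original + b"\x00"

ioc_hashes = {hashlib.sha256(original).hexdigest()}  # hash-based IOC "feed"

print(hashlib.sha256(original).hexdigest() in ioc_hashes)  # True
print(hashlib.sha256(repacked).hexdigest() in ioc_hashes)  # False: the IOC misses
```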

A common shortcoming in TI programs is an outsized focus on malware. There are many ways an attacker can achieve their objectives without malware, from classic phishing (or outright purchase) of credentials, through trendy techniques like ‘living off the land’ with tools already installed on the target, to advanced in-memory ‘fileless’ attacks. All it takes is some easily purchased RDP credentials for a threat actor to extort, disrupt, or establish a foothold for further reconnaissance and exfiltration.

This is not to say that IOCs should be abandoned, any more than signature-based antivirus should. The overwhelming bulk of malware (and thus noise) can be reduced by proper application of intel-derived IOCs, antivirus signatures, IDS rules, and the like. These low-hanging attacks must be dealt with before the defense can focus on higher-order threats, but in a SaaS world I generally expect my vendors to handle the commodity threats for me. They are better positioned in terms of visibility across multiple customers, ability to amortize threat feed investments, and permissions to action those threats. Thus I expect Windows Defender and Apple XProtect to disable commodity malware, and Gmail to filter out pervasive social engineering campaigns, while my TI program focuses on targeted attacks that my vendors cannot detect or will not reasonably prioritize for action. Add to this the move away from general compute platforms for user endpoints (OS X/Windows) to sandboxed app environments (iOS/Chrome/etc.) and the days of file hashes seem numbered.

What then might a next generation of IOCs look like? Perhaps suspicious third-party app integration identifiers for GSuite in your corporate environment, or abusive AWS accounts in production. The vendors may still be best positioned to action those, but if they stem from an attacker common to an industry or other peer group, sharing could have value. There will also be grey IOCs that do not warrant blocking on their own, but could form useful input to other detection schemes (perhaps based on ML) as risk scoring or enrichment.
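
As a rough illustration of how such grey IOCs might feed a risk-scoring pipeline rather than a block list, here is a Python sketch; the feed contents, field names, and weights are all hypothetical:

```python
# Hypothetical sketch: enrich an event with "grey" IOCs (shared third-party
# OAuth app IDs and cloud account IDs seen in abuse) instead of blocking on them.
GREY_OAUTH_APPS = {"123456789012.apps.example"}   # shared by a peer group (hypothetical)
GREY_AWS_ACCOUNTS = {"111122223333"}              # reportedly abusive accounts (hypothetical)

def risk_score(event: dict) -> int:
    """Return a simple additive risk score; a real system might feed these
    signals into an ML model instead of using fixed weights."""
    score = 0
    if event.get("oauth_client_id") in GREY_OAUTH_APPS:
        score += 40
    if event.get("peer_aws_account") in GREY_AWS_ACCOUNTS:
        score += 30
    if event.get("first_seen_integration", False):
        score += 10  # novelty as a weak supporting signal
    return score

event = {"oauth_client_id": "123456789012.apps.example", "first_seen_integration": True}
print(risk_score(event))  # 50: worth analyst review, not an automatic block
```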

The chief distinguishing factor between an IOC-centric model and “Oh, I see” is the ability to look beyond what is already known and shared widely, into the shadows of what could be, and likely is, threatening your specific organization or industry. The first step, then, is to fully understand your business risks, as well as your broader industry risks, and identify the (often unique) threats that confront both. Only then can you begin building a meaningful threat intelligence program.

TTPs versus IOCs

It has been said that every problem in computer science can be solved with another level of abstraction. In a way, you can consider TTPs (Tactics, Techniques, and Procedures) an abstraction layer over IOCs. TTPs capture enough commonality across attacks and attackers to make the case that they, not IOCs, should be the primary goal of TI. TTPs encapsulate the general modus operandi of a given actor or, more generally, of a class of actor. These will always be useful to understand, both for red teams to model and for defenders to build controls against (detective and preventative).

Whereas an IOC might be the hash of a specific RAT, the tactic would be the concept of the RAT itself, regardless of how it is packed. The techniques would be the types of operations the RAT enables: the way it establishes command and control communications, how it establishes persistence, and the like. The procedures would be the use of this RAT as part of the attacker’s post-exploitation runbook. Taken together, this intelligence provides strong direction on where to look, without being limited to a specific value to look for.
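
One way to see the difference in practice: an IOC check matches a static value, while a TTP-level check looks for the behavior regardless of which binary exhibits it. The following Python sketch is purely illustrative; the event fields and the persistence-plus-beaconing heuristic are assumptions for the example, not any real product’s schema:

```python
KNOWN_RAT_HASHES = {"d2f1e3..."}  # IOC: breaks as soon as the RAT is repacked

def ioc_match(proc: dict) -> bool:
    """Static IOC check against a known-bad hash list."""
    return proc.get("sha256") in KNOWN_RAT_HASHES

def ttp_match(proc: dict) -> bool:
    """TTP-level heuristic: persistence via a Run key plus short-interval
    outbound beaconing, regardless of which executable is responsible."""
    writes_run_key = any("\\CurrentVersion\\Run" in key
                         for key in proc.get("registry_writes", []))
    interval = proc.get("outbound_interval_seconds")
    beacons = interval is not None and interval < 300
    return writes_run_key and beacons

repacked_rat = {
    "sha256": "0000aa...",  # new hash, unknown to the IOC feed
    "registry_writes": ["HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\updater"],
    "outbound_interval_seconds": 60,
}
print(ioc_match(repacked_rat))  # False: the hash has changed
print(ttp_match(repacked_rat))  # True: the behavior has not
```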

TTPs have been de rigueur for some time now, but sharing still occurs largely through storytelling. TTPs do not lend themselves to machine-readable formats in the way IOCs do. There has been recent progress adopting taxonomies like MITRE ATT&CK to talk about TTPs more efficiently, but work remains to be done on automated sharing. Ultimately I would love to see something like Palantir’s ADS as a medium for sharing TTPs by sharing detection strategies for them, perhaps all the way down to detection code (YARA?) if we could agree on a generalized rules engine and solve some stubborn data schema issues, or perhaps abstract to a common Event Query Language as Endgame suggests. I would propose that the future of TI sharing is at the nexus of intelligence and detection engineering.
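
To sketch what machine-readable TTP sharing might look like, here is a hypothetical Python record in the spirit of an ADS-style write-up; the field names and the embedded query syntax are illustrative assumptions, not an existing standard:

```python
# Hypothetical "detection strategy" record: TTP metadata plus the detection
# logic itself, so the recipient gets both the intelligence and a way to act on it.
detection_strategy = {
    "name": "Credential dumping via LSASS memory access",
    "attack_technique": "T1003.001",  # MITRE ATT&CK technique ID
    "hypothesis": "This intrusion set dumps LSASS rather than dropping tooling",
    "blind_spots": ["Access via a signed driver", "Dumps taken offline from a snapshot"],
    "false_positives": ["AV engines and some backup agents read LSASS legitimately"],
    # Detection expressed as an EQL-style query string (illustrative syntax only):
    "detection_query": (
        'process where target_process.name == "lsass.exe" '
        'and granted_access_includes("PROCESS_VM_READ")'
    ),
}

for field in ("name", "attack_technique", "detection_query"):
    print(f"{field}: {detection_strategy[field]}")
```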

Thank you for reading, and stay tuned for our next post, which I will link here as well as on our introductory post.