Cyber Threat Hunting and the limits of its own self-image
Many people dedicate their lives to actualizing a concept of what they should be like, rather than actualizing themselves. This difference between self-actualization and self-image actualization is very important. Most people live only for their image.
There is an industry-wide notion of what Threat Hunting is supposed to be which pretty much goes this way: a proactive approach for the identification of unknown cyber threats in your network. A few words more, a few words less, but that’s pretty much the gist of it.
Despite this definition being thrown around alongside whatever ?DR (replace the ? with any letter of the alphabet, and add any extra prefix or suffix that seems pertinent) or managed service offering out there, the industry has trouble defining what "proactive" is supposed to mean, what "unknown" is supposed to mean, and what Threat Hunting itself is.
Threat Hunting is usually conflated with some version of a supercharged detection effort. Such a notion, in turn, is the result of intrinsic limitations in many of our current Cyber Security Frameworks, which are stuck in a concept of cyber defense that only looks at things from the perspective of the defender. This doesn't mean that our current frameworks are not fit for purpose or are lacking in some way. They are perfectly applicable within the confines of the problems they tried to solve. This is called bounded applicability, and it applies to almost everything in life¹. The issue is that they might not be enough to provide descriptive insight into the aspects of threat hunting that I consider relevant within a cyber warfare context.
Take, for example, the National Institute of Standards and Technology (NIST) Cybersecurity Framework.
Here, threat hunting is basically an activity that occurs in the detection phase, with the task of uncovering anomalous behaviour in a less constrained way than that of detection engineering or incident response. These factors shape what, in my experience, threat hunters generally do.
Much of the threat hunting I've seen around is based on the following practices, ordered from less to more sophisticated:
- Leveraging your SIEM/?DR to sweep your environment using IOCs. Despite this not qualifying as a threat hunting effort per se, it is classed as such in many blue teams. I neither blame nor mock them for this (mocking others is a practice sadly widespread in our industry, with so-called "experts" showing off their knowledge of the topic at others' expense). In many circumstances, blue teams resort to this type of IOC search (which may surface potential hunting leads and evolve into more interesting hunting efforts) as a way to keep the candle burning, to rest from the otherwise fatiguing tasks of SOC-like monitoring and intensive alert triaging.
- Running queries postmortem, in the aftermath of an incident, to ensure there are no lingering threats. Usually by means of leveraging your SIEM or EDR/MDR/?DR. These queries are generally focused on the concrete set of observables that the team dealt with during the incident. These practices are also carried through the incident response itself, as a way to scope the impact of the threat.
- Leveraging your SIEM/?DR to perform some more sophisticated searches around a particular threat actor's TTPs, based on the latest technique some researcher disclosed on Twitter.
- Leveraging your SIEM/?DR to perform some more sophisticated searches around a particular threat actor's TTPs, based on Threat Intel Reports.
- Employing the MITRE ATT&CK Framework to run atomic behavioural searches that drive vertical deep-dive hunting efforts, in correlation with behaviours flagged by Threat Intel Reports.
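To make the least sophisticated of these practices concrete, an IOC sweep boils down to matching a list of known indicators against exported telemetry. The sketch below is a minimal, stdlib-only illustration; the IOC values, CSV fields and host names are entirely hypothetical, and in practice a team would run the equivalent query directly in their SIEM/?DR rather than in a script:

```python
import csv
import io

# Hypothetical IOC list, e.g. shared by a threat intel feed.
iocs = {
    "44d88612fea8a8f36de82e1278abb02f",  # a file hash
    "evil-c2.example.com",               # a C2 domain
    "203.0.113.66",                      # a C2 IP address
}

# Hypothetical endpoint telemetry export (stand-in for a SIEM query result).
telemetry_csv = """host,process_md5,dest_domain,dest_ip
ws-0143,44d88612fea8a8f36de82e1278abb02f,updates.example.org,198.51.100.7
srv-0007,9e107d9d372bb6826bd81d3542a419d6,evil-c2.example.com,203.0.113.66
"""

def ioc_sweep(rows, iocs):
    """Return (host, matched_indicator) pairs for any row touching a known IOC."""
    hits = []
    for row in rows:
        for indicator in (row["process_md5"], row["dest_domain"], row["dest_ip"]):
            if indicator in iocs:
                hits.append((row["host"], indicator))
    return hits

rows = list(csv.DictReader(io.StringIO(telemetry_csv)))
print(ioc_sweep(rows, iocs))
```

A hit here is not a hunt finding in itself, but, as noted above, it can become the lead that a deeper hunting effort pulls on.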
What all these approaches have in common, and what constitutes the strong symbolic meaning assigned to the main activity performed by cyber hunters, is the idea that threat hunting is about querying data using your traditional SIEM/?DR; that it is mainly about detection and nothing else.
Now here's the thing: when approaching hunting in this way, we are already limiting ourselves to:
(a) specific sets of data, usually around endpoint and network telemetry
(b) data that is consumable or queryable via our SIEM/?DR, whose insight is limited by the query language that tool facilitates
(c) data that lacks intentional direction, because on some level it’s given to us, not planted by us
Don't get me wrong: having all the data that is nowadays available to companies, not only via regular logs but also via ?DR telemetry, is a MASSIVE improvement over the situation 10 years ago, when there were only clunky, heavy SIEMs and a bunch of network or server logs to triage an incident². The "industry standard" approach described above has a huge advantage too: it covers the majority of generic use cases, it facilitates following threads of evidence within a sea of telemetry, and it is more than enough to cover the low-hanging fruit, which normally comprises the majority of threats a company experiences³.
The concepts described above, however, represent a limited way to conceive what threat hunting is here to do.
The shift towards Active Defense
Using no way as way, having no limitation as limitation
What I've described above serves as background to introduce the real topic of this post series: what is cyber hunting's real mission? What is the bigger picture here?
I have an unorthodox vision of what hunting is, which does not align with mainstream industry blabber. When people ask me what I think about threat hunting, I always answer in the same way:
For me, threat hunting is less about running queries on this or that SIEM/?DR and more about active defense. Hunting is more about calculated disruption of adversarial tactics than about trusting our detection efforts to find adversaries where we expect them to be. Hunting is more about understanding and shaping attacker behaviour and less about waiting for attacker behaviour to make itself evident. Hunting is more about deriving insight from your data than it is about expecting your data to give you readily available insight.
The main difference here is that Active Defense does not rely solely on the identification of threat actors' actions, whether known or unknown; it aims to intercept threat actors and disrupt adversarial operations. To accomplish this, it seeks to implement ANY tactic available, beyond just leveraging our traditional defensive tools in the shape of a SIEM/?DR.
What would an updated picture of the NIST security framework look like if we consider these factors? Well, kind of something like this ;)
You may wonder what intercept and coerce mean in this diagram. The former I will clarify in this article series; the latter will be a topic for the next thing I'm working on. Suffice it to say that coercion is an activity that aims to shape attackers' behaviour so that you can influence a kill chain's outcome. In fact, interception, coercion and response are some of the phases of what I call the cyber disruption chain, which is yet again the topic of an upcoming blog post based on my latest research on cyber defense. However, here is a preliminary depiction of what I am referring to:
The Intercepting Fist Tactics
A lot is said about threat hunting being “proactive” vs the “reactive” approach of functions like IR. I actually don’t think that the classic difference between “proactive” and “reactive” helps understand threat hunting any better. In fact, threat hunting can become very reactive under certain conditions.
I think that cyber defense as a whole, being a complex dynamic system, presents characteristics of fractality. That is, all of the cyber defense functions in NIST could be thought of as proactive when considered from the point of view of actively building defensive capability for a business, versus just hoping attackers won't target you. But they can equally be regarded as reactive when observed from the point of view of response: an activity that is triggered when an emergency is already underway, and that is all about mitigating and containing damage.
What makes more sense is to think of cyber defensive functions in terms of how they engage threat actors. The point along the kill chain where your defensive capability engages threat actors is what provides directionality and territorialization: a zone of conflict where a particular cyber function is tasked with managing the inherent tensions that emerge in that space.
The defensive capability that threat hunting should engineer is directly related to the adversarial engagement space that we can occupy as hunters: not just the inside of our networks, but also the perimeter and beyond.
So what are, you may ask, some of the tactics that we can implement as threat hunters to increase our areas of adversarial engagement?
I will summarize below some of the tactics that hunt teams usually forget when performing threat hunting in the enterprise the traditional way. I promise I won't mention ML or AI ;)
- Cyber Deception & Controlled Attack Paths
- Attack Path Management⁴
- OSINT Threat Hunting
- Big Data Analytics & Graph Analysis
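To give a flavour of the last item in the list, attack paths can be modelled as a directed graph of footholds and privilege relations, and interrogated with nothing fancier than breadth-first search. The graph below is entirely hypothetical (tools like BloodHound build and query such graphs at enterprise scale):

```python
from collections import deque

# Hypothetical attack-path graph: an edge means "a foothold here can reach there"
# (e.g. via cached credentials, local admin rights, or an open management port).
edges = {
    "phished-workstation": ["file-server", "jump-box"],
    "jump-box": ["domain-controller"],
    "file-server": ["backup-server"],
    "backup-server": ["domain-controller"],
}

def shortest_attack_path(graph, start, target):
    """BFS for the shortest chain of hops from an initial foothold to a crown jewel."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the target is unreachable from this foothold

print(shortest_attack_path(edges, "phished-workstation", "domain-controller"))
```

The point for a hunt team is less the search itself and more what it enables: once the cheapest path to a crown jewel is visible, you can deliberately break (or booby-trap, via deception) its weakest edge, which is exactly the kind of disruptive, active-defense move this post argues for.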
I will break some of these topics down a little in our next post. But they are in no way an exhaustive account of the tactics we can implement as cyber hunters. In the end, my vision of threat hunting is one where we are actively intercepting the adversary, with the end goal of causing as much disruption to their operations as possible.
This also involves shifting the hunters' mindset from a purely passive one to a truly active one: becoming consultants within our business, partnering with relevant stakeholders to drive the change that’s required to implement disruptive tactics.
Threat Hunting is like Jeet Kune Do: it is the art of the intercepting fist. Its basic guiding principles are Simplicity, Directness and Freedom (the form of no form). To evolve as threat hunters, we can't be limited by the practices that are accepted and buzzworded around in the industry.
In Part 2 of this series, we will explore one of these active defense tactics. Until then, I hope you enjoyed this article :)
1. This concept is central to complexity science as conceived by the Cynefin framework, one of the most beautiful and complex pieces of human cognition I've come to know. If you wish to dig deeper, check https://cynefin.io/wiki/Cynefin.
2. Unfortunately, this still seems to be the case with some solutions nowadays, which have learnt NOTHING from the last decade of developments in this particular engineering field.
3. The fact that I mention "low-hanging fruit" doesn't mean that your blue team will have that base covered. Just because something might be easy to detect doesn't mean that it will be detected, as the recent Optus data breach shows. I'm not bringing this up to belittle the colossal efforts that the Optus blue team carried out (hats off to them); I'm mentioning it because digital infrastructure is complex, and sometimes even detecting the easy stuff is hard, because we are human, and people are usually an unaccounted-for factor that makes even the simplest of things overly complicated.
4. By the way, if you haven't yet read Andy Robbins's Attack Path Manifesto, you should definitely set aside the 20 minutes that it takes to go over his post. I promise it will have a high ROI!