If you landed here without reading Part 2 of this article, I recommend you head there and give it a quick read ;)
Towards a better framework for threat hunting
Based on what was discussed in Part 1 and Part 2, a more representative framework to approach the epistemic basis of cyber threat hunting would look like the following:
When threat hunting we:
deal with the realm of “knowable” things, i.e. events that leave behind some evidence of their activity in the form of telemetry. The unknowable is everything that escapes any possible representation and is thus irrelevant to the task.
recognize the existence of unknown (past) and unpredictable (future) events, patterns and occurrences we are not even aware of at a given point in time. An example of this is an attacker who has been lurking in our network for six months (an unknown event that has a history and can be situated in the past) thanks to the exploitation of a zero-day (a bug in an important piece of software that could never have been predicted and was therefore never included in our future planning)
understand that, in order to achieve the above, we have to rely on our awareness window, which is finite and has bounded applicability: we can’t scan or sweep all systems and data at the same time, and we can’t be aware of all the variables that influence a particular outcome. The applicability of this window is limited by constraints such as the collection and availability of data, computing capacity, data exploration methodologies and skills, etc.
realize that the domain of unknown things still sits within the realm of the knowable and is conditioned by time. As such, it cannot escape past and accrued knowledge, nor the way this accumulated knowledge organizes itself into structures, which impacts the very nature of unknown things. The transition from known objects to unknown ones is driven by self-organizing patterns that represent pivotal events. These pivotal events act like connective tissue: they expand our domain of known things and help us uncover previously unknown activity. Pivotal events may also arise as absolutely unexpected disruptions.
The revisited Threat Hunting approach
As a result of the above shift in how we classify knowable occurrences, we can now relocate threat hunting to a different board, with a different set of principles.
When threat hunting we are actually attempting multiple ways of inhabiting that pivotal space or threshold that separates the so far known from the yet unknown.
We include the dimension of time and talk about the “yet” unknown to indicate this relationship.
Threat hunting can and should leverage structured collective knowledge (think MITRE ATT&CK and the like): despite belonging to the domain of “known” things, this knowledge is an enabling constraint that propels us into the threshold space of emerging events not yet known, but inevitably connected to our past and present.
Leveraging a specific TTP, widely known by the community and encoded in a particular framework, does not mean we are performing futile exercises. Rather, it is a tool we use to navigate the liminal space or threshold that helps us uncover unknown patterns.
Leveraging IOCs in the form of threat intel, as a trampoline into indirect evidence of compromise, is not to be regarded as a condemnable approach either. For example, the presence of a particular IOC in the past could allow us to hypothesize that some TTPs used by the same threat actor were never explored. Hunting for these TTPs could lead to the discovery of unknown threats.
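This IOC-to-TTP pivot can be sketched as a simple lookup. Everything below (the IOC, the actor name, the ATT&CK technique IDs and the attribution mapping) is hypothetical placeholder data for illustration, not real threat intel:

```python
# Illustrative sketch: pivoting from a historical IOC sighting to the
# attributed actor's techniques we have not yet hunted for.
# All IOCs, actor names and attributions below are made up.

# Hypothetical intel: actor attributed to an IOC we matched months ago
ACTOR_BY_IOC = {"45.77.12.9": "ACTOR-X"}

# Hypothetical ATT&CK-style technique IDs attributed to each actor
ACTOR_TTPS = {
    "ACTOR-X": {"T1053.005", "T1021.002", "T1047", "T1105"},
}

def hunt_hypotheses(matched_ioc: str, already_hunted: set) -> set:
    """Return the attributed actor's techniques not yet hunted for."""
    actor = ACTOR_BY_IOC.get(matched_ioc)
    if actor is None:
        return set()
    return ACTOR_TTPS[actor] - already_hunted

# We previously hunted scheduled tasks (T1053.005); the rest become
# new hunt hypotheses derived from a purely "known" starting point.
todo = hunt_hypotheses("45.77.12.9", already_hunted={"T1053.005"})
print(sorted(todo))  # ['T1021.002', 'T1047', 'T1105']
```

The IOC itself stays in the domain of the known; it is the unexplored techniques it points to that carry us into the threshold space.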
There is no single “a” or “b” approach to threat hunting; “either/or” and “and” approaches can coexist. For example, leveraging unsupervised machine learning algorithms in a directed way, in order to reveal hidden patterns, is a valid way to explore the threshold space and bring unknown things to light.
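As a minimal sketch of what “directed” unsupervised learning could look like, here is a tiny one-dimensional k-means written in pure Python. The direction comes from the feature choice, a hunch that beaconing shows up as off-hours traffic; the host names and byte counts are fabricated for illustration:

```python
# Minimal 1-D k-means (k=2) over hypothetical off-hours outbound traffic.
# The "directed" part is choosing the feature; the clustering itself is
# unsupervised. All data below is made up for illustration.
import random

def kmeans_1d(values, k=2, iters=20, seed=7):
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # keep old centroid if a cluster ends up empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Hypothetical off-hours outbound MB per host: most are quiet, two are not.
traffic = {"ws01": 2.1, "ws02": 1.8, "ws03": 2.4, "ws04": 310.0,
           "ws05": 1.9, "ws06": 295.5}
centroids = kmeans_1d(list(traffic.values()))
noisy = max(centroids)
suspects = [h for h, v in traffic.items()
            if abs(v - noisy) < abs(v - min(centroids))]
print(suspects)  # hosts in the high-traffic cluster, worth a closer look
```

In practice one would reach for a real library and richer features, but the point stands: a known method probes the data, and the anomalous cluster is what surfaces from the threshold.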
Threat hunting is about setting realistic expectations and understanding that we cannot hunt for everything, everywhere, at the same time. In the same way that the old saying “those who attempt to defend everything, defend nothing” points to the reality of limited resources, we should treat our window of awareness as a limited resource too. This is critical when mapping out our attack surface and deciding how much terrain we want to cover and what telemetry we have available for it.
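Treating the awareness window as a budget can be made concrete with a crude prioritization sketch. The asset names, criticality scores, coverage fractions and the scoring formula below are all hypothetical, chosen only to illustrate the trade-off:

```python
# Illustrative sketch: ranking terrain by (criticality x telemetry coverage)
# so hunts go where it matters AND where we can actually see.
# All assets, scores and the formula itself are hypothetical.

assets = [
    # (name, criticality 1-5, fraction of desired telemetry collected)
    ("domain-controllers", 5, 0.9),
    ("payment-servers",    5, 0.3),
    ("dev-workstations",   2, 0.8),
    ("iot-cameras",        3, 0.1),
]

def hunt_priority(criticality: int, coverage: float) -> float:
    # Low coverage lowers priority: we cannot hunt what we cannot see.
    return criticality * coverage

ranked = sorted(assets, key=lambda a: hunt_priority(a[1], a[2]), reverse=True)
for name, crit, cov in ranked:
    print(f"{name}: {hunt_priority(crit, cov):.1f}")
```

Note how the payment servers, despite maximum criticality, drop below the dev workstations: high-value terrain with poor telemetry may first need a collection effort, not a hunt.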
Reflective methods are also valid. For example, in an attempt to identify the most commonly executed applications in our environment, we collect AppCompatCache and Prefetch data from our entire estate. Using big-data-capable machines, we run Matias Bevilacqua’s AppCompatProcessor to classify and count occurrences of applications, and we process Prefetch en masse with Eric Zimmerman’s Prefetch parser. We would normally use the extracted data to pinpoint suspicious process names.

Through this method, however, we could also end up discovering seemingly benign processes that exhibit exactly the same execution count (the number of times a process has been executed on a system) across multiple systems, as revealed by AppCompatCache and Prefetch. Since it is odd for a particular process to show the same number of executions across multiple systems, we may suspect some sort of centrally controlled scheduling or syncing mechanism. We may have found ourselves a backdoor responding to a C2, or perhaps simply the effects of a script deployed by the endpoint admin team set to run at specific times.

In any case, we arrived at this unknown fact via the exploration of known artifacts like Prefetch and AppCompatCache and an assumption about known TTPs (odd or typo-like process names). These patterns emerged as a result of us probing the telemetry with a known method, which allowed us to briefly pass through the threshold space that connects with the yet unknown aspects of our network.
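The anomaly at the heart of this example, identical execution counts across many hosts, is easy to sketch. The rows below stand in for output already parsed from tools like AppCompatProcessor or a Prefetch parser; the host names, process names and counts are fabricated for illustration:

```python
# Sketch of the reflective method above: flag processes whose run count is
# identical across many systems. Input rows mimic parsed AppCompatCache/
# Prefetch output; all data here is made up for illustration.
from collections import defaultdict

# (hostname, process, run_count) rows
rows = [
    ("ws01", "chrome.exe", 412), ("ws02", "chrome.exe", 98),
    ("ws03", "chrome.exe", 731),
    ("ws01", "updater.exe", 144), ("ws02", "updater.exe", 144),
    ("ws03", "updater.exe", 144), ("ws04", "updater.exe", 144),
]

def identical_count_processes(rows, min_hosts=3):
    """Processes with the exact same run count on >= min_hosts systems."""
    hosts = defaultdict(set)
    for host, proc, count in rows:
        hosts[(proc, count)].add(host)
    return {proc: count for (proc, count), h in hosts.items()
            if len(h) >= min_hosts}

print(identical_count_processes(rows))  # {'updater.exe': 144}
```

Ordinary user-driven software drifts in its counts from host to host; a process in lockstep across the estate is either centrally scheduled or worth a much closer look.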
The inclusion of time, and the disambiguation of the knowledge matrix inherited from military practice, which, like many other concepts, transitioned into the cyber world unquestioned by the industry, is a step forward in the adventure of threat hunting and cyber defense.