I seem, then, in just this little thing to be wiser than this man at any rate, in that what I do not know I do not think I know either
I only know one thing: that I know nothing
In the same way that we hunt for cyber threats, we should strive to hunt for our own biases, which are also threats. These biases are usually encouraged and sedimented by the cyber industry. When left unquestioned, they can distort our understanding of the world and our perception of our own and others' opinions, acts and decisions. The danger lies in relying on unconsciously learned behaviours that preclude us from thinking and acting differently. Above all, they condition our descriptive self-awareness, that is, what we think we are doing as professionals.
I want to talk today about one of those biases when it comes to cyber threat hunting: the idea that threat hunting is about finding the “unknown unknowns”.
The origin of the “known unknowns” and “unknown unknowns” epistemological categories (i.e. categories concerned with how we know what we know) is usually attributed to Donald Rumsfeld, US Secretary of Defense from 2001 to 2006. During a Department of Defense news briefing in February 2002, regarding actionable intelligence, he stated:
The message is that there are known knowns. There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.
This can be represented in the following matrix:
However, the attribution of this model to Rumsfeld is simply wrong. Some basic research reveals that the US military had been using these terms well before Rumsfeld’s public appearance (see [1]). The different pairs of epistemological categories can even be traced back to the Defense Acquisition Acronyms and Terms manual dating from September 1991 [2], which defines “unknown-unknowns” as:
Future situation impossible to plan, predict, or even know what to look for.
Whether these categories migrated to the military world from the realm of academic risk and knowledge management research or the other way around, they are now widely referenced in many cyber discussions regarding defensive ops. Let’s explore their implications in the threat hunting domain.
It is common lore in the cyber world that traditional SOC approaches based on alerts use known IOCs (indicators of compromise) and detection rules to drive their response to cyber threats. This process is largely categorized as reactive: our response depends on certain logical conditions (detection rules) triggering, while the security analyst’s role is to wait until an alert fires and then triage it. Since IOCs and detection rules are the result of previous research, we are attempting to detect “known” occurrences, iterations of things (perhaps with slight variations) previously seen. We are in the world of known knowns.
Things we know that we know. We deploy a detection rule for Microsoft Office apps (like Word) spawning PowerShell processes, a technique which is well understood by now (T1204.002) and is mostly regarded as typical malware activity: our only task is to wait until this “trap” triggers. We know this technique is in use in the wild, we know what we are looking for, we can code it in a rule for our preferred SIEM platform, and we know what this activity usually indicates. We navigate the world of repeatable and clear processes. In cyber operational terms, we are in the area of the SOC and first-level alert triage. But what happens when a new threat emerges, presenting a whole new range of IOCs so far unmonitored in our organization?
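The detection logic behind such a rule can be sketched in a few lines. This is a minimal illustration, not a rule for any specific SIEM: the event field names (`parent_image`, `image`) and process lists are illustrative assumptions, loosely modelled on process-creation telemetry.

```python
# Hypothetical process-creation events: which Office parents and script
# interpreters to pair up is the analyst's choice; these sets are examples.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SUSPICIOUS_CHILDREN = {"powershell.exe", "pwsh.exe", "cmd.exe"}

def matches_t1204_002(event: dict) -> bool:
    """Return True if an Office app spawned a scripting shell (T1204.002-style)."""
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    child = event.get("image", "").lower().rsplit("\\", 1)[-1]
    return parent in OFFICE_PARENTS and child in SUSPICIOUS_CHILDREN

events = [
    {"parent_image": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
     "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"},
    {"parent_image": r"C:\Windows\explorer.exe",
     "image": r"C:\Windows\System32\notepad.exe"},
]

alerts = [e for e in events if matches_t1204_002(e)]
print(len(alerts))  # only the Word -> PowerShell event fires
```

The point is the shape of the work: the condition is fixed in advance, and the analyst only acts once it fires.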
When we encounter new threats which the community has already researched (or is starting to), we need to determine whether our organization has been affected by them. This requires a shift from the typical approach described above (known knowns) and a transition into the domain of known unknowns. We know what a particular emerging threat is doing (it can be anything: a new strain of malware, a new vulnerability, a new C2 domain, etc.), but we don’t know whether our own infrastructure is affected by it. We don’t know how pervasive this threat is, or what the dwell time of this threat (and the threat actor behind it) within our networks has been.
In the world of cyber, these threats take the shape of threat intel reports and other briefings which usually trigger a response on the defensive side. This response is mainly based on IOC sweeps and selective deep dives when specific results are found. We leverage any known IOC to scan our environment in an attempt to ascertain whether there is evidence of past compromise. If any evidence is found, we deploy our cyber detective team to perform a deeper investigation and activate any required security controls to contain and eradicate the threat.
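An IOC sweep of this kind reduces, at its core, to matching fresh intel against existing telemetry. A minimal sketch, assuming the IOCs and log records are already collected as plain Python data (the field names, hash and domain values are illustrative, not taken from any real intel report):

```python
# Hypothetical IOCs extracted from an intel report (illustrative values).
known_iocs = {
    "hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
    "domains": {"evil-c2.example.com"},
}

def sweep(records):
    """Yield (record, match_kind) for records matching a known IOC."""
    for rec in records:
        if rec.get("md5") in known_iocs["hashes"]:
            yield rec, "hash"
        elif rec.get("domain") in known_iocs["domains"]:
            yield rec, "domain"

# Hypothetical telemetry: file hashes seen on hosts, domains contacted.
records = [
    {"host": "ws-014", "md5": "d41d8cd98f00b204e9800998ecf8427e"},
    {"host": "ws-022", "domain": "intranet.example.com"},
    {"host": "srv-003", "domain": "evil-c2.example.com"},
]

hits = list(sweep(records))
for rec, kind in hits:
    print(f"{rec['host']}: matched known IOC ({kind})")
```

Any hit here is only the trigger for the deeper investigation described above: the sweep tells us *where* to look, not what happened.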
In cyber operational terms we are in the area of Incident Response and higher complexity analysis usually performed by seasoned professionals and senior members of the team (whether consultants or in-house, the same principle applies).
So far, we’ve dealt with threats which can be brought into the ordered realm of known things one way or another. But is there a cyber domain that deals with threats which are not yet known and cannot be anticipated? What if a threat actor has been exploiting a zero-day, as yet unknown to the world, gaining a foothold inside your network and lurking, undetected, for weeks or months? How do you surface such a threat?
According to the classification matrix introduced above, we are now situated in the domain of “unknown unknowns”: the domain of occurrences that are impossible to anticipate, no matter how thorough our risk analysis, rule deployment and AV technology are. The current state of the art in cyber security calls this the realm of threat hunting. Hunting is regarded as the art of seeking threats that are not yet known. In Part 2 of this article, we will evaluate whether these claims are true, and whether “unknown unknowns” is an accurate or a misleading concept for defining threat hunting activity.
But before we finish, let us overlay the operational cyber security model over the knowledge matrix:
Lt. Gen. William Donahue, “Achieving Success in the Y2K Battle,” Air Force Printed News, Oct. 13, 1998. Cited in Safe or Sorry: The “Y2K Problem” and Nuclear Weapons ↩︎