When applied to security alerts, the precision-recall curve describes a balancing act: allow broader visibility and accept a higher probability of false positives, or tune for fewer alerts with a higher probability of true positives and accept a greater risk of missing early indicators (false negatives).
This curve creates an ongoing dilemma for security teams. Many teams operate with an "acceptable" false-positive rate (15%? 20%?), but a single rate does not account for the nuances across alert types, threat activity types, and threat stages. Whether the goal is data exfiltration, ransomware, or some other outcome, an adversary does not reach it in a single step. Many early indicators of a threat are also prone to false positives, so they either receive a low priority score (and go uninvestigated) or get tuned out completely. Using a blanket percentage for false-positive tolerance creates the worst of both worlds: blind spots to early threat indicators, plus alert volume beyond the team's ability to triage.
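The tradeoff above can be made concrete with a small sketch. The alert scores and ground-truth labels here are hypothetical illustrations (not Spyderbat data): sweeping a single score threshold shows how precision and recall pull against each other, which is exactly why one blanket false-positive percentage cannot fit every alert type.

```python
# Illustrative sketch with hypothetical scores/labels: sweeping an
# alert-score threshold shows the precision-recall tradeoff.

def precision_recall(scores, labels, threshold):
    """Compute precision and recall for alerts scoring at/above threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical alert scores and ground-truth labels (True = real threat)
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
labels = [True, True, False, True, False, True, False, False, True, False]

for t in (0.25, 0.55, 0.85):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

A low threshold catches most real threats but buries analysts in false positives; a high threshold yields clean alerts but misses early indicators — the dilemma described above.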
Connecting system-level activities by their causal relationships both suppresses false positives and groups related security alerts together. This reverses the incentive to suppress alerts with a higher probability of false positives. While these indicators do not warrant investigation when seen in isolation, collectively they act as 'flags' along the attack trace, spotlighting clusters of activities that tell the 'story' of the attack.
Security investigations struggle to connect disparate alerts. Rudimentary aggregation methods (e.g. combining alerts from the same host or the same user) miss the point: they create false positives of their own, and they create false negatives when related alerts fall outside their time window. For example, the malware used in the SolarWinds breach waited a random period of 10 to 14 days before attempting lateral movement. This waiting period foils rudimentary aggregation methods and makes it difficult, if not impossible, to manually recognize the relationship between these activities.
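The failure mode above can be sketched in a few lines. The alert records and field names here are hypothetical (not Spyderbat's data model): one function aggregates by host within a fixed time window, the other follows explicit causal links (e.g. process ancestry) using a simple union-find. A SolarWinds-style 12-day dwell time breaks the window-based grouping but not the causal one.

```python
# A minimal sketch with hypothetical alert records (not Spyderbat's data
# model): causal grouping keeps related activity together even when a long
# dwell time breaks any fixed time-window aggregation.

from collections import defaultdict

# Hypothetical alerts: (id, host, timestamp in days, causal parent id or None)
alerts = [
    ("a1", "build-server", 0,  None),  # initial implant
    ("a2", "build-server", 12, "a1"),  # lateral movement after ~12-day wait
    ("a3", "workstation",  12, "a2"),  # follow-on activity on another host
    ("a4", "workstation",  5,  None),  # unrelated noise
]

def window_groups(alerts, window_days=1):
    """Naive aggregation: same host AND within a fixed time window."""
    groups = []
    for aid, host, ts, _ in sorted(alerts, key=lambda a: (a[1], a[2])):
        for g in groups:
            if g["host"] == host and ts - g["last_ts"] <= window_days:
                g["ids"].append(aid)
                g["last_ts"] = ts
                break
        else:
            groups.append({"host": host, "last_ts": ts, "ids": [aid]})
    return [g["ids"] for g in groups]

def causal_groups(alerts):
    """Group alerts by following causal links, regardless of host or time."""
    parent = {aid: aid for aid, *_ in alerts}  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for aid, _, _, cause in alerts:
        if cause is not None:
            parent[find(aid)] = find(cause)  # union alert with its cause
    clusters = defaultdict(list)
    for aid, *_ in alerts:
        clusters[find(aid)].append(aid)
    return list(clusters.values())

print(window_groups(alerts))  # the 12-day gap splits related alerts apart
print(causal_groups(alerts))  # causal links keep a1 -> a2 -> a3 in one trace
```

The window-based grouping scatters the a1/a2/a3 chain into separate buckets, while the causal grouping recovers the full trace and isolates the unrelated noise alert.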
Spyderbat removes the ambiguity and doubt by establishing causal relationships between alerts, even if separated by systems, users, or long periods of time.
Spyderbat optimizes the precision-recall curve.
Suppresses false positives by recognizing alerts with no follow-on outcomes, allowing analysts to dismiss or ignore isolated alerts.
Reduces the risk of false negatives by connecting potential indicators with follow-up activity.
This allows security teams to:
Tolerate noise without adding to analysts' workload.
Increase alerting on early threat indicators, since attack traces connect alerts based on causality to detect threats earlier in their lifecycle.