# True Positive vs. False Positive

*Category: Security Concepts*

## Definition
In security operations, every alert your tools generate falls into one of four categories, based on whether the detection was correct and whether the threat was real. True positives and false positives are the two you deal with most often.

- **True Positive (TP)** — The alert fired, and the threat is real. Something genuinely malicious or suspicious happened, and your detection caught it.
- **False Positive (FP)** — The alert fired, but nothing malicious actually happened. A legitimate action triggered a detection rule that wasn't specific enough to tell the difference.

The other two exist too:

- **True Negative (TN)** — No alert fired, and nothing malicious happened. The system correctly identified normal activity as normal.
- **False Negative (FN)** — No alert fired, but something malicious did happen. The threat existed and your detection missed it entirely. This is the one that keeps analysts up at night.

Every alert a SOC analyst touches is either real work or wasted work. High false positive rates burn analyst time, erode trust in the detection platform, and, most dangerously, create alert fatigue. When analysts are conditioned to expect false positives, they start moving faster and checking less. That's exactly when a real threat slips through.
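The four outcomes reduce to a two-by-two lookup on "did the alert fire?" and "was the threat real?". A minimal Python sketch (the function name and labels are illustrative, not from any specific tool):

```python
def classify_alert(alert_fired: bool, threat_real: bool) -> str:
    """Map a (detection, ground truth) pair to one of the four outcome labels."""
    if alert_fired and threat_real:
        return "true positive"    # detection caught a real threat
    if alert_fired and not threat_real:
        return "false positive"   # benign activity tripped the rule
    if not alert_fired and threat_real:
        return "false negative"   # real threat, no alert: the dangerous one
    return "true negative"        # normal activity, correctly ignored

# An alert that fired on genuinely malicious activity:
print(classify_alert(alert_fired=True, threat_real=True))  # true positive
```

Note that only the first two outcomes ever show up in an analyst's queue; false negatives are invisible by definition, which is why they are the hardest to measure.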
### The Four Outcomes at a Glance

|                 | Threat is Real | No Threat      |
|-----------------|----------------|----------------|
| **Alert Fired** | True Positive  | False Positive |
| **No Alert**    | False Negative | True Negative  |

Common reasons for false positives:

- Detection logic written without accounting for normal admin behavior
- Rules ported from threat intel without environment-specific tuning
- Legitimate tools that share behavior with attacker tools (PsExec, PowerShell, certutil)
- Scheduled tasks or scripts that mimic attack patterns at predictable times

Common reasons for false negatives:

- Attackers using techniques your rules don't cover
- Missing log sources — if the data isn't there, the rule can't fire
- Detection logic that is too narrow — it only catches exact known-bad signatures and misses variations
- Attackers deliberately operating below detection thresholds
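Counting these outcomes over a batch of triaged alerts is how teams quantify detection quality. A minimal sketch of two standard metrics, precision (what fraction of alerts were real) and recall (what fraction of real threats were caught); the function name and label strings are illustrative:

```python
from collections import Counter

def detection_metrics(outcomes: list[str]) -> dict[str, float]:
    """Compute precision and recall from a list of triage outcome
    labels: "TP", "FP", "FN", or "TN"."""
    c = Counter(outcomes)
    tp, fp, fn = c["TP"], c["FP"], c["FN"]
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of alerts that were real
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of real threats caught
    return {"precision": precision, "recall": recall}

# A rough month of triage: 3 real alerts, 9 false alarms, 1 missed threat.
m = detection_metrics(["TP"] * 3 + ["FP"] * 9 + ["FN"])
print(m)  # {'precision': 0.25, 'recall': 0.75}
```

Low precision is the alert-fatigue problem described above; low recall is the false-negative problem. Tuning usually trades one against the other, which is why both get tracked.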
## Examples & Use Cases

**True Positive — Credential dumping caught** — An alert fires on LSASS memory access from an unexpected process. Investigation confirms `procdump.exe` was used to dump credentials. The detection worked; the alert was real.

**False Positive — Admin tool triggering malware rule** — A detection rule for lateral movement fires every morning at 9 a.m. Investigation shows the IT team runs a legitimate PsExec-based deployment script on a schedule. The behavior is real, but the threat is not. The rule needs scoping to exclude that source account and timeframe.

### Further Reading

- [MITRE ATT&CK — Detection Guidance](https://attack.mitre.org/resources/getting-started/)
- [SANS — Reducing False Positives in SIEM](https://www.sans.org/blog/reducing-false-positives/)
- [Elastic — Detection Rule Tuning](https://www.elastic.co/guide/en/security/current/rules-ui-management.html)
- [Red Canary — Detection Quality Metrics](https://redcanary.com/blog/detection-quality/)