MITRE ATLAS
Cybersecurity Framework
Definition
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against AI-enabled systems, based on real-world attack observations and realistic demonstrations from AI red teams and security groups. ATLAS is modeled after, and complementary to, MITRE ATT&CK, and raises awareness of the rapidly evolving vulnerabilities of AI-enabled systems as they extend beyond traditional cyber threats. ATLAS also complements rather than competes with the OWASP LLM Top 10 and the NIST AI RMF: use all three for comprehensive coverage.

Understanding the distinction between ATLAS and ATT&CK helps security teams determine when to apply each framework.

Table: Comparison of MITRE ATT&CK and MITRE ATLAS frameworks

| Aspect | MITRE ATT&CK | MITRE ATLAS |
| --- | --- | --- |
| Primary focus | Traditional IT/OT adversary behaviors | AI/ML-specific adversary behaviors |
| Tactic count | 14 tactics (Enterprise) | 16 tactics (14 inherited + 2 AI-specific) |
| Technique count | 196+ techniques | 84 techniques |
| Unique tactics | None | ML Model Access, ML Attack Staging |
| Target systems | Endpoints, networks, cloud | ML models, training pipelines, LLMs |
| Case studies | Group and software profiles | 42 AI-specific incident analyses |
| Best for | Endpoint/network threat modeling | AI system threat modeling |

ATLAS inherits 14 tactics from ATT&CK, including Reconnaissance, Initial Access, Execution, and Exfiltration, but applies them specifically to AI contexts. The two AI-specific tactics unique to ATLAS are:

- ML Model Access (AML.TA0004): describes how adversaries gain access to target ML models through inference APIs or direct artifact access
- ML Attack Staging (AML.TA0012): covers how adversaries prepare attacks targeting ML models, including training data poisoning and backdoor insertion
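The tactic structure described above can be modeled programmatically, for example when tagging detections with framework metadata. The sketch below is illustrative only (the `Tactic` record and helper are hypothetical, not an official MITRE ATLAS library); the two tactic IDs come from the text above:

```python
# Hypothetical sketch: modeling ATLAS tactics as simple records.
# Only the two AI-specific tactics named above are filled in; the
# inherited ATT&CK tactics would be listed with ai_specific=False.
from dataclasses import dataclass


@dataclass(frozen=True)
class Tactic:
    tactic_id: str    # e.g. "AML.TA0004"
    name: str
    ai_specific: bool  # True for tactics unique to ATLAS


TACTICS = [
    Tactic("AML.TA0004", "ML Model Access", ai_specific=True),
    Tactic("AML.TA0012", "ML Attack Staging", ai_specific=True),
    # ... inherited tactics (Reconnaissance, Initial Access, Execution,
    # Exfiltration, etc.) would go here with ai_specific=False
]


def ai_specific_tactics(tactics):
    """Return only the tactics unique to ATLAS (not inherited from ATT&CK)."""
    return [t for t in tactics if t.ai_specific]


print([t.name for t in ai_specific_tactics(TACTICS)])
# prints ['ML Model Access', 'ML Attack Staging']
```

A structure like this makes it easy to report, per incident, which observed behaviors fall under AI-specific tactics versus inherited ones.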
Related Terms
MITRE ATT&CK
Examples & Use Cases
Reconnaissance through Initial Access (AML.TA0001-AML.TA0003)

The attack lifecycle begins with reconnaissance, where adversaries gather information about target ML systems. Key techniques include:

- Discover ML Artifacts: adversaries search public repositories, documentation, and APIs to understand model architectures and training data
- ML Supply Chain Compromise: attackers insert malicious code or data into ML pipelines via compromised dependencies, models, or datasets
- Prompt Injection (AML.T0051): adversaries craft malicious inputs to manipulate LLM behavior; this maps to OWASP LLM01

ML Model Access and Execution (AML.TA0004-AML.TA0005)

These AI-specific tactics describe how adversaries interact with and exploit ML models:

- Inference API Access: gaining access to model prediction interfaces enables reconnaissance and attack staging
- LLM Plugin Compromise: exploiting vulnerable plugins extends attacker capabilities within AI systems

Persistence through Defense Evasion (AML.TA0006-AML.TA0008)

Threat actors maintain access and avoid detection through:

- Modify AI Agent Configuration (October 2025 addition): attackers alter agent settings to maintain persistence
- Adversarial Perturbation: crafting inputs that cause models to misclassify while appearing normal to humans

Collection through Impact (AML.TA0009-AML.TA0014)

Later-stage tactics focus on achieving adversary objectives:

- RAG Database Retrieval: extracting sensitive information from retrieval-augmented generation systems, a critical data exfiltration vector
- Poison Training Data (AML.T0020): corrupting training data to manipulate model behavior
- Exfiltration via AI Agent Tool Invocation (October 2025 addition): leveraging agent tool access to extract data

Understanding how attackers progress through these tactics, including lateral movement between them, helps security teams track an intrusion across the full lifecycle.
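A team triaging alerts might bucket observed tactic IDs into the four lifecycle stages above. The following sketch is a hypothetical helper, assuming the AML.TA ranges quoted in this section; it is not an official ATLAS tool, and relies only on the fact that the IDs are zero-padded so lexicographic comparison matches numeric order:

```python
# Hypothetical sketch: bucketing ATLAS tactic IDs into the lifecycle
# stages described above. Stage boundaries mirror the AML.TA ranges
# quoted in the text and are illustrative only.

STAGES = {
    "Reconnaissance through Initial Access": ("AML.TA0001", "AML.TA0003"),
    "ML Model Access and Execution": ("AML.TA0004", "AML.TA0005"),
    "Persistence through Defense Evasion": ("AML.TA0006", "AML.TA0008"),
    "Collection through Impact": ("AML.TA0009", "AML.TA0014"),
}


def stage_for(tactic_id: str) -> str:
    """Return the lifecycle stage containing a tactic ID, or 'Unknown'."""
    for stage, (lo, hi) in STAGES.items():
        # IDs are zero-padded, so plain string comparison preserves order
        if lo <= tactic_id <= hi:
            return stage
    return "Unknown"


print(stage_for("AML.TA0002"))   # prints Reconnaissance through Initial Access
print(stage_for("AML.TA0012"))   # prints Collection through Impact
```

Grouping alerts by stage this way gives a quick view of how far an attacker has progressed through the lifecycle.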