LIVE · THREAT INTELLIGENCE CENTER

AI Security Threat Monitor

Real-time surveillance of AI exploits, prompt injection attacks, supply-chain compromises, and vulnerabilities in vibe-coded apps — pulled from 9 live intelligence sources and explained in plain language so any builder knows what to do.

SourcesSecurity SignalsHacker NewsReddit SecurityCISA KEVNVD CVE 2.0OSV.devGitHub AdvisoriesSecurity RSSVBS Registry
0 items
MITRE ATLAS — AI Adversary Tactics
Real-world AI/ML attack techniques mapped from threat intelligence
ML Attack StagingAML.TA0001
  • Acquire public ML artifacts
  • Obtain capabilities via supply chain
  • Stage malicious model weights
ML Model AccessAML.TA0002
  • Inference API access
  • Physical environment access
  • ML model file access
ReconnaissanceAML.TA0000
  • Search for victim AI assets
  • Discover ML model outputs
  • Identify data dependencies
Exfiltration via AIAML.TA0010
  • Model inversion attack
  • Membership inference
  • Training data reconstruction
Prompt InjectionAML.T0051
  • Direct prompt injection
  • Indirect via retrieved data
  • Jailbreak via role-play
Adversarial EvasionAML.T0015
  • Craft adversarial examples
  • Bypass content filters
  • Perturbation attacks
Supply Chain CompromiseAML.T0010
  • Poison training data
  • Backdoor ML model
  • Compromise model repository
Erode ML Model IntegrityAML.TA0005
  • Inject backdoor
  • Introduce bias
  • Corrupt fine-tuning pipeline
Found a new AI vulnerability pattern?

Submit for review. Verified entries receive a VBS advisory ID and researcher credit.