
Vulnerability Module

Hallucinated Dependencies: Detection and Remediation Guide

April 10, 2026 · 11 min read · PolyDefender Research Team

AI coding tools sometimes invent package names that do not exist. Attackers register those names to deliver malware. Here is how to detect, prevent, and respond to hallucinated dependency attacks.

Hallucinated dependencies are a genuinely new attack vector enabled by AI coding tools. When a large language model suggests a package to solve an implementation problem, it draws on patterns from its training data. If the training data did not include a clear record of a specific package, the model may invent a plausible-sounding name that does not correspond to any real package, or that once belonged to a legitimate package that has since been removed.

Why This Attack Vector Works

Attackers have learned to monitor for package names that appear frequently in AI-generated code suggestions. By registering these names on npm, PyPI, or other package registries before (or after) they become popular hallucinations, attackers position their malicious packages to be installed by developers who trust their AI tool's recommendation without independent verification.

The attack is insidious because it requires no compromise of the developer's machine or of the AI tool itself. The attacker simply registers a package name that matches what developers are asking for, adds a postinstall script that executes malicious code, and waits. The developer runs npm install on a trusted recommendation, and the malware executes with the developer's full privileges on that machine.
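
The mechanism is easy to see in a minimal package.json. This is a benign sketch: the demo-pkg name and the echo command are placeholders standing in for an attacker-controlled payload.

```shell
# Any package whose package.json declares scripts.postinstall gets that
# command run automatically by `npm install`. A malicious package puts
# its payload here; this demo just echoes a message.
mkdir -p demo-pkg
cat > demo-pkg/package.json <<'EOF'
{
  "name": "demo-pkg",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "echo this ran automatically at install time"
  }
}
EOF

# Surfacing the hook without installing anything:
grep -n '"postinstall"' demo-pkg/package.json
```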

Real-World Hallucinated Dependency Patterns

In PolyDefender's analysis of AI-assisted development sessions, the highest-risk hallucination patterns share common characteristics:

  • Package names that combine two real package names in a plausible way (e.g., "express-supabase-auth" when no such official package exists)
  • Package names that add "ai-", "-ai", "-gpt", or "-llm" prefixes/suffixes to real utility package names
  • Package names that are slight variations of well-known packages with an added feature name (e.g., "lodash-deep-security")
  • Packages in emerging domains where there is not yet a clear canonical library (AI vector stores, model routing, etc.)
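
The naming patterns above can be turned into a crude pre-install lint. The regexes and verdict labels below are illustrative only, not PolyDefender's actual ruleset; tune them for your stack.

```shell
# Heuristic flagger for hallucination-prone package names (sketch).
suspicious_name() {
  name="$1"
  # AI-themed affixes bolted onto a utility name
  if printf '%s' "$name" | grep -Eq '(^ai-|-ai$|-gpt$|-llm$)'; then
    echo "SUSPECT: $name (AI-themed affix on a utility name)"
    return 0
  fi
  # three-or-more-part names that read like two real packages glued together
  if printf '%s' "$name" | grep -Eq '^[a-z]+(-[a-z]+){2,}$'; then
    echo "REVIEW: $name (compound name; verify it actually exists)"
    return 0
  fi
  echo "no pattern match: $name"
}

suspicious_name "express-supabase-auth"   # REVIEW: compound name
suspicious_name "lodash-ai"               # SUSPECT: AI-themed affix
suspicious_name "lodash"                  # no pattern match
```

A "no pattern match" result is not a clean bill of health; it only means the name does not fit these two shapes, so the Step 1 registry checks below still apply.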

Step 1: Verify Every Package Before Installing

The single most effective control is to verify package existence and legitimacy before running npm install. This takes less than 60 seconds per package and eliminates the majority of hallucination attacks.

  • Search the package name on npmjs.com (or pypi.org for Python)
  • Check the publication date — packages first published within the last 30-60 days with no download history are high risk
  • Check the weekly download count — legitimate packages for popular use cases typically have thousands or millions of weekly downloads
  • Review the maintainer profile — single-maintainer packages with no other published packages are higher risk than packages from well-established maintainers
  • Check the package's GitHub repository (if listed) — no README, no issues, no stars, and a recent creation date are all risk signals
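
Scripted, the first three checks look roughly like this. The classify helper encodes the publication-date and download-count signals; the 1,000-downloads-per-week cutoff is an illustrative threshold, not a hard rule, and the curl endpoints in the comments are npm's public registry and download-count APIs.

```shell
# classify <first-published-date> <weekly-downloads>: prints a verdict
# from the two strongest signals in the checklist above.
classify() {
  created="$1"
  downloads="${2:-0}"
  if [ -z "$created" ]; then
    echo "RISK: package not found on the registry (likely hallucinated)"
  elif [ "$downloads" -lt 1000 ]; then   # illustrative cutoff
    echo "CAUTION: first published $created, only $downloads weekly downloads"
  else
    echo "OK: first published $created, $downloads weekly downloads"
  fi
}

# The inputs come from npm's public APIs, e.g.:
#   curl -s https://registry.npmjs.org/<name>    -> "time": { "created": ... }
#   curl -s https://api.npmjs.org/downloads/point/last-week/<name>
classify "" ""                       # name does not resolve at all
classify "2026-03-29T10:00:00Z" 12   # brand-new, near-zero adoption
```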

Step 2: Inspect postinstall Scripts

npm packages can include a scripts.postinstall field in package.json that executes code automatically when the package is installed. This is used legitimately by some packages (native module compilation, etc.) but is almost never needed by pure JavaScript utility packages.

  • Before installing any AI-suggested package, run: npm pack <package-name> to download it without installing, then inspect its package.json for a postinstall script
  • If a package has a postinstall script, read its contents before proceeding. Legitimate postinstall scripts typically compile native code, download prebuilt binaries, or run build steps; treat network requests to unfamiliar hosts or reads of unrelated environment variables as red flags.
  • In your CI pipeline, add a check that fails if any new package in package.json has a postinstall script that was not explicitly reviewed
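
In practice the npm pack workflow from the first bullet looks like this. To keep the sketch runnable offline, it fabricates a tarball with the same layout npm pack produces (a top-level package/ directory); the inspection command is identical for a real downloaded tarball.

```shell
# `npm pack <name>` writes <name>-<version>.tgz to the current directory
# without running any install scripts. We fabricate an equivalent
# tarball here so the inspection step is runnable offline.
mkdir -p package
cat > package/package.json <<'EOF'
{
  "name": "sample",
  "version": "1.0.0",
  "scripts": { "postinstall": "node ./setup.js" }
}
EOF
tar -czf sample-1.0.0.tgz package

# Inspect package.json straight out of the tarball, no install needed:
tar -xzOf sample-1.0.0.tgz package/package.json | grep '"postinstall"'
```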

Step 3: Lock Your Dependency Versions

A lockfile (package-lock.json for npm, yarn.lock for Yarn, pnpm-lock.yaml for pnpm) pins the exact version and hash of every package in your dependency tree. This prevents an attacker from publishing a compromised version of a legitimate package that would be automatically installed on the next npm install.

  • Commit your lockfile to version control and never add it to .gitignore
  • Use npm ci (not npm install) in CI environments: it installs exactly what the lockfile specifies, verifies each package's integrity hash, and fails if package.json and the lockfile disagree
  • Require human review of lockfile changes in pull requests — unexpected lockfile changes may indicate a dependency was added or modified without explicit intent
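
The lockfile hygiene rules above can be enforced with a small CI gate. This is a sketch: it only checks that the lockfile exists and is not gitignored; a real pipeline would also diff the lockfile against the base branch.

```shell
# Fail the build when the npm lockfile is missing or gitignored.
lockfile_guard() {
  dir="$1"
  if [ ! -f "$dir/package-lock.json" ]; then
    echo "FAIL: no package-lock.json committed"
    return 1
  fi
  if grep -qx 'package-lock.json' "$dir/.gitignore" 2>/dev/null; then
    echo "FAIL: package-lock.json is listed in .gitignore"
    return 1
  fi
  echo "OK: lockfile present and tracked"
}

# Demo against a scratch project directory:
mkdir -p lockdemo
: > lockdemo/package-lock.json
lockfile_guard lockdemo
```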

Step 4: Add Automated Dependency Scanning to CI

Manual review catches many issues but does not scale. Add automated dependency scanning to your CI pipeline:

  • PolyDefender's scan checks new dependency additions for publication date, download count, and postinstall script presence, surfacing high-risk packages before deployment
  • GitHub's Dependabot alerts you to known-malicious packages from the GitHub Advisory Database
  • Socket.dev provides supply chain analysis that specifically looks for dependency confusion attacks and recently published packages with suspicious characteristics
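
Until one of these scanners is wired in, a crude offline baseline is to enumerate every installed package that declares an install hook. The node_modules contents below are fabricated for the demo; run the final grep from your real project root after an install.

```shell
# List every installed package.json that declares a postinstall hook.
# Fabricated node_modules so the sketch runs offline:
mkdir -p node_modules/innocent node_modules/sketchy
printf '{ "name": "innocent", "version": "1.0.0" }\n' \
  > node_modules/innocent/package.json
printf '{ "name": "sketchy", "version": "0.0.1", "scripts": { "postinstall": "node x.js" } }\n' \
  > node_modules/sketchy/package.json

# The actual scan (GNU/BSD grep; every hit deserves a human look):
grep -rl '"postinstall"' --include=package.json node_modules
```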

Responding to a Confirmed Hallucinated Dependency

If you discover that a package you installed was malicious:

  • Remove the package immediately: npm uninstall <package-name>
  • Assume your development machine is compromised — credentials stored in environment variables, .env files, or the system keychain may have been exfiltrated
  • Rotate all credentials that were present on the machine during or after the installation
  • Review your git history for unauthorized commits pushed from the compromised machine
  • Check your deployment platform for unauthorized environment variable changes or new deployments
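
A read-only starting point for the git-history and credential-inventory steps above. The 14-day lookback is an arbitrary example window; widen it to cover when the package was installed.

```shell
# Post-incident triage sketch: recent commits from this machine, and the
# names (never the values) of environment variables that any postinstall
# script could have read.
git log --since='14 days ago' --format='%h %ad %an %s' --date=short \
  2>/dev/null || echo "(not inside a git repository)"

env | cut -d= -f1 | sort
```

Any variable name in that list that maps to a credential belongs on the rotation list, whether or not you can prove it was exfiltrated.
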