March 27, 2026 · Vinay Kumar · Supply Chain Security

The LiteLLM Supply Chain Attack Explained: What Happened, Who's Affected, and What to Do Now


TL;DR: On the morning of March 24, 2026, two malicious versions of LiteLLM — one of the most widely used Python packages for building AI applications — were published to PyPI. The packages were live for just under three hours. During that window, anyone who ran pip install litellm without a pinned version may have installed a credential stealer that targeted AWS keys, Kubernetes tokens, SSH keys, cloud credentials, CI/CD secrets, and database passwords. If you use LiteLLM, check your version right now. If you installed between 10:39 UTC and 13:25 UTC on March 24, treat your environment as compromised.


"Just a Library Update" — Until It Wasn't

Picture a typical Monday morning. A developer on your team runs a routine pip install --upgrade litellm in a CI/CD pipeline. The upgrade goes smoothly. No errors. The pipeline passes. Everything looks fine.

What they didn't know: for a 166-minute window earlier that day, two versions of LiteLLM on PyPI contained a hidden, multi-stage credential stealer. The moment those packages executed, they began silently harvesting every secret they could find on the host — and exfiltrating them, encrypted, to an attacker-controlled server.

This is a supply chain attack. Not your code. Not your team. A package you trusted, from a project you rely on, turned into a weapon.

And this one hit one of the most sensitive packages imaginable.


Why LiteLLM Was the Perfect Target

If you build AI-powered applications, there's a good chance LiteLLM is somewhere in your stack. It's an open-source Python library that acts as a universal interface for over 100 large language model providers — OpenAI, Anthropic, Google Gemini, and more — translating API calls into a standard format. It's the plumbing of the modern AI application layer.

That position in the stack is exactly what made it valuable to attackers. According to Wiz's research, LiteLLM is present in 36% of cloud environments. It processes API keys, environment variables, and credentials as part of its normal function. Compromise the library, and you're inside the environment of every developer and company that uses it — with direct access to the most sensitive configuration data they have.

The package receives approximately 3 million downloads per day. Even two and a half hours of exposure represents an enormous potential blast radius.


What Actually Happened

The threat group behind this attack — identified by Wiz and Sonatype as TeamPCP, suspected to have links to the LAPSUS$ group — had already compromised Aqua Security's Trivy security scanner the day before. During that compromise, they obtained an API token belonging to a LiteLLM maintainer's PyPI account.

They used that token to bypass LiteLLM's official CI/CD pipeline entirely and publish two malicious packages directly to PyPI:

v1.82.7 — published at approximately 10:39 UTC. The malicious payload was embedded in proxy_server.py. It executed whenever litellm --proxy was run or when litellm.proxy.proxy_server was imported.

v1.82.8 — a more dangerous escalation published shortly after. In addition to the proxy_server.py payload, it introduced a file called litellm_init.pth — exploiting Python's .pth mechanism, which allows arbitrary code to execute during interpreter startup. This meant the malware ran whenever Python was invoked on the system — regardless of whether LiteLLM was explicitly imported. This made it significantly harder to detect and dramatically more persistent.
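The .pth mechanism is easy to demonstrate: any line in a .pth file that begins with `import` is executed as code when the interpreter processes its site directories at startup. A minimal, benign sketch, using site.addsitedir on a throwaway directory to simulate what startup does for real site directories:

```python
import os
import site
import tempfile

# Create a throwaway "site directory" and drop a .pth file into it.
sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "demo_init.pth"), "w") as f:
    # In a .pth file, any line starting with "import" is executed as
    # code -- this is the hook litellm_init.pth abused. Benign stand-in:
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

# The interpreter does this automatically at startup for real site dirs.
site.addsitedir(sitedir)
print(os.environ.get("PTH_DEMO"))  # prints "executed"
```

Because the hook fires at interpreter startup, any Python invocation on the host, not just an explicit LiteLLM import, was enough to trigger the v1.82.8 payload.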

PyPI quarantined both packages at approximately 13:25 UTC. The official LiteLLM security update was published the same day, and the project's incident report confirmed the compromised versions, the attack vector, and the immediate steps taken.


What the Malware Did Once Installed

The payload operated in three stages, each more aggressive than the last.

The first stage launched data collection and began exfiltrating immediately. The second stage performed deep reconnaissance across the host — enumerating system details and searching for: environment variables and API keys, SSH keys and configurations, cloud provider credentials (AWS, GCP, Azure), Kubernetes configuration files and service account tokens, CI/CD pipeline secrets, Terraform and Helm configurations, Docker configs, database credentials, and cryptocurrency wallet data.
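To gauge what a stealer with this reach could have taken from a given host, it helps to inventory which of the commonly targeted credential files actually exist there. A rough sketch — the path list is an illustrative subset of the categories above, not Wiz's IoC list:

```python
import pathlib

# Illustrative subset of credential locations in the categories reported
# above; extend this with paths specific to your own environment.
CANDIDATES = [
    "~/.aws/credentials",
    "~/.ssh/id_rsa",
    "~/.ssh/id_ed25519",
    "~/.kube/config",
    "~/.docker/config.json",
    "~/.config/gcloud/application_default_credentials.json",
]

present = [p for c in CANDIDATES
           if (p := pathlib.Path(c).expanduser()).exists()]
for p in present:
    print("present -- assume exfiltrated if a bad version ran here:", p)
```

Anything this prints on an affected host belongs on your rotation list.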

In some cases, the malware actively used discovered credentials — querying AWS APIs and accessing Kubernetes secrets — rather than simply collecting them.

All harvested data was encrypted with AES-256-CBC under a randomly generated session key; that session key was in turn encrypted with a hard-coded RSA public key embedded in the malware. The resulting package was exfiltrated to attacker-controlled domains: checkmarx[.]zone (v1.82.7) and models[.]litellm[.]cloud (v1.82.8). Neither domain is affiliated with LiteLLM.

The third stage dropped a persistent Python script (sysmon.py) configured to run as a system service, polling the attacker's server every 50 minutes for new payloads — meaning even after the initial infection is cleaned up, compromised systems may continue to receive attacker instructions until the persistence mechanism is removed.
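A quick way to look for this persistence mechanism is to search common Linux service and cron locations for references to the dropped script name. A sketch, assuming the reported filename sysmon.py; the directories checked are typical defaults, not confirmed IoC locations:

```python
import pathlib

# Typical Linux service/cron definition directories (assumption: the
# persistence was registered in one of these; adjust for your distro).
SERVICE_DIRS = ["/etc/systemd/system", "/usr/lib/systemd/system", "/etc/cron.d"]

suspects = []
for d in SERVICE_DIRS:
    base = pathlib.Path(d)
    if not base.is_dir():
        continue
    for f in base.iterdir():
        try:
            if f.is_file() and "sysmon.py" in f.read_text(errors="ignore"):
                suspects.append(str(f))
        except OSError:
            pass  # unreadable entry; skip

print(suspects or "no service files referencing sysmon.py found")
```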

This is not a script-kiddie attack. The sophistication of the encryption, persistence mechanism, and evasion techniques (including serving benign content to sandbox analysis systems) points to a well-resourced, organised threat group.


Are You Affected?

You may be affected if any of the following are true:

  • You ran pip install litellm or pip install --upgrade litellm on March 24, 2026 between 10:39 UTC and 13:25 UTC
  • You built a Docker image during that window that included pip install litellm without a pinned version
  • You use an AI agent framework, MCP server, or LLM orchestration tool that depends on LiteLLM as a transitive dependency
  • Your CI/CD pipeline pulls dependencies without version pinning

You are not affected if:

  • You are running the official LiteLLM Proxy Docker image (ghcr.io/berriai/litellm) — it pins dependencies and did not pull the compromised PyPI versions
  • You are on v1.82.6 or earlier and did not upgrade during the window
  • You installed LiteLLM from the GitHub source repository, which was not compromised
  • You use LiteLLM Cloud

To check your version: pip show litellm
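If you'd rather script the check across many hosts or images, the same information is available from Python itself. A small sketch flagging the two known-bad versions:

```python
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"1.82.7", "1.82.8"}

try:
    installed = version("litellm")
except PackageNotFoundError:
    installed = None

if installed in COMPROMISED:
    print(f"litellm {installed}: known-bad version, treat host as compromised")
elif installed:
    print(f"litellm {installed}: not one of the known-bad versions")
else:
    print("litellm is not installed in this environment")
```

Run it once per virtualenv and container image; each environment has its own installed set.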


What to Do Right Now

If you installed v1.82.7 or v1.82.8, act immediately:

1. Check for the persistence file. Run this on any potentially affected host:

find /usr/lib/python3* ~/.local/lib -name "litellm_init.pth" 2>/dev/null

If it's present, remove it and treat the host as fully compromised.
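Note that litellm_init.pth can land in any site-packages directory the interpreter uses, including virtualenvs and per-user installs, so a fixed find path can miss it. A sketch that asks each interpreter about its own site directories:

```python
import pathlib
import site

# Collect every site directory this interpreter processes at startup;
# run once per interpreter/virtualenv you care about.
dirs = set(site.getsitepackages())
dirs.add(site.getusersitepackages())

hits = []
for d in dirs:
    candidate = pathlib.Path(d) / "litellm_init.pth"
    if candidate.exists():
        hits.append(candidate)

print(hits or "no litellm_init.pth in this interpreter's site dirs")
```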

2. Rotate every credential on affected systems. Assume everything has been exfiltrated: AWS access keys, cloud service tokens, database passwords, SSH keys, Kubernetes tokens, CI/CD secrets, .env file contents. Rotate all of them immediately from a clean, unaffected machine.

3. Check outbound traffic logs for connections to models[.]litellm[.]cloud or checkmarx[.]zone. Either domain in your logs is a confirmed indicator of compromise.

4. Audit your entire dependency tree. Don't just check direct LiteLLM installations — check every package in your environment that might pull LiteLLM as a transitive dependency. AI agent frameworks and orchestration tools are the most likely indirect vectors.
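Installed-package metadata is enough to surface direct dependents. A sketch using importlib.metadata — it only sees requirements declared in the current environment, so repeat it per virtualenv and container image:

```python
from importlib.metadata import distributions

dependents = set()
for dist in distributions():
    for req in dist.requires or []:
        # Requirement strings look like "litellm>=1.0; extra == 'proxy'";
        # the prefix match is deliberately loose for a first pass.
        name = req.split(";")[0].strip().lower()
        if name.startswith("litellm"):
            dependents.add(dist.metadata["Name"])

print(sorted(dependents) or "no installed package declares a litellm dependency")
```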

5. Pin your version to v1.82.6 or wait for a verified clean release from the LiteLLM team, who have paused new releases pending a full supply chain review with Google's Mandiant team.
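Pinning means an exact version, ideally with hash verification, so that even a hijacked release reusing the same version string is rejected at install time. A hedged requirements.txt sketch — the hash shown is a placeholder; generate your own with pip hash against a known-good wheel:

```text
# requirements.txt
# Exact pin plus hash check; pip refuses anything that does not match.
litellm==1.82.6 \
    --hash=sha256:<hash of the known-good 1.82.6 wheel goes here>
```

Be aware that once any requirement carries a --hash, pip's hash-checking mode requires hashes for every requirement in the file.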


The Bigger Picture

This attack is the third in a series by TeamPCP in the space of two days. They first compromised Aqua Security's Trivy scanner, then Checkmarx's KICS GitHub Action, then used credentials obtained from those attacks to hit LiteLLM. This is a coordinated, escalating campaign targeting the security and AI tooling ecosystem specifically.

We've written before about how software supply chain attacks are now OWASP #3 — and about how the XZ Utils backdoor nearly compromised every Linux server on the internet. LiteLLM is the same category of attack, executed faster, against a more targeted segment of the industry.

The lesson isn't to stop using open source. It's to stop treating your dependencies as someone else's responsibility. Every package in your application is code you are accountable for — whether you wrote it or not. Secrets committed to code or picked up from a compromised package are equally dangerous once they're in an attacker's hands.


How Kuboid Secure Layer Can Help

If your team uses LiteLLM or any AI framework with LLM dependencies, now is the right time to audit your dependency tree, your secrets management practices, and your CI/CD pipeline security.

Our cloud and application security reviews specifically cover supply chain exposure — what's in your dependency graph, whether your build pipeline is hardened against token theft, and whether your secrets are being managed in a way that limits blast radius when an upstream package is compromised.

If you think you may have been affected and need guidance on response, or if you want to proactively assess your exposure before the next attack, reach out to us.

Are you running LiteLLM in production? Have you checked your version yet? Drop a comment — and if you found the malicious package in your environment, please share what you saw. The more the community shares, the faster we all respond.


Kuboid Secure Layer provides application and cloud security assessments for businesses building on modern AI infrastructure. Learn more at kuboid.in/services.

Vinay Kumar
Security Researcher @ Kuboid
Get In Touch

Let's find your vulnerabilities before they do.

Tell us about your product and we'll tell you what we'd attack first. Free consultation, no commitment.

  • 📧 support@kuboid.in
  • ⏱️ Typical response within 24 hours
  • 🌍 Serving clients globally from India
  • 🔒 NDA available before any discussion