A critical pre-auth SQL injection in LiteLLM (CVE-2026-42208, CVSS 9.3) lets attackers steal every API key the proxy holds. Exploitation was observed in the wild within 36 hours of disclosure. Here is what to do this week.
Key takeaways
- CVE-2026-42208 is a critical (CVSS 9.3) pre-auth SQL injection in LiteLLM affecting versions 1.81.16 through 1.83.6. The fix shipped in 1.83.7-stable on 19 April 2026, and targeted exploitation was observed within 36 hours of disclosure.
- The bug lets an unauthenticated attacker read or modify the proxy database — extracting the master admin key, every issued virtual key, and stored credentials for OpenAI, Anthropic, Bedrock and other upstream providers in one shot.
- If your LiteLLM has been internet-facing at any point since 19 April on an affected version, treat it as compromised. Patch, then rotate the master key, all virtual keys, and every upstream provider key — and audit spend on each provider account.
- As a holding mitigation, setting disable_error_logs: true under general_settings removes the error-handling path through which the malicious input reaches the vulnerable query. This is not a fix — patch as soon as you can.
- AI gateways are now production infrastructure handling some of your most valuable secrets. They need a patching cadence, real authentication in front of them, and network segmentation — not a one-off deploy bolted on next to a chatbot.
The LiteLLM project shipped a critical patch on 19 April. By 26 April, attackers were already firing exploit payloads at production deployments. If you run LiteLLM as an AI gateway — and a lot of small businesses, schools and tinkerers now do — you have until roughly the time it takes to read this article to act.
Sysdig confirmed that the first targeted exploitation of CVE-2026-42208 was logged on 26 April at 16:17 UTC, around 26 hours after the GitHub Advisory was indexed. The CVSS score is 9.3. The bug requires no authentication. Any internet-reachable instance on an affected version is fair game.
This is not a niche issue.
What LiteLLM is, and why this matters to non-FAANG teams
LiteLLM is an open-source proxy that sits in front of OpenAI, Anthropic, Bedrock, Gemini, Mistral and others, and gives your apps a single OpenAI-compatible endpoint to talk to. You configure keys for each upstream provider in one place, hand out "virtual keys" to internal tools and users, and get rate limiting, spend tracking and audit logs out of the box.
That convenience is exactly why it has spread far beyond Silicon Valley. We have seen it deployed inside multi-academy trusts running internal AI tooling, by SMB engineering teams trying to keep spend visible across multiple LLM bills, and by self-hosters who would rather not paste a personal OpenAI key into half a dozen separate apps. The project has 22,000+ stars on GitHub and is now a fixture of the AI middleware stack.
When that proxy gets compromised, the attacker does not just get into one app. They walk away with every key it manages. That means the master admin key, every issued virtual key, and the upstream provider credentials for OpenAI, Anthropic, Bedrock and the rest.
What CVE-2026-42208 actually is
The bug is a textbook SQL injection: the kind of mistake that should not still exist in 2026, least of all in a security-sensitive code path of a project at this scale. But here we are.
When LiteLLM's proxy receives a request, it pulls the bearer token from the Authorization header and looks it up in the database to find out which virtual key it belongs to. In affected versions (≥ 1.81.16, < 1.83.7) that lookup query was built by string-concatenating the bearer value into a SELECT against the LiteLLM_VerificationToken table, with no parameter binding.
Send a single quote in the header, escape the string literal, and you are appending arbitrary SQL. The path runs before authentication completes, which means a fully unauthenticated attacker — anyone who can reach the proxy port — can read or modify the proxy's database.
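To make the bug class concrete, here is a minimal sketch of the vulnerable pattern and its fix, using an in-memory SQLite database in place of LiteLLM's real database layer. The table name matches the advisory, but everything else is illustrative, not LiteLLM's actual implementation:

```python
import sqlite3

# Toy stand-in for the proxy database -- NOT the actual LiteLLM schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LiteLLM_VerificationToken (token TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO LiteLLM_VerificationToken VALUES ('sk-secret-master', 1)")

def lookup_vulnerable(bearer: str):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    query = f"SELECT * FROM LiteLLM_VerificationToken WHERE token = '{bearer}'"
    return conn.execute(query).fetchall()

def lookup_fixed(bearer: str):
    # Parameter binding: the input is treated as data, never as SQL.
    return conn.execute(
        "SELECT * FROM LiteLLM_VerificationToken WHERE token = ?", (bearer,)
    ).fetchall()

payload = "x' OR '1'='1"           # the single quote escapes the string literal
print(lookup_vulnerable(payload))  # dumps every row, master key included
print(lookup_fixed(payload))       # matches nothing
```

The fix is one line of parameter binding, which is why this class of bug is so galling to see in an authentication path.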
The maintainers' patch notes call out three particularly attractive tables for an attacker:
- LiteLLM_VerificationToken — every virtual key issued by the proxy, including the master admin key.
- litellm_credentials — stored upstream provider credentials for OpenAI, Anthropic, Bedrock, and so on.
- litellm_config — proxy environment variables, often including database credentials and other secrets.
In other words: if your LiteLLM is on the public internet on an affected version, an attacker can mint themselves the master key and start spending against your OpenAI account, then move on to whatever else those environment variables expose.
Who is affected
You are affected if you run LiteLLM proxy at version 1.81.16 through 1.83.6 inclusive. Check with:
pip show litellm | grep Version
or inspect the Docker image tag. If the version is in that range and the proxy is reachable from anywhere outside trusted internal networks, treat it as in scope.
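If you manage more than one deployment, a quick script beats eyeballing version strings. A minimal range check; the handling of suffixes like "-stable" is an assumption about how your deployments report their version, so adjust as needed:

```python
def is_affected(version: str) -> bool:
    """True if a LiteLLM version falls in the affected range [1.81.16, 1.83.7)."""
    # Strip suffixes such as "-stable" before comparing numerically.
    numeric = version.split("-")[0]
    parts = tuple(int(p) for p in numeric.split(".")[:3])
    return (1, 81, 16) <= parts < (1, 83, 7)

for v in ("1.81.15", "1.81.16", "1.83.6", "1.83.7-stable"):
    print(v, "AFFECTED" if is_affected(v) else "ok")
```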
You are particularly exposed if you published the proxy on a public hostname for staff or external apps to reach, skipped putting it behind a VPN or an authenticating reverse proxy on the grounds that "the keys themselves are the auth", or did not set disable_error_logs in general_settings.
If your LiteLLM only listens on localhost or a private network with no untrusted access, the attack surface is much smaller — but the people who can reach it can still pivot. Do not sit on the patch.
What to do this week
Patch. Upgrade to 1.83.7-stable or later. The maintainers' release notes cover this and several adjacent fixes shipped in the same window.
Mitigate immediately if you cannot patch yet. Setting disable_error_logs: true under general_settings removes the error-handling code path through which the bad query was reachable. This is a holding measure, not a fix. Patch as soon as your change window allows.
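For reference, the holding mitigation is a two-line change to the proxy config. The general_settings key is as described in the advisory; the rest of your config.yaml stays as it is:

```yaml
general_settings:
  disable_error_logs: true   # holding mitigation only -- still patch to 1.83.7+
```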
Rotate every key the proxy touched. Treat any internet-facing instance running an affected version during the exposure window (19 to roughly 27 April) as compromised, even with no specific evidence of intrusion. That means rotating the LiteLLM master key, every virtual key, and every upstream provider key — OpenAI, Anthropic, Bedrock, Gemini, Mistral — and updating the secrets wherever they are stored. Yes, this is painful. Do it anyway.
Audit spend on every upstream account. Check OpenAI, Anthropic and any other provider dashboards for unexpected usage from 19 April onward. Crypto-mining-via-LLM and resale of stolen keys are both well-documented monetisation paths for stolen API credentials.
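If you keep reverse-proxy or LiteLLM access logs, it is also worth scanning them for SQL metacharacters in bearer tokens. A toy sketch, assuming a hypothetical log format in which the Authorization header is quoted into each line; adapt the pattern to whatever your logs actually look like:

```python
import re

# Flag requests whose bearer token contains a single quote or semicolon.
# The log format below is hypothetical; adjust the regex for your logs.
SUSPICIOUS = re.compile(r"Bearer [^\s\"']*[';]")

def suspicious_lines(log_lines):
    return [line for line in log_lines if SUSPICIOUS.search(line)]

logs = [
    'GET /v1/models "Bearer sk-abc123"',        # benign
    "GET /v1/models \"Bearer x' OR '1'='1\"",   # classic injection probe
]
print(suspicious_lines(logs))
```

A hit does not prove compromise, but any quote character in a bearer token during the exposure window is worth investigating.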
Get the proxy off the open internet. If your deployment was directly exposed, this is the moment to put it behind Tailscale, WireGuard, or a reverse proxy with a real authentication layer (Cloudflare Access, oauth2-proxy, Authentik). The argument that "the API key is the auth" has just been disproved in public.
The wider story: AI gateway sprawl
Step back from the specific bug for a moment.
Over the past 18 months, AI gateway projects have become the unglamorous plumbing of the LLM era. Every team that touches more than one model provider ends up wanting one. They are being deployed faster than the practices around running them — patch cadence, network exposure, secrets handling, monitoring — have caught up.
LiteLLM is not uniquely insecure. It is a mature, widely used project, and the maintainers responded promptly with a patch and a detailed advisory. What this incident exposes is the surface area: an internet-exposed gateway holding bearer tokens for every major frontier model, often deployed by people who specced it out for AI features rather than as security infrastructure.
The same lessons apply to every other AI proxy your team might be running, including ones you have forgotten about. A spike in MCP server deployments has produced its own command-injection CVE. LiteLLM had a supply chain compromise on PyPI in March, where two malicious package versions briefly shipped after an attacker pivoted in through a compromised CI dependency. Earlier in the year, a proxy config endpoint allowed authenticated low-privilege users to escalate to remote code execution.
If you are running AI middleware, you are now running infrastructure. Patch it, monitor it, segment it.
A note for school and MAT IT leads
If your network has anything labelled "AI", "ChatGPT proxy" or "internal copilot", find out today whether LiteLLM is underneath it. Several of the AI-for-education plug-and-play platforms now use it as their gateway component, and a vendor running an unpatched LiteLLM with your master OpenAI key in it is your incident, not theirs. Ask the vendor in writing what version they are on and when they patched. Get the answer in the same paper trail you use for safeguarding decisions.
If you are starting from scratch
If this is the first time you have heard of LiteLLM and you have decided you would rather not run an AI gateway yourself, that is a defensible choice. For most small businesses, hitting the OpenAI or Anthropic API directly from one or two apps is simpler and has a smaller attack surface. AI gateways earn their keep when you genuinely have many internal teams or apps and need cost attribution and rate limiting across them. If you do not have that problem, you do not need that infrastructure yet.
If you do need it, run it on a private network, put real authentication in front of it, watch your provider bills, and read the changelog when patches drop.
How ReadyToday can help
We help schools, MATs and small UK businesses run the AI tooling they actually need without inheriting somebody else's incident response. That includes auditing what is exposed, picking a sensible architecture, putting authentication and segmentation in the right places, and writing the runbook so your team knows what to do when the next CVE drops. Open-source-first, vendor-neutral, no lock-in.
If you have just read this and quietly realised you are not sure what version your gateway is running, get in touch — or have a look at our services. It is much cheaper to find out where you stand now than after the bill from OpenAI lands.
Sources: LiteLLM security hardening, April 2026 | Sysdig — CVE-2026-42208 exploitation analysis | BleepingComputer — Hackers exploiting LiteLLM pre-auth SQLi | SecurityWeek — Fresh LiteLLM vulnerability exploited shortly after disclosure | LiteLLM — March 2026 supply chain incident | SentinelOne — CVE-2026-35029