Study Finds 26 Third-Party AI Routers are Secretly Stealing Credentials

Researchers at the University of California, Santa Barbara, have found a major security issue in the AI development supply chain. Their study, titled “Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain,” shows how some third-party LLM routers may steal credentials and drain crypto wallets.

One researcher warned on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds” after the findings were published.

The team tested 28 paid routers from platforms like Taobao and Shopify, along with 400 free routers from developer communities. Results raised concern. Nine routers injected malicious code into requests. Two systems used adaptive methods to avoid detection. Seventeen routers accessed cloud credentials without permission. One even drained Ether from a test crypto wallet set up by the researchers.

The core issue lies in how these routers function. They act as application-layer proxies and can see all plain-text data passing through. Unlike traditional attacks, they do not need to break encryption. Developers voluntarily route their traffic through these services, which means sensitive data like API keys, seed phrases, and private credentials often pass through them directly.

The study also highlights how difficult detection is. Routers already handle sensitive data as part of normal operation, which hides malicious behavior inside expected traffic flow. Researchers say these systems sit on a critical trust boundary that most developers do not properly secure.

Another concern is “YOLO mode,” an automation setting used in many AI agent frameworks. In the study, 401 of 440 compromised sessions ran in this mode, in which tool actions execute without confirmation. This creates room for silent exploitation.

The researchers also showed how even trusted routers can become risky. A single leaked API key produced massive usage and exposed data across connected systems.

They recommend strict limits on passing private keys or seed phrases through AI agents. A long-term fix may require cryptographic verification systems, similar to DKIM for email, to confirm that tool actions match verified model outputs.

Until such safeguards exist, the study warns that every third-party router should be treated as a potential security risk.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top