
AI might be cranking out code at record speed, but San Antonio researchers say it is also quietly opening a fresh hole in the software supply chain. A new UTSA study finds that nearly one in five package names referenced in AI-generated code do not exist at all, handing attackers a ready-made list of fake dependencies they can register, booby-trap, and leave sitting on public registries until someone installs them. Open-source maintainers say they are already drowning in slick-looking but broken pull requests, and some big companies are rethinking how much outside code they will even allow in the door.
In a comprehensive paper accepted to the USENIX Security Symposium, the University of Texas at San Antonio team analyzed 576,000 AI-generated Python and JavaScript code samples and found about 440,445 references to packages that do not exist - roughly 19.7% of the more than two million package references across those samples, according to UTSA. The researchers warn that many of these hallucinated package names keep showing up across multiple runs, which gives attackers a predictable hit list they can pre-register on public registries like PyPI and npm. The group says it has disclosed the findings to major model providers and is pushing for fixes both in the underlying models and in how teams integrate AI into their development workflows.
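For teams that want a concrete first line of defense, the simplest check is to confirm a suggested dependency is even registered before installing it. Below is a minimal sketch against PyPI's public JSON API, which returns a 404 for names that have never been registered; the helper name and the example package names are illustrative, not taken from the study.

```python
# Minimal sketch (not a vetted tool): confirm that an AI-suggested package
# name is actually registered on PyPI before it goes anywhere near
# `pip install`. Uses only the Python standard library.
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # never registered - a classic hallucination tell
        raise  # rate limiting or outages deserve a human look, not a guess

if __name__ == "__main__":
    # Hypothetical AI output: one real package, one plausible-sounding fake.
    for name in ["requests", "flask-jwt-helper-pro"]:
        verdict = "registered" if package_exists_on_pypi(name) else "NOT registered"
        print(f"{name}: {verdict}")
```

One caveat worth stating plainly: this only catches names nobody has claimed yet. The moment an attacker registers a hallucinated name, it sails through an existence check - which is exactly the gap the slopsquatting attack described below exploits.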
Maintainers Are Already Feeling The Strain
Open-source maintainers report that AI-written contributions often look polished on the surface while hiding logic bugs or subtle security flaws underneath. That forces reviewers to treat every unfamiliar pull request as a potential landmine instead of a welcome contribution. Rémi Verschelde, project manager for the Godot game engine, has publicly vented about a stream of “AI slop” pull requests that burn reviewer hours and leave maintainers exhausted, a complaint picked up by GamesRadar. Security consultants say that kind of day-to-day friction is how a supposed productivity boost quietly flips into a long-term liability.
Enterprises Face A Governance Problem
Inside large organizations, IT and security leaders warn that AI has slashed the cost of generating code but has barely touched the human effort required to review, test, and maintain it. Analysts and consultants told InfoWorld this imbalance is fueling what one consultant calls a “verification collapse,” as teams struggle to keep up with the sheer volume of machine-authored changes. One analyst likened AI coding agents to “robotic toddlers” - fast and energetic, but nowhere near reliable enough to replace human judgment on what actually ships to production.
What Slopsquatting Actually Is
Security researchers have labeled a new attack pattern “slopsquatting,” where adversaries register the very package names that large language models invent, then quietly fill those packages with malicious code. Reporting from BleepingComputer notes that UTSA’s tests found many hallucinated dependencies repeat across different prompts, which turns random-looking output into a stable target list. The same reporting highlights that open-source models hallucinate non-existent packages far more frequently than commercial ones. That repeatability is what turns a quirky hallucination into a predictable, weaponizable supply-chain vector.
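That repeatability also suggests a defensive angle. Because a slopsquatted package does exist by the time a victim installs it, an existence check alone proves nothing, but registry metadata still tells a story. The sketch below is a rough triage heuristic built on PyPI's JSON API; the thresholds (90 days, three releases) are illustrative assumptions, not published guidance.

```python
# Hedged heuristic sketch: a slopsquatted package passes an existence check
# by design, but its registry metadata - age, release history, linked
# project URLs - can still flag it for review. Thresholds are assumptions.
import json
import urllib.request
from datetime import datetime, timezone

def pypi_metadata(name: str) -> dict:
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def red_flags(name: str, min_age_days: int = 90, min_releases: int = 3) -> list[str]:
    """Return human-readable warnings for `name`; an empty list means none tripped."""
    meta = pypi_metadata(name)
    flags = []
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in meta["releases"].values()
        for f in files
    ]
    if uploads:
        age_days = (datetime.now(timezone.utc) - min(uploads)).days
        if age_days < min_age_days:
            flags.append(f"first upload was only {age_days} days ago")
    if len(meta["releases"]) < min_releases:
        flags.append(f"only {len(meta['releases'])} release(s) published")
    if not meta["info"].get("project_urls"):
        flags.append("no project URLs (homepage, source repo) in metadata")
    return flags
```

A package that trips these flags is not necessarily malicious - brand-new legitimate projects will trip them too - but that is the point: the goal is to route unfamiliar dependencies to a human reviewer rather than straight into a build.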
Why This Is A Why-Now Story
This is not just a future risk on a slide deck. Security briefs in late February pointed to live incidents that echo the UTSA warnings. SANS NewsBites summarized a mid-February npm supply-chain compromise that briefly pushed a post-install payload, and flagged AI-augmented campaigns targeting FortiGate devices - signs, the briefs suggested, that opportunistic attackers and brittle, AI-assisted toolchains are already intersecting in the wild. Those kinds of headlines have helped push vendors to accelerate dependency-scanning tools and have intensified calls for stricter guardrails around AI-generated code in enterprise environments.
How Teams Can Reduce Risk
Security teams and best-practice guides increasingly recommend treating AI as an assistant rather than an autopilot, baking security checks directly into prompts, code review, and CI pipelines. The OpenSSF guide for AI code assistants urges teams to favor well-vetted libraries, generate software bills of materials, pin dependency versions, and wire automated dependency scanning into CI, among other steps. Requiring contributors to explain design intent, mirroring external registries internally, and running automated pull-request reviewers can all help offset the asymmetric burden that AI-generated submissions place on already thinly stretched maintainers.
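What that looks like at the CI end can be quite small. The sketch below is one way to wire the earlier checks into a pull-request gate, under two assumptions: dependencies are pinned in a requirements.txt as name==version, and the two helpers from the earlier sketches live in a hypothetical dep_checks.py module.

```python
# Sketch of a CI gate for pull requests. Assumptions: a requirements.txt
# pinned as `name==version`, and the helpers from the earlier sketches saved
# in a (hypothetical) dep_checks.py alongside this script.
import re
import sys

from dep_checks import package_exists_on_pypi, red_flags  # hypothetical module

PINNED = re.compile(r"^([A-Za-z0-9][A-Za-z0-9._-]*)==")  # only exact pins pass

def check_requirements(path: str = "requirements.txt") -> int:
    failures = []
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#", 1)[0].strip()  # drop comments and blanks
            if not line:
                continue
            match = PINNED.match(line)
            if not match:
                failures.append(f"unpinned requirement: {line!r}")
                continue
            name = match.group(1)
            if not package_exists_on_pypi(name):
                failures.append(f"{name}: not on PyPI (possible hallucination)")
            else:
                flags = red_flags(name)
                if flags:
                    failures.append(f"{name}: " + "; ".join(flags))
    for failure in failures:
        print("FAIL:", failure, file=sys.stderr)
    return 1 if failures else 0  # nonzero exit fails the CI job

if __name__ == "__main__":
    sys.exit(check_requirements())
```

Run as a required status check, a script along these lines makes a hallucinated or freshly registered dependency block the merge automatically, instead of landing on an exhausted maintainer's desk.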
A Local Fix For A Global Problem
Back in San Antonio, the UTSA researchers say they have already shared their results with model vendors and urged changes both to how models suggest dependencies and to how developers integrate those suggestions into their workflows. For local open-source maintainers and the companies that depend on them, that translates into concrete to-do items: add verification steps for new dependencies, mirror public package registries internally, and spell out clear AI contribution policies before allowing machine-written code into production pipelines. The core message is straightforward. AI can absolutely supercharge development, but only if organizations accept that the human work of verification, review, and governance still carries the final responsibility.