Open Source Is Broken by Design: How Trust Became the Biggest Vulnerability in Software
The XZ Utils Backdoor: A Three-Year Campaign Built on Trust
Most security incidents follow a familiar pattern. Someone clicks a phishing link, a port is left open, a credential gets reused. The XZ Utils attack was different. In February 2024, a malicious backdoor was discovered in the xz compression library — a component so deeply embedded in Linux infrastructure that it ships with virtually every distribution. The backdoor was introduced by an account named “Jia Tan,” and subsequent investigation revealed that the campaign was the culmination of approximately three years of deliberate effort to gain a position of trust within the project.
The attacker didn’t exploit a vulnerability. They became a trusted contributor. Jia Tan began building the attack in 2021, making legitimate contributions and gradually earning the confidence of the primary maintainer — a single individual who had been struggling with mental health issues and limited resources. Over time, Jia Tan was granted commit access, and used it to introduce a sophisticated backdoor targeting OpenSSH, which underpins remote access to hundreds of millions of servers worldwide. Had it gone undetected, it would have dwarfed the SolarWinds compromise in scope and impact. It was caught, almost by accident, by a Microsoft developer investigating a 500-millisecond performance anomaly in SSH connections. We got lucky.
This was not the first time someone weaponised open source trust. In 2018, the event-stream npm package — downloaded over two million times per week — was compromised after its burned-out maintainer transferred publishing rights to a new contributor who had offered to help. That contributor injected a targeted payload designed to steal cryptocurrency wallet credentials, and the malicious code sat in the package for two months before anyone noticed. The SolarWinds breach in 2020 showed that nation-states were willing to invest in compromising software build pipelines. The Codecov breach in 2021 showed that CI/CD tooling itself was a viable attack surface. Each incident revealed the same structural weakness, and each time the industry treated it as an isolated event rather than a pattern.
The Structural Problem
The XZ incident revealed a systemic flaw in how open source works. Despite its adoption across routers, firewalls, web servers, smartphones, and virtually every piece of critical infrastructure on the planet, the xz compression library was maintained by a single volunteer, working for free. This is not an anomaly — it is the norm. The internet runs on critical software maintained by a handful of unpaid individuals, and that is exactly the attack surface that state-sponsored threat actors are now deliberately targeting.
The same pattern played out this week at a smaller scale when a fake Stripe package appeared on NuGet, the .NET package registry. No sophisticated malware delivery — just a carefully named package with an inflated download count and a payload that silently forwarded API keys to an attacker-controlled database. Developers check star counts, download numbers, and readme quality. All of these signals can be faked in an afternoon. The package name looked right, the documentation looked professional, the download count suggested widespread adoption. None of it was real. It just needed to look trustworthy long enough for a developer to install it and move on.
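Download counts and readme polish can be forged, but a registry publisher identity is much harder to spoof. One mitigation is to pin the expected publisher for high-value dependencies and refuse anything that doesn’t match. A minimal Python sketch of that check, assuming a hand-maintained pin map — the package and owner names below are illustrative, not real NuGet metadata:

```python
# Hypothetical pin map: for critical dependencies, the publisher account(s)
# we expect to see on the registry. All names here are made up for
# illustration; a real pin map would be populated from verified sources.
TRUSTED_PUBLISHERS = {
    "stripe.net": {"stripe"},
    "newtonsoft.json": {"jamesnk"},
}

def publisher_mismatch(package: str, owners: set[str]) -> bool:
    """True if `package` is one we pin and none of its registry owners
    match the expected publisher — the one signal a typosquatter cannot
    fake by inflating download counts or polishing a readme."""
    expected = TRUSTED_PUBLISHERS.get(package.lower())
    if expected is None:
        return False  # not a pinned package; this check has no opinion
    return not ({o.lower() for o in owners} & expected)

print(publisher_mismatch("Stripe.net", {"stripee-payments"}))  # True  (flag it)
print(publisher_mismatch("Stripe.net", {"stripe"}))            # False (matches pin)
print(publisher_mismatch("SomeOtherLib", {"anyone"}))          # False (not pinned)
```

In practice the `owners` set would come from the registry’s metadata API at install time; the point of the sketch is that the comparison is against something you pinned deliberately, not against signals the attacker controls.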
According to ReversingLabs’ 2026 Software Supply Chain Security Report, there was a 73% increase in detections of malicious open-source packages in 2025, alongside the emergence of the first registry-native worm. Malicious activity on the npm repository more than doubled, accounting for nearly 90% of all open-source malware detected. These figures describe a maturing attack category with dedicated tooling, automation, and in some cases nation-state backing — one expanding in both directions simultaneously, more sophisticated at the top and more automated at the bottom.
And Then AI Made It Worse
AI coding tools like GitHub Copilot, Cursor, and Claude Code are now deeply embedded in how software gets written. They are also, inadvertently, becoming a delivery mechanism for supply chain attacks. Researchers have identified a threat category they’ve named “slopsquatting” — when AI models generate code, they sometimes recommend packages that don’t exist. The model confidently suggests importing a library with a plausible-sounding name, but the package has never been published. Attackers have realised that if they register a package with the same name as one consistently hallucinated by a popular AI model, any developer who copies the AI-generated code will automatically download and execute the malicious payload. Researchers tested 16 AI coding models and found that nearly 20% of generated code samples recommended non-existent packages, and that 43% of those hallucinations were repeated consistently across multiple queries — making them predictable, reliable targets for attackers to pre-register.
The problem goes deeper than hallucinated dependencies. AI models themselves have become a supply chain risk. Unlike traditional software components where you can inspect the source code, AI models are opaque, often lacking transparency regarding their origin, training data, and contributors. This makes them susceptible to backdoors introduced during training or fine-tuning that remain dormant until specific triggers are introduced. Out of over a million models on Hugging Face, researchers found that 400 contained malicious code designed to execute when the model was loaded. Traditional security tools — SBOMs, static analysis, code review — are not designed to detect threats hidden in model weights. The attack surface has expanded into territory that our existing tooling cannot see.
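Some of this is detectable. Many malicious model files abuse Python’s pickle format, which can name arbitrary functions to call at load time — and `pickletools` can walk a pickle stream statically, without ever executing it. A minimal scanner sketch, with an illustrative blocklist (real scanners such as picklescan go considerably further):

```python
import pickle
import pickletools
import subprocess

# Illustrative blocklist: modules a serialized model has no business importing.
DANGEROUS = {"os", "posix", "nt", "subprocess", "builtins", "sys", "socket"}

def suspicious_imports(data: bytes) -> list[str]:
    """Statically scan a pickle stream (without loading it) and return any
    module.name references that touch a dangerous module."""
    found = []
    strings = []  # recent string arguments, used to resolve STACK_GLOBAL
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE",
                       "STRING", "SHORT_BINSTRING"):
            strings.append(arg)
        elif op.name == "GLOBAL":  # protocol 0: arg is "module name"
            module, name = arg.split(" ", 1)
            if module in DANGEROUS:
                found.append(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":  # protocol 2+: module/name pushed earlier
            if len(strings) >= 2 and strings[-2] in DANGEROUS:
                found.append(f"{strings[-2]}.{strings[-1]}")
    return found

class Evil:
    # Pickling this object embeds an instruction to run a command on load.
    def __reduce__(self):
        return (subprocess.check_output, (["echo", "pwned"],))

benign = pickle.dumps({"weights": [0.1, 0.2]})
malicious = pickle.dumps(Evil())
print(suspicious_imports(benign))     # []
print(suspicious_imports(malicious))  # ['subprocess.check_output']
```

This catches the loader-time execution trick, but it is worth being clear about the limit the paragraph above describes: a backdoor trained into the model weights themselves leaves no such trace, and no opcode scan will find it.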
The Bottom Line
The XZ Utils backdoor should have been a wake-up call, but the industry has largely continued operating on the same assumptions that made the attack possible. The trust model that makes open source work — the assumption that contributors are acting in good faith, that popular packages are safe, that community review catches malicious code — is the same trust model that attackers are now systematically exploiting. AI is accelerating this problem, not solving it. The attack surface is growing faster than the security community’s ability to monitor it.
For businesses, the implication is straightforward: if you build or deploy software, you have supply chain risk, and that risk is increasing. Organisations need to audit their dependency chains, implement runtime monitoring that catches what static analysis misses, treat AI-generated code as untrusted input, and build continuous visibility into their software supply chain — whether through internal capability or a managed service like Spectra. The organisations that take this seriously now will be far better positioned than those that wait for their own XZ moment to arrive.
Ready to Work Together?
Let’s discuss how we can help protect your business and achieve your security goals.