Researchers recently analyzed 18,000 Claude Code configuration files pulled from public GitHub repositories. What they found was straightforward and alarming: developers are already installing mistyped, misconfigured, and near-identical MCP server names — often without realizing it. The human-error condition that makes typosquatting work was already present at scale before any attacker needed to exploit it.
That research was published by UpGuard. The registries it examined — Smithery.ai, MCP.so, and the official GitHub MCP Registry — are the same ones that threat actors have already begun operating in. This post is about what they found, what it means for your organization, and what one real supply chain attack looked like when it played out over four months in 2025.
Typosquatting is not a new concept. It has been a documented attack method in software package registries — npm, PyPI, RubyGems — for years. The mechanics are the same regardless of the ecosystem.
First, an attacker registers a name that is visually close to a legitimate, trusted tool. Then a user makes a small error — a missing letter, a transposed character, an incorrect capitalization — and installs the attacker's version instead of the legitimate one.
Staging this kind of attack is deceptively simple, because typosquatting only requires two ingredients: accessibility (the ability to create lookalike resources) and human error (a user making a "fat-finger" mistake). In the MCP ecosystem, both are currently present at a level that should concern any security leader.
Deep Dive: For a full technical breakdown of our analysis of those 18,000 developer files, read our original research on typosquatting in the MCP ecosystem.
Not all MCP registries carry equal risk. Understanding the governance posture of each is the first step in making an informed decision about where your developers should and should not source tools.
The practical reality for mid-market organizations is that developers do not browse the 57-server official registry first. They search Smithery or MCP.so, because those platforms are where the tools they actually need are listed.
Risk concentrates on these lightly moderated and unmoderated platforms. They are also where small oversights carry the largest consequences: compared with mature package ecosystems, the MCP ecosystem is far easier for opportunistic attackers to target.
In February 2026, researchers at Straiker published findings on what may be the most instructive MCP supply chain attack to date — not because of its technical sophistication, but because of what it reveals about attacker economics and intent.
The threat actor was SmartLoader, an established malware operation previously focused on distributing infostealers through fake pirated software download sites. The decision to pivot infrastructure toward the MCP ecosystem was deliberate and calculated.
Developers hold concentrated, high-value credentials — cloud service logins, GitHub tokens with CI/CD access, cryptocurrency wallets, and database credentials. Shifting to target them through MCP servers, a weakly defended point in their workflow, is not just logical; it should be expected.
SmartLoader chose the Oura Ring MCP server as its clone target. The original was created by an OpenAI engineer and distributed via GitHub. Its user base — productivity-focused developers integrating wearable health data into AI workflows — was precisely the demographic SmartLoader wanted to reach: technically sophisticated, credential-rich, and accustomed to self-sourcing developer tools.
Five fake GitHub accounts were created with AI-generated developer personas. Each account cross-forked the others to manufacture the appearance of an active community of contributors. Forks accumulated. Apparent activity increased.
Over three months, the attacker built enough surface credibility to pass a casual review. This phase required patience, not technical skill — which is significant, because it means the barrier to replication is low.
A malicious fork was published. The code appeared functionally identical to the legitimate Oura Ring MCP server. The payload was embedded in a file called resource.txt, containing a LuaJIT script. The technical obfuscation was layered: 443-state virtual machine obfuscation, octal string encoding, and chunked assembly techniques designed to defeat static analysis.
One indicator of compromise stood out: the original author of the legitimate server was excluded from the contributor list on the malicious fork. Legitimate forks always include the original creator. The absence was the tell — but only if you knew to look for it.
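That tell can be automated. A minimal sketch of the check — the contributor logins and author name below are illustrative, and in practice you would pull the real list from the GitHub contributors API (`GET /repos/{owner}/{repo}/contributors`) — looks like this:

```python
def fork_excludes_original_author(contributors: list[str], original_author: str) -> bool:
    """Flag a fork whose contributor list omits the upstream author.

    A legitimate fork retains the upstream commit history, so the
    original author appears among its contributors; their absence
    suggests the repository was rebuilt rather than forked.
    """
    logins = {c.lower() for c in contributors}
    return original_author.lower() not in logins

# Illustrative data only — real logins would come from the GitHub API.
suspicious = fork_excludes_original_author(
    contributors=["dev-persona-1", "dev-persona-2", "dev-persona-3"],
    original_author="legit-author",
)
print(suspicious)  # → True
```

A True result is a review trigger, not a verdict — but in the SmartLoader case it would have surfaced the malicious fork months before the registry submission.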
The trojanized repository was submitted to a legitimate MCP market registry. It appeared alongside genuine community contributions. No visual distinction. No warning. No moderation review flagged the submission.
On execution, it exfiltrated browser saved passwords, cloud service session cookies, Discord tokens, cryptocurrency wallet files, SSH keys, and API credentials — silently, before any user noticed abnormal behavior.
The persistence mechanism was disguised as a legitimate Windows audio process: RealtekAudioManager_ODMw.exe. Standard endpoint detection tools that rely on process name recognition would not flag it.
For a lean security team, the immediate actions are practical and achievable:
1. Search your brand in community registries now.
2. Establish a simple provenance check for MCP server approvals.
3. Audit your remote-hosted MCP dependencies.
4. Ask your development team leads to identify every MCP server currently installed.
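The provenance check in step 2 can be reduced to a handful of structured signals. The sketch below is one possible shape for that check, not a definitive scoring model; the field names and the six-month account-age heuristic are assumptions you should tune to your own risk tolerance:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RepoProvenance:
    account_created: date          # publisher account creation date
    contributor_count: int         # distinct contributors with real history
    original_author_present: bool  # for forks: upstream author in contributors
    registry_verified: bool        # listed in a moderated/official registry

def provenance_flags(repo: RepoProvenance, today: date) -> list[str]:
    """Return human-readable review flags. An empty list means no
    obvious red flags — not that the server is safe."""
    flags = []
    if (today - repo.account_created).days < 180:
        flags.append("publisher account is less than six months old")
    if repo.contributor_count < 2:
        flags.append("little or no independent contributor history")
    if not repo.original_author_present:
        flags.append("original author missing from contributor list")
    if not repo.registry_verified:
        flags.append("not listed in a moderated registry")
    return flags

# A SmartLoader-style submission trips every check:
flags = provenance_flags(
    RepoProvenance(date(2025, 6, 1), 1, False, False),
    today=date(2025, 9, 1),
)
print(len(flags))  # → 4
```

Any non-empty result routes the request to manual review rather than auto-approval — the same gate you would apply to a new open-source dependency.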
For the analyst on your team who triages developer environment alerts: the provenance checks above — account age, contributor history, original author presence — are the triage equivalent of checking sender reputation before investigating a phishing alert.
Build them into your MCP server approval workflow the same way you would vet a new open-source dependency. If your team already spends 20 minutes per alert gathering context, adding a 5-minute provenance check at installation time prevents hours of investigation later.
Security teams that have managed open-source dependency risk will recognize this pattern immediately. The npm and PyPI ecosystems have dealt with typosquatting, dependency confusion, and supply chain poisoning for years. The response — tools like Dependabot, npm audit, Snyk, and private package registries — took years to develop and are still imperfect.
The MCP ecosystem is at the same inflection point as npm was around 2018: growing rapidly, largely unmoderated, and increasingly targeted by attackers who have recognized the opportunity. The difference is that MCP servers do not just provide libraries — they provide AI agents with the ability to execute actions in your systems.
The tooling to match the threat is still catching up. In the meantime, manual verification and policy controls are the primary defense.
The incidents in this post — and the data behind them — point to a single underlying issue: most organizations have no systematic way to know which MCP servers are being used across their developer environments, or whether any of those servers have been impersonated in public registries.
That is not a criticism. MCP launched in November 2024, and security tooling for it is still emerging. But the window between the threat materializing and the detection capability existing is precisely where exposure lives, and that window grows wider each day.
UpGuard Breach Risk's Threat Monitoring continuously scans major MCP registries for brand impersonation and unofficial servers targeting your organization. The same registry-layer detection would have surfaced the SmartLoader campaign before a developer installed it.
See how Threat Monitoring identifies registry threats →
Next in this series: Post 3 — Prompt Injection and Tool Poisoning: How AI Agents Get Hijacked