Publish date
January 16, 2026

What permissions are developers granting to Claude Code, and could those permissions pose a risk if the coding agent were exposed to malicious inputs? To answer this question, we turned to GitHub, the website where developers go to share their private configuration files.

From GitHub, we collected a dataset of 18,470 .claude/settings.local.json files, each containing the permissions that a user granted to Claude Code for a software project. Analyzing the contents of those files reveals that developers are granting Claude extensive rights to download content from the web and to read, write, and delete files on their machines without further approval.

The most critical risks identified include arbitrary code execution, unmonitored file deletion, and unrestricted network access—vulnerabilities that could serve as primary vectors for supply chain attacks and data exfiltration. When chained together, the widespread permissions for Claude to download files from untrusted sources, execute code in the local environment, and deploy files back to GitHub create the prospect of Shai-Hulud-like worms, which could weaponize vibe-coding permissions to spread across developer ecosystems.

Background 

Claude Code is a command-line interface (CLI) agent developed by Anthropic. It interacts with a user’s local file system and terminal to perform coding tasks. The tool operates on a permission system defined in settings.local.json: as Claude Code determines which commands will satisfy the user’s request, it either asks the user for permission to proceed or executes them directly, depending on the rules in that file. The settings.local.json file defines which commands Claude can run without prompting for permission, which commands it should always request confirmation for, and which it is not permitted to run.

When repeatedly performing similar actions, Claude will ask the user whether it should add a rule to the permissions file allowing it to perform those actions without asking in the future. This pattern makes sense: if a user is repeatedly allowing an action, the system should “learn” to do it without explicit permission. Such permission rules need to be broad enough to cover similar but non-identical versions of the action, but not so broad as to create unnecessary risk. In practice, this can lead to Claude proposing rules that permit commands with any arguments, like using curl to retrieve any URL or python to run any Python script. Users can edit their permissions to set the boundaries they want, or simply accept the permission updates Claude proposes.
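
To make this concrete, below is a minimal, hypothetical settings.local.json of the kind found in our dataset. The individual rules are illustrative, but the structure is typical: an allow list of commands Claude may run without asking, an ask list that always triggers a confirmation prompt, and a deny list of commands it may never run.

{
  "permissions": {
    "allow": [
      "Bash(npm run build)",
      "Bash(npm run test:*)",
      "Read(./docs/**)"
    ],
    "ask": [
      "Bash(git push:*)"
    ],
    "deny": [
      "Bash(curl:*)"
    ]
  }
}

A narrowly scoped rule like Bash(npm run test:*) covers only variants of a single command; as the rest of this analysis shows, many files instead contain wildcard rules that cover effectively anything.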

Methodology

This study analyzed a dataset of 18,470 unique Claude configuration files collected from public GitHub repositories. After downloading the dataset, we parsed each permission rule as a separate entry and aggregated those rules across all 18,470 files.

Typically, local settings files are not committed to code repositories: they are intended for use only within the developer’s environment, and files like those in the .claude directory would usually be added to .gitignore to keep them out of a shared repo. In these cases, however, developers committed their settings.local.json files, presumably because the files contain no credentials or other inherently sensitive data. While the files pose minimal risk on their own, they provide insight into developer workstation configurations that are otherwise inaccessible.

Critical Security Risks

Developers are granting Claude permission to perform actions that adversaries could abuse to deliver payloads from remote locations and execute commands that compromise the integrity and confidentiality of their working environments. Such attacks assume some mechanism of prompt injection, which other researchers are actively exploring.

Network access for payload retrieval and data exfiltration

Malware typically operates in two phases: first, the delivery of a minimal, hard-to-detect payload, such as an encoded bash command; second, that initial payload downloads a more substantial payload from an attacker-controlled resource. To do so, the initial payload needs to be able to issue web requests, for example with the common bash tool curl. In the world of AI agents, curl is comparable to the web_fetch tool. Granting permission to fetch content from the web or GitHub provides attackers with a means to escalate from prompt injection to a more significant compromise. The snippet after the list below shows how these grants appear in a settings file.

  • The curl Vector (21.3% of files): Allowing Bash(curl:*) lets the agent download external scripts (e.g., malware stagers) or upload local secrets to a remote server via POST requests. 
  • Unrestricted Web Fetching: While many users restricted web_fetch to documentation sites (e.g., docs.anthropic.com), a notable number (11.4%) used the wildcard (*) or broad domains such as github.com, allowing the agent to read arbitrary content from the web. GitHub is a core resource for coding agents to commit code and download relevant resources, but it is also an unmoderated content host that attackers have used to distribute malicious content. 
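
As a sketch, a settings.local.json allow list containing these network permissions might look like the following. The entries are an illustrative composite of rules observed across files, not a single user’s configuration.

{
  "permissions": {
    "allow": [
      "Bash(curl:*)",
      "WebFetch(domain:github.com)",
      "WebFetch(domain:*)"
    ]
  }
}

With rules like these in place, an injected instruction to fetch and act on content from an attacker-controlled URL never triggers a confirmation prompt.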

Arbitrary Code Execution

From there, an attacker would try to escalate toward remote code execution. Claude is designed for code execution, which is safe and beneficial provided that it executes commands issued or approved by the user. An attacker with access to Claude could prompt it to invoke interpreters or shells directly. If the agent, or an attacker speaking through it via prompt injection, can run these commands, it can bypass almost all other restrictions. The permission rules after the list below show how little configuration this requires.

  • Python Execution (14.5% of files): Over 2,600 users explicitly allowed Bash(python:*) or Bash(python3:*). This allows the execution of arbitrary Python scripts, effectively granting full control over the environment.
  • Node.js Execution (14.4% of files): Similar to Python, allowing node:* permits arbitrary JavaScript execution outside the intended scope of the agent.
  • Direct Shell Access (2.6% of files): 482 users granted Bash(bash:*), literally allowing the agent to spawn a sub-shell and execute any command.
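
In configuration terms, the escalation path can be as simple as an allow list like this illustrative composite:

{
  "permissions": {
    "allow": [
      "Bash(python:*)",
      "Bash(python3:*)",
      "Bash(node:*)",
      "Bash(bash:*)"
    ]
  }
}

Any one of these entries is sufficient: once an interpreter or shell can be launched with arbitrary arguments, the rest of the permission file no longer constrains what runs on the machine.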

Command Injection via Utilities

In addition to the obvious risks of allowing any Python script to run, many users granted permissions for common utilities without realizing that those utilities contain features allowing shell escapes or command execution.

  • The find Utility (29.0% of files): This was the second most common permission overall. The find command is dangerous due to its -exec flag, which allows it to run any other command on the system.
  • Text Processors: sed (8.2%) and awk (5.1%) were commonly allowed. Both are Turing-complete languages capable of writing to files or executing system commands, often used in "Living off the Land" attacks.

Unrestricted Destructive Capability

A shocking 22.2% of files (4,101 users) granted the Bash(rm:*) permission. This enables the AI to permanently delete files and directories without requiring additional confirmation. Combined with the find permission mentioned above, this would theoretically allow an agent to be tricked into recursively wiping a project or an entire system, as sketched below. For decades, tricking someone into running rm -rf has been a dark, niche computer joke. Now, with widespread vibe-coding permissions granted to AIs, that old gag has become a genuine, automated threat capable of reaching thousands of projects.
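
A hypothetical pairing of the find rule above with Bash(rm:*) shows how short the path is from a quality-of-life configuration to a destructive one:

{
  "permissions": {
    "allow": [
      "Bash(find:*)",
      "Bash(rm:*)"
    ]
  }
}

With this composite in place, a single injected instruction could enumerate files with find (or run arbitrary commands via its -exec flag) and delete them with rm, without the user ever seeing a prompt.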

Supply Chain Integrity and Spreading Through GitHub

Destroying your computer is bad, but why stop there when there are so many other computers in the world? In 2025, we observed how the Shai-Hulud worm spread by compromising a developer’s workstation and injecting malicious code into open-source projects to which that developer contributed.

Once an attacker can issue commands to Claude Code, the agent may already hold permission to push changes to that developer’s GitHub repos. Even if the permission was granted in a project intended only for personal use, excessive permissions allow Claude to interact with any of that user’s repos, not just the repo for the current project.

In this way, attackers can capitalize on lax security practices in personal projects to pivot, via AI agents, to corporate or open-source repositories. The example after the list below shows how few permissions this pivot requires.

  • Unchecked Pushes (19.7% of files): Nearly 20% of users allowed Bash(git push:*).
  • Tampering Risk: When combined with git commit (24.5%) and git add (32.6%), this configuration allows an agent to modify code, commit it, and push it to a remote repository without human review. This is a significant supply chain risk if the agent hallucinates or is manipulated into injecting malicious code.
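
As an illustrative composite, the following allow list is all an agent needs to modify code, commit it, and publish it without any human in the loop:

{
  "permissions": {
    "allow": [
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push:*)"
    ]
  }
}

Nothing in a rule like Bash(git push:*) limits the agent to the current project’s remote, which is what makes the pivot from a personal repo to a corporate or open-source one plausible.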

And one more thing…

Beyond granting Claude the right to execute privileged commands, some permissions also exposed local system and personal information that is itself valuable reconnaissance. Some rules were scoped to file paths that include details such as the user’s username on the machine, their legal name, or the name of their employer. Our analysis of file paths revealed 8,341 instances of user-specific permissions containing usernames, such as /Users/johndoe/.

Recommendations

Claude Code’s permission settings already provide the tools needed to limit the impact of prompt injection. Avoiding the problems outlined above requires developers to establish, and adhere to, a strict governance framework for AI coding tools, implemented through clear policies and procedures.

Leverage ‘Deny’ and ‘Ask’ rules

In our dataset, only 1.1% of files had any deny rules at all, even though deny and ask are available alongside the heavily used allow. Proactively defining your deny and ask rules is the most effective way to build guardrails against worst-case prompt injection scenarios. A sketch of a more defensive configuration follows.
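
The exact rules will depend on the project, but a hypothetical hardened settings.local.json might look something like this, keeping low-risk commands frictionless while forcing confirmation or refusal for the dangerous ones discussed above:

{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "WebFetch(domain:docs.anthropic.com)"
    ],
    "ask": [
      "Bash(git push:*)",
      "Bash(python3:*)"
    ],
    "deny": [
      "Bash(curl:*)",
      "Bash(rm:*)",
      "Bash(bash:*)"
    ]
  }
}

With rules like these, a curl download or an rm sweep is refused outright, while a push or a Python invocation at worst surfaces an ask prompt the developer can decline.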

Periodic Review of Permissions

It’s also worth reviewing the permissions you have already granted. It’s easy to let permission fatigue get the better of you, and once that Bash(python3:*) gets added to your settings, it stays there. If you need help, Claude can even write a script to find all your .claude/settings.local.json files and recommend which rules might be excessively scoped.

Conclusion

Our data clearly indicates that a significant proportion of developers using Claude Code prioritize convenience over long-term security. By copying overly broad permissions or generating loose 'allow' lists to avoid repeated permission prompts, users have inadvertently created a dangerously large attack surface.

The widespread allowance of Python, Node, curl, and rm suggests that many users treat the AI agent as a trusted extension of their own hands, rather than a semi-autonomous entity. These configurations create a scenario where a prompt injection attack could escalate into full remote code execution (RCE) on the developer's machine, and subsequently pivot to other resources, such as GitHub repositories. 

The speed of frictionless coding is an undeniable asset, but the permissions that make it possible create a persistent risk that demands ongoing oversight. To safely leverage this convenience, developers must move beyond a mindset of implicit trust and commit to a lasting framework of rigorous, managed sandboxing.