93% of LLM permission requests are approved. That number should make you uncomfortable. Not because approving is wrong — most of those approvals are fine — but because a 93% approval rate is a signal that you haven’t designed your permission model. You’ve just been clicking through dialogs.

There’s also the opposite problem: leaving an agent to work autonomously, then coming back to find it paused, waiting for you to confirm that yes, it can grep a file it’s been reading all session. Both failures have the same root cause: permissions set by default, not by design.

The question worth sitting with is whether you can move faster and sleep better. The answer is yes, but it requires deliberate thought about what your LLM actually needs to do its job.

Why care

The obvious risk with unrestricted LLM tooling is a destructive operation: a rogue rm -rf, a dropped table, a force-pushed branch. Those are real, but they’re also visible. The quieter risks are the ones that matter more.

Data exfiltration. An agent with unrestricted file-read access and network-call capability can trivially read secrets from .env files, SSH keys from ~/.ssh/, or credentials from cloud config directories. Most developers wouldn’t grant a new contractor that access on day one. LLMs often get it by default.

Supply chain exposure. An agent that can run npm publish, twine upload, or git push to arbitrary remotes can compromise packages your entire organisation depends on. The blast radius of that isn’t bounded by the agent’s intent — it’s bounded by who trusts your packages.

Chained actions. Agentic LLMs don’t execute single operations; they chain them. An agent with read, write, and shell access can read a config file, modify it, restart a service, and observe the result — all within one task. Each individual permission might seem benign; the combination creates surface area you might not have considered.

Audit gaps. When something goes wrong — and eventually something will — you need to know what the agent did and when. Default permission setups often produce no audit trail. You’ll be left reconstructing events from git history and hoping nothing happened outside the repo.

The principle at work is the same one that governs human access controls: least privilege. Grant what’s needed for the task at hand, nothing more. The discipline of applying it to AI agents just requires understanding the tooling.

Scoping permissions with Claude Code

Claude Code exposes its permission model through settings files. There are two layers: user-global at ~/.claude/settings.json and project-local at .claude/settings.json. Project settings take precedence within the repo.

The core mechanism is allowedTools and denyTools. allowedTools is an explicit list of tools the agent can use without prompting for approval. denyTools blocks tools outright, regardless of what the user or project settings say.

settings.json

A read-only config for code review looks like this:

{
  "allowedTools": ["Read", "Glob", "Grep", "WebFetch"],
  "denyTools": ["Edit", "Write", "Bash", "WebSearch"]
}

An implementation config that allows shell operations but restricts what can run:

{
  "allowedTools": ["Read", "Glob", "Grep", "Edit", "Write"],
  "denyTools": ["WebFetch", "WebSearch"],
  "allowedBashCommands": ["npm run *", "hugo *", "git diff", "git log", "git status"]
}

The allowedBashCommands array accepts glob patterns. This is where the real leverage is: you can allow npm run * without allowing npm publish, allow specific read-only git subcommands like git diff and git status without allowing git push --force, and allow hugo * without opening up arbitrary shell execution. Note that the pattern has to be as narrow as the boundary you want — git * would match git push --force too.
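The patterns behave like ordinary shell globs, which makes a boundary easy to check before you commit to it. A quick sketch using a plain shell case statement (matches is a throwaway helper for illustration, not part of any tool):

```shell
# Throwaway helper showing how glob patterns partition commands:
# "npm run *" matches any npm script invocation but not "npm publish".
matches() {
  pattern=$1
  candidate=$2
  case "$candidate" in
    $pattern) return 0 ;;   # unquoted, so it is treated as a glob
    *)        return 1 ;;
  esac
}

matches "npm run *" "npm run build" && echo "allowed"
matches "npm run *" "npm publish"  || echo "blocked"
```

The same check makes the earlier point concrete: "git *" would match "git push --force", while "git diff" matches only itself.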

Hooks add a layer above the settings file. A hook is a shell command that fires on a specific event — before a tool call, after a tool call, or when a session ends. You can use hooks to log all tool invocations to a file, block specific patterns (e.g., deny any Bash call containing --force or DROP TABLE), or notify a Slack channel when an agent executes in a production-adjacent environment.

Hooks run in your shell, so they have access to environment variables, can call external systems, and can return a non-zero exit code to block the tool call from proceeding. They’re the escape hatch for any permission logic too contextual for a static config.
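As an illustration (the invocation convention here is an assumption — real hook input formats vary by version — and the log path is a placeholder), a pre-tool-call vetting script might look like:

```shell
#!/bin/sh
# Hypothetical pre-tool-call hook: receives the proposed shell command
# as its first argument (an assumed convention; adapt to your hook's
# actual input format), blocks dangerous patterns, and logs the rest.
AUDIT_LOG="${AUDIT_LOG:-/tmp/agent-audit.log}"

check_command() {
  case "$1" in
    *--force*|*"DROP TABLE"*|*"rm -rf"*)
      echo "blocked: $1" >&2
      return 2    # non-zero exit code blocks the tool call
      ;;
  esac
  # Append an audit line: UTC timestamp, then the command.
  printf '%s\t%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> "$AUDIT_LOG"
}

check_command "$1"
```

This one script covers two of the gaps above at once: a blocklist for patterns too contextual for a static config, and an append-only audit trail of everything the agent ran.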

The practical discipline: start from a minimal set and expand. Most agents can complete most tasks with read access and targeted write access. Full shell execution is rarely required on the first pass — add it when the task demands it, not by default.

GitHub Copilot permissions

Copilot’s permission model operates at three levels: enterprise policy, organisation settings, and repository configuration. Understanding which level controls what saves a lot of debugging when behaviour doesn’t match expectations.

1. Enterprise policy is the ceiling. An enterprise admin can disable Copilot entirely for specific product areas — Copilot Chat, Copilot in the CLI, pull request summaries, Copilot Workspace — or restrict access to specific SKUs. These are all-or-nothing toggles at this level.

2. Organisation settings control which repositories Copilot can access and what features are available to members. The critical setting here is whether Copilot is enabled for all repositories or only selected ones. In organisations where some repos contain sensitive IP or regulated data, this distinction matters considerably. Org admins can also toggle specific capabilities per organisation: code completion, Chat, pull request descriptions, code reviews.

3. Repository settings allow owners to exclude specific file paths and patterns from Copilot’s context. This prevents Copilot from reading — and therefore reproducing — code in sensitive modules, generated files containing credentials, or proprietary algorithm implementations. The configuration lives in the repository settings UI and applies to all Copilot users accessing that repo.

The less-examined surface area is Copilot Extensions. Extensions allow third-party tools to integrate into Copilot Chat. Each extension requires OAuth approval and can request scopes including repository read, issues read/write, and code access. Treat these approvals with the same scrutiny you’d apply to any OAuth app. The question isn’t whether the vendor is trustworthy in the abstract — it’s whether you’ve consciously assessed what access you’ve granted and whether it’s proportionate to the utility you’re getting back.

One pattern worth adopting at the org level: use Copilot’s content exclusion to prevent it accessing secrets management directories, prod environments, and deployment configuration. The agent doesn’t need to read your prod Terraform state to write a unit test.
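A sketch of what that might look like as a repository-level exclusion list (the paths are illustrative, and the exact syntax should be checked against GitHub’s content exclusion documentation):

```yaml
# Illustrative content-exclusion paths for one repository's
# Copilot settings. Matching files are never used as context.
- "/infra/terraform/**"
- "/deploy/**"
- "**/.env*"
- "**/secrets/**"
```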

Org, repo, and user level permissions

Across most agentic tooling — Claude Code, Copilot, Cursor — permissions stack in a hierarchy: organisation sets the ceiling, project sets the context, user adjusts within what’s permitted above.

This hierarchy matters because it distributes accountability correctly. Security and compliance teams own the organisational baseline. Engineering teams own project-level scoping for their specific domain. Individual contributors own their personal settings within that envelope. When something is misconfigured, the hierarchy tells you who the right conversation is with.

TL;DR:

At the organisation level, establish the minimum set of capabilities available to all agents across all repos. This is your hardened baseline. No network egress to arbitrary URLs. No production credentials in scope. No push access without explicit repo-level override. Think of this as the policy you’d feel comfortable writing down and presenting to your security team.

At the repository level, expand for what the project legitimately requires. A deployment repo needs shell access to invoke deployment scripts. A documentation repo probably only needs read and write on Markdown files. A library with a CI pipeline needs specific test commands. These expansions live in .claude/settings.json or an equivalent config, reviewed in PRs, and version controlled like everything else.
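For example, a deployment repo’s checked-in config might look like this, following the same settings shape as the earlier examples (the script and command names are placeholders for whatever the project actually uses):

```json
{
  "allowedTools": ["Read", "Glob", "Grep", "Edit", "Write"],
  "denyTools": ["WebFetch", "WebSearch"],
  "allowedBashCommands": [
    "make deploy-staging",
    "kubectl get *",
    "git diff",
    "git status"
  ]
}
```

Because it lives in the repo, the expansion beyond the org baseline is visible in the PR that introduced it, and reverting it is a one-line diff.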

At the user level, embed your personal preferences — whether to be prompted on ambiguous operations, logging verbosity, preferred tool sets.

The thing that often gets missed: permission decisions should be treated like code. The allowedTools array in your settings file is a security decision, and it deserves the same scrutiny as a firewall rule. Add a tool to the allow list only when everyone using the repo will need it. When a new engineer joins, they should be able to read .claude/settings.json and know exactly which tools the project requires.

The 93% approval rate isn’t inevitable; it’s what happens when nobody has designed anything. The target isn’t a lower approval rate but a higher deliberateness rate: the friction-free calls are pre-approved, and the genuine decisions surface clearly rather than drowning in a stream of obvious approvals.

Speed and safety aren’t in opposition here. Designed permissions are faster than ad-hoc ones: the agent runs without interruption on the operations you’ve pre-approved, and stops clearly on the ones you haven’t. The investment is upfront, the dividend compounds with every session.

Further reading