Encoding engineering standards as an AI skill means they are applied consistently, automatically, and measurably.

The problem with documented standards

I have spent a lot of time in technical leadership. I love being a visionary, clearly explaining where to improve, and setting standards to get us there. The problem has always been the gap between the standard and the practice.

In the old world, you wrote a standards document and published it. You gave it a title like “Python Engineering Standards v2.3 FINAL.” You ran a lunch-and-learn. Some engineers nodded politely. The document was last opened six weeks later by a new joiner.

The standards were not applied consistently. They were applied by the engineers who had been in the lunch-and-learn and only remembered the three points that resonated with them personally. Everyone else was doing whatever the previous project did.

The other problem is measurement. You cannot know how well your standards are being applied when the only signal is code review comments, which depend entirely on who reviews the PR and how much coffee they had that morning. There is no feedback loop — no way to know which standards are being ignored, which are genuinely too hard to implement, or where to focus improvement next.

Documented standards are not entirely futile, but the value is muted.

SPEED engineering standards

SPEED is our engineering framework covering Python, TypeScript, and Terraform — the languages that appear in almost every modern cloud-native engagement. The standards address clean code, efficiency, scalability, defensive coding, security, and documentation: SPEED.

The content is not unusual. The standards themselves look like competent engineering guidelines you could find in any well-run organisation. What is new is how they are delivered.

How the skill works

In AI assistants like Claude Code, a skill is an instruction file that tells the AI how to reason, what to check, and how to respond in a particular context. Skills live in .claude/rules/ alongside the codebase, and Claude Code loads them automatically when it opens a project.

The SPEED skill is exactly this: the engineering standards encoded as a set of rule files. When an engineer opens Claude Code in a project, those rules are active immediately. When the AI generates code, it generates against the standards. When it reviews a PR, it reviews against the standards. When it refactors, it refactors toward them.

Because the rule files ship with the repository, opening a project that contains them activates the standards automatically: no installation, no configuration, no onboarding. The engineer does not need to remember the rule; the tool applies it.

This is the shift that changes the economics of standards entirely.

Running the skill

The skill can be run by any developer in their local environment, or centrally across an entire portfolio of repos. Either way, it produces a prioritised remediation plan and a compliance percentage — something that simply did not exist before.

The rule files and the standards each covers:

  • python-SPEED.md: PEP 8, SOLID principles, defensive error handling, secure input validation, GenAI code patterns
  • typescript-standards.md: strict TypeScript configuration, type safety, Biome tooling, Zod validation at boundaries
  • typescript-patterns.md: discriminated unions, type narrowing without as, immutable parameters, async error handling
  • terraform-SPEED.md: module structure, naming conventions, stateful resource protection, secrets management
  • security-pipeline.md: pre-commit hooks (Husky + lint-staged), CI/CD security gates, Renovate configuration
  • tooling-config.md: biome.json, tsconfig.json required flags, Renovate grouping rules

Rules marked alwaysApply: true — such as typescript-standards.md and security-pipeline.md — load for every Claude Code session in the project. Others activate only when files matching their glob patterns are open, keeping context focused.
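A rule file is plain markdown with a small frontmatter header. A minimal sketch of the shape described above (the alwaysApply and globs keys follow the convention just mentioned; the rule text itself is illustrative, not the actual SPEED content):

```markdown
---
alwaysApply: false
globs: ["**/*.py"]
---

# Python SPEED rules (excerpt)

- Replace magic numbers with named constants (MAX_RETRIES, SECONDS_PER_DAY).
- Never use a bare `except:`; catch the specific exception types you expect.
- Model domain objects as dataclasses or pydantic models, not raw dicts.
```

Because the frontmatter here sets alwaysApply: false, this rule would only load when a Python file matching the glob is in context.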

When you want an explicit compliance report — before a project review, an engineering audit, or a client delivery — you ask Claude Code directly:

Review this codebase against the SPEED engineering standards and produce a compliance report.

The resulting plan.md lists every issue found, categorised by standard and severity. It can be edited, prioritised, and handed directly to the team as a sprint of remediation work. The compliance percentage is the number you put in the leadership review.

What it checks

Python

The Python standards catch issues that code review consistently misses, particularly under time pressure:

  • Magic numbers: constants like 3 or 86400 scattered through functions instead of named constants (MAX_RETRIES, SECONDS_PER_DAY). The standard flags every occurrence and suggests the named alternative.
  • Bare except clauses: except: or except Exception: instead of specific exception types. Defensive coding requires catching what you expect; catching everything hides bugs that surface six months later in production.
  • Raw dicts for domain objects: functions that pass {"user_id": ..., "email": ...} through ten layers of code instead of a typed dataclass or pydantic model. The standard identifies these and suggests the typed alternative with validation.
  • Unsanitised input at trust boundaries: functions that take user-supplied data and pass it directly to a query, file path, or external call without validation — the source of most injection vulnerabilities.
  • Missing docstrings on public interfaces: the standard checks every public module, class, and function — not just the ones the reviewer happened to click into during a rushed review.
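Several of these checks can be seen together in one small compliant sketch. This is an invented illustration, not the standard's own example: the User model, its fields, and the constants are all hypothetical.

```python
from dataclasses import dataclass

# Named constants instead of magic numbers scattered through functions.
MAX_RETRIES = 3
SECONDS_PER_DAY = 86_400


# A typed domain object instead of a raw {"user_id": ..., "email": ...} dict
# passed through ten layers of code.
@dataclass(frozen=True)
class User:
    user_id: int
    email: str


def parse_user(raw: dict) -> User:
    """Validate untrusted input at the trust boundary, then work with types."""
    try:
        return User(user_id=int(raw["user_id"]), email=str(raw["email"]))
    except (KeyError, ValueError) as exc:  # specific exceptions, never a bare except:
        raise ValueError(f"invalid user payload: {exc}") from exc
```

Once validation happens at the boundary, everything downstream of parse_user can rely on the types instead of re-checking dict keys.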

TypeScript

TypeScript standards enforce the strict type-safety configuration that teams adopt for new projects but rarely retrofit to existing ones:

  • any type usage: anywhere any appears, the standard flags it and suggests unknown with a narrowing pattern or a Zod schema at the trust boundary.
  • Non-null assertions (!): user!.email is a deferred runtime crash. The standard rejects every instance and requires explicit narrowing or optional chaining.
  • Boolean flags on state objects: { isLoading: boolean; isError: boolean; data?: Data } encodes impossible states (loading and errored at once) that cannot be exhaustively checked. The standard identifies these and suggests discriminated unions.
  • Floating promises: unawaited async calls that silently swallow errors. The standard catches these before they become 2am incidents.
  • Missing tsconfig flags: noUncheckedIndexedAccess, exactOptionalPropertyTypes, erasableSyntaxOnly — flags that make TypeScript genuinely stricter and are consistently absent from default project setups.
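The discriminated-union suggestion from the boolean-flags check looks like this in practice. A hedged sketch (the FetchState name and status values are invented for illustration):

```typescript
// A discriminated union makes impossible states unrepresentable:
// there is no way to be loading and errored at the same time.
type FetchState<T> =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "success"; data: T };

function describe<T>(state: FetchState<T>): string {
  // Exhaustive switch on the discriminant narrows each branch.
  // No `any`, no non-null assertions, no `as` casts needed.
  switch (state.status) {
    case "loading":
      return "still loading";
    case "error":
      return `failed: ${state.message}`;
    case "success":
      return "loaded";
  }
}
```

Under strict settings the compiler also verifies the switch is exhaustive: add a fourth status and every describe-style function stops compiling until it is handled.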

Terraform

Terraform standards address the infrastructure-as-code mistakes that are cheap to fix in a PR and very expensive to fix once the statefile is in production:

  • Secrets in code: credentials, connection strings, or API keys anywhere in .tf files instead of Secrets Manager references via data blocks. These end up in version control and statefiles — both are bad.
  • Missing prevent_destroy on stateful resources: databases and storage accounts without lifecycle protection can be deleted by a terraform apply that was only supposed to update a tag.
  • Sensitive outputs without sensitive = true: output values containing credentials appear in pipeline logs and state unless explicitly marked. The standard checks every output.
  • Variables without validation blocks: unconstrained variable inputs are a reliability risk. The standard flags every variable without a validation block and suggests appropriate constraints.
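Three of these protections fit in a few lines of HCL. A sketch only: the resource type, names, and allowed values are hypothetical, not taken from the standard.

```terraform
variable "environment" {
  type = string
  validation {
    condition     = contains(["dev", "test", "prod"], var.environment)
    error_message = "environment must be one of: dev, test, prod."
  }
}

resource "aws_db_instance" "main" {
  # ... engine, size, networking ...
  lifecycle {
    prevent_destroy = true # a stray terraform apply cannot delete the database
  }
}

output "db_endpoint" {
  value     = aws_db_instance.main.endpoint
  sensitive = true # keeps the value out of pipeline logs
}
```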

Integrating into the organisation

The rule files live in version control. Updating a standard is a pull request — it goes through review, it has an author, it has a rationale in the commit message. Rolling it out to another team is a clone or a PR to their repo. That is the entire distribution mechanism — no SharePoint, no email, no meeting.

For organisations running multiple repos, the rule files belong in a shared internal template repo. New projects inherit the standards on creation. Existing projects adopt them via a one-time PR that adds the .claude/rules/ directory. The changeset is small and reviewable; the impact is immediate.

Compliance percentages become a meaningful metric over time. A team starting at 43% Python compliance and tracking to 78% over two quarters has a story to tell in a leadership review. An engagement that ships at 85% TypeScript compliance has evidence of quality that did not previously exist in any form that could be pointed to.

The standards themselves will evolve. New language versions, new security requirements, new tooling decisions — the rule files are updated centrally and each repo picks up the change on the next sync. This is how a living standard actually lives. Not as a Confluence page with a “last reviewed” date three years ago, but as a versioned file that is as current as your package.json.

Lessons learned

Having run this across all our repos in 2026, I now find it almost comical how ineffective our previous approach to standards was. A document that depended on a human to remember, evangelise, and apply it manually — against the entropy of a growing codebase and a rotating team — was always going to lose.

Embedding the standards as a skill has been a game changer. The compliance reports have been a great way to track improvements over time, and the conversation with clients about code quality has shifted from “we follow best practices” — unverifiable, says everyone, means nothing — to “we are at 82% SPEED compliance, here is the breakdown by category” — verifiable, specific, improvable.

Improving the standards explicitly for LLMs — writing them in ways that are unambiguous, machine-actionable, and grounded in concrete examples — will be a focus for months, possibly years to come. It turns out that writing a good standard is the same discipline as writing a good prompt: precision matters, examples matter, and ambiguity is the enemy.

Further reading

  • Synechron SPEED engineering guidelines — the source standards encoded in the skill
  • Biome — the TypeScript linter and formatter required by the TypeScript standards
  • Zod — runtime validation library used at all external boundaries in the TypeScript standards
  • Ruff — the Python linter and formatter referenced in the Python SPEED standard
  • TFLint — Terraform linter referenced in the Terraform SPEED standard
  • gitleaks — secrets detection tool integrated into the security pipeline standard