# Security Baseline

The minimum security posture this standard recommends. It layers on top of SECURITY.md: that document covers vulnerability reporting; this one covers vulnerability prevention.

## Repository-level controls

These are GitHub features. Turn them on once; they run forever.
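One such feature worth enabling from day one is Dependabot version updates, configured with a checked-in file. A minimal sketch (the ecosystem and cadence shown are assumptions to adjust per repository):

```yaml
# .github/dependabot.yml — keeps GitHub Actions dependencies current
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

Additional `updates` entries cover other ecosystems (npm, pip, gomod, and so on) with the same shape.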

## CI / CD hardening

These belong in every workflow file. They cost nothing and shrink the blast radius of a compromised action or PR.

### Minimum permissions

GitHub Actions workflows grant `GITHUB_TOKEN` broad permissions by default. Always declare the minimum set your workflow needs at the top of the file, and override per job where higher access is required.

```yaml
permissions:
  contents: read

jobs:
  release:
    permissions:
      contents: write
      id-token: write   # for OIDC, if used
    # ...
```

### Pin third-party actions to a full SHA

Tag-based references (`@v4`) are mutable, and compromised action releases have happened. For untrusted third-party actions, pin to the immutable commit SHA:

```yaml
- uses: lycheeverse/lychee-action@a3046df3bc09bce6b8e8bbc7d3b29b1efa9b2b25 # v2
```

Official `actions/*` actions are also pinnable; this standard ships them at version tags for readability and relies on Dependabot to update them. Adopters with stricter supply-chain requirements should pin all actions to SHAs.
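The pinning rule is easy to lint mechanically. A minimal sketch (hypothetical helper, not part of this standard's tooling) that flags `uses:` references whose ref is not a full 40-character commit SHA:

```python
import re

# Matches "uses: owner/repo@ref"; a safe ref is a 40-hex-char commit SHA.
USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@(\S+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return every 'owner/repo@ref' whose ref is not a full commit SHA."""
    findings = []
    for action, ref in USES_RE.findall(workflow_text):
        if not SHA_RE.match(ref):
            findings.append(f"{action}@{ref}")
    return findings
```

Running it over a workflow body, `unpinned_actions("- uses: actions/checkout@v4")` reports the tag-pinned reference, while a SHA-pinned line passes clean. (The regex deliberately ignores nested paths like `owner/repo/subdir@ref`; a real linter would parse the YAML instead.)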

### Prefer OIDC over long-lived secrets

When deploying to cloud providers, use OIDC trust relationships instead of storing static credentials. AWS, GCP, Azure, and HashiCorp Vault all support GitHub Actions OIDC. The `id-token: write` permission above is the prerequisite.
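A minimal sketch, assuming AWS and the official `aws-actions/configure-aws-credentials` action (the role ARN and region are placeholders, not values this standard prescribes):

```yaml
jobs:
  deploy:
    permissions:
      id-token: write   # lets the job request an OIDC token
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role
          aws-region: us-east-1
```

No secret is stored anywhere: the cloud side trusts GitHub's OIDC issuer for this repository, and the job exchanges a short-lived token for temporary credentials.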

### Restrict workflow re-use

If a workflow lives in `.github/workflows/`, anyone who can push to a PR branch can modify it. For sensitive workflows (release, deploy), use the environments feature with Required Reviewers, and gate execution on protected branches only.
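A minimal sketch of such gating (the environment name and branch are assumptions; the environment itself must be configured with Required Reviewers in the repository settings):

```yaml
jobs:
  release:
    environment: production            # pauses for the environment's Required Reviewers
    if: github.ref == 'refs/heads/main'
    # ...
```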

## AI-specific controls

Coding agents read files, run commands, and reach external tools (MCP servers). Their privileges must be narrower than those of the human who runs them.

### Reference: OWASP Top 10 for LLM Applications

For projects that actually call LLMs at runtime (RAG, agents, chatbots), the OWASP Top 10 for LLMs covers the AI-specific risks that this baseline cannot anticipate generically: prompt injection, sensitive-information disclosure, insecure tool use, excessive agency, supply-chain attacks on models, and others. Map each item to a concrete control in your codebase before shipping.
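As one example of such a mapping, the "excessive agency" item can translate into an explicit allowlist that every agent tool call must pass before execution. A minimal sketch (all names here are hypothetical, not an API this baseline defines):

```python
# Excessive-agency control: tool calls are dispatched only if allowlisted.
ALLOWED_TOOLS = {"read_file", "run_tests"}  # assumption: the agent's safe subset

class ToolNotAllowed(Exception):
    """Raised when an agent requests a tool outside the allowlist."""

def dispatch_tool(name, registry, **kwargs):
    """Execute a registered tool only if it is on the allowlist."""
    if name not in ALLOWED_TOOLS:
        raise ToolNotAllowed(f"tool {name!r} is not on the allowlist")
    return registry[name](**kwargs)
```

The same pattern generalizes: a deny-by-default gate in front of each capability, with the allowlist reviewed like any other security-sensitive config.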

### Reference: Other baselines worth adopting later

These are not minimum standards; they are next steps once the basics are in place.


Source: docs/security-baseline.md — edits land here on the next deploy.