CI/CD Pipeline Security Best Practices
Modern software delivery moves fast; securing the path from commit to production is essential to avoid breaches and supply chain attacks.

If you have ever watched a deployment roll out at 2 a.m., you know how fragile the process can feel. A CI/CD pipeline is the backbone of that delivery, and over the last few years I have seen teams introduce security controls too late in the process, sometimes right before a release. The result is predictable: secrets leak, images carry known vulnerabilities, and approvals become rubber stamps. Securing the pipeline is not about adding bureaucracy. It is about catching the right issues early, keeping credentials out of logs, and making sure the thing that lands in production is exactly what you intended.
In this post, I will walk through practical best practices for CI/CD security, grounded in real project work. We will talk about threat models, source control hygiene, secrets management, pipeline hardening, artifact signing, and policy-as-code. I will include code and configuration examples that you can adapt, not copy blindly. While tools and platforms differ, the mental model is consistent: limit blast radius, verify integrity at each step, and trace everything. We will also look at tradeoffs, when to slow down, and where automation helps more than it hurts.
Context: Where CI/CD security fits today
Continuous integration and continuous delivery have become the default for shipping software, whether you are working on a monolith, microservices, or infrastructure itself. Most teams rely on Git-based workflows, containerized builds, and cloud-hosted runners. In this landscape, the pipeline is a privileged system: it sees secrets, accesses registries, and can push to production. Attackers know this. The supply chain attacks of the past few years made that painfully clear. The Open Source Security Foundation (OpenSSF) published supply chain security guidance that emphasizes securing build pipelines and verifying artifacts. You can read their general guidance here: https://openssf.org/
At a high level, securing CI/CD is different from securing applications because the pipeline runs with elevated permissions and interacts with many external systems. It is closer to an identity and access management problem than a traditional vulnerability scanning exercise. Compared to application security testing, pipeline security focuses on who can change code, how builds are executed, what dependencies are trusted, and how artifacts move through environments. It complements SAST, DAST, and dependency scanning, but it also stands apart because the pipeline itself can become an attack vector if left unmanaged.
Typical users of these practices include platform engineers, DevOps practitioners, and security-minded developers. The most mature teams treat pipeline security as part of their software delivery lifecycle, not a separate security project. They apply defense in depth, assume build runners are ephemeral and potentially hostile, and enforce policies early.
Threat model: What are we defending?
Before diving into tools, it helps to think like an attacker. In my experience, the most common pipeline threats fall into a few categories.
- Credential theft: Secrets embedded in code, logs, or artifacts.
- Dependency confusion or typosquatting: Pulling malicious packages from public registries.
- Build injection: Malicious inputs from PRs or environment variables that alter build behavior.
- Artifact tampering: Replacing a signed artifact with a malicious one before deployment.
- Runner compromise: Privileged containers or long-lived VMs that retain sensitive data.
A practical approach is to map assets and trust boundaries. The source repository, build system, artifact registry, and deployment target are each a trust boundary. Your goal is to ensure each step verifies the previous one and does not over-permission.
Example threat list you might keep in your repo:
- Asset: GitHub Actions workflow files
  - Threat: Injected secrets or modified build steps via PR
  - Control: Require CODEOWNERS on workflows, least privilege tokens
- Asset: Dockerfile and base images
  - Threat: Malicious base images or build args
  - Control: Pin digests, minimal base images, image scanning
- Artifact: Container image in registry
  - Threat: Tampering after build
  - Control: Cosign signatures, policy enforcement at deploy
- Deployment: Kubernetes manifests
  - Threat: Insecure secrets or overly permissive ServiceAccounts
  - Control: Sealed Secrets or external secrets, RBAC reviews
You do not need a heavyweight process. A simple markdown file that tracks threats and controls in your repository can be enough to drive decisions.
Source control hygiene: Protect the pipeline at the origin
The pipeline starts at the repository. If an attacker can change workflow definitions, they can run code with your permissions. I have seen teams allow any contributor to modify workflows, which is convenient but risky. Here are pragmatic controls.
- Require code reviews for workflow changes and protect main branches. On GitHub, branch protection rules can require approvals and status checks.
- Use minimal tokens for workflows. The default GITHUB_TOKEN is a good start; avoid over-scoped PATs stored in secrets. Prefer OIDC when talking to cloud providers (AWS, GCP, Azure) so you do not store long-lived credentials at all.
- Isolate privileged jobs. Do not run deploy steps on every PR. Use environment-based approvals for staging and production.
A minimal GitHub Actions workflow that limits permissions and uses OIDC to AWS might look like this:
```yaml
name: build-and-deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: read
  id-token: write # needed for OIDC to AWS

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
          aws-region: us-east-1
      - name: Build image
        run: |
          docker build -t myapp:${{ github.sha }} .
```
Notice there is no AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY stored as secrets. The OIDC flow exchanges a GitHub-issued token for temporary cloud credentials. The AWS role trust policy should restrict the GitHub repository and environment:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:*"
        }
      }
    }
  ]
}
```
For GitLab, you can use OIDC to cloud providers as well: https://docs.gitlab.com/ee/ci/cloud_services/. CircleCI offers the same concept via OIDC tokens for cloud providers: https://circleci.com/docs/openid-connect-tokens/.
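As a sketch of the GitLab side, recent GitLab versions expose an `id_tokens` keyword that mints an OIDC token per job, which you can exchange for temporary cloud credentials; the role ARN below is a placeholder for a role you have configured with a matching trust policy:

```yaml
deploy:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://sts.amazonaws.com
  script:
    # Exchange the GitLab-issued OIDC token for temporary AWS credentials.
    # The role ARN is a placeholder; its trust policy must allow this project.
    - >
      aws sts assume-role-with-web-identity
      --role-arn arn:aws:iam::123456789012:role/gitlab-ci-deploy
      --role-session-name gitlab-ci
      --web-identity-token "$GITLAB_OIDC_TOKEN"
      --duration-seconds 3600
```

The same pattern applies to GCP and Azure; only the token-exchange command changes.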
Secrets management: Keep credentials out of the pipeline
Secrets are the most common leak in pipelines. Logs expose them, artifacts embed them, and shared environments keep them around longer than needed. The rule I follow is simple: the pipeline should fetch secrets at runtime and never store them on disk for longer than necessary.
- Never commit secrets in source, including in `.env` files or configuration templates. Use tools like `git-secrets` or `teller` to catch mistakes early. Git-secrets: https://github.com/awslabs/git-secrets
- Use a dedicated secrets manager: AWS Secrets Manager, Azure Key Vault, Google Secret Manager, or HashiCorp Vault. In Kubernetes, use External Secrets Operator or Sealed Secrets rather than native Secret objects for multi-cluster setups.
- Scope secrets to environments and jobs. In GitHub Actions, define secrets at the environment level and require approvals for production.
A GitHub Actions workflow that pulls a database password at runtime might look like this:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - name: Fetch database credentials from AWS Secrets Manager
        id: db-secrets
        uses: aws-actions/aws-secretsmanager-get-secrets@v1
        with:
          # The PROD_DB alias controls the prefix of the generated env vars
          secret-ids: |
            PROD_DB, prod/db/credentials
          parse-json-secrets: true
      - name: Deploy
        run: |
          # These vars are automatically set from the secret JSON
          echo "Using DB host: $PROD_DB_HOST"
          ./scripts/deploy.sh
        env:
          DB_HOST: ${{ env.PROD_DB_HOST }}
          DB_USER: ${{ env.PROD_DB_USER }}
          DB_PASS: ${{ env.PROD_DB_PASS }}
```
In Kubernetes, I often use the External Secrets Operator to sync from cloud secrets into cluster Secret objects, with short TTLs:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: db-credentials
  data:
    - secretKey: host
      remoteRef:
        key: prod/db/credentials
        property: host
    - secretKey: username
      remoteRef:
        key: prod/db/credentials
        property: username
    - secretKey: password
      remoteRef:
        key: prod/db/credentials
        property: password
```
This pattern removes secrets from the pipeline and keeps them out of container images and logs.
Build and dependency security: Lock, scan, and verify
Dependencies are a major source of risk. The pipeline should enforce reproducible builds and scan for vulnerabilities and malicious packages. Here are practical steps.
- Pin dependencies. For npm, use `package-lock.json`. For Python, prefer `poetry.lock` or `pip-tools` with pinned versions. For Go, enable vendoring or use a module proxy with checksums. For containers, pin base image digests.
- Use private or vetted registries. For public packages, configure an allowlist or an internal proxy such as JFrog Artifactory or Sonatype Nexus. This helps prevent dependency confusion attacks where a public package overrides an internal one.
- Run SCA (software composition analysis) and image scanning in CI. Tools like Trivy, Grype, or Snyk can gate builds on severity thresholds.
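For npm specifically, one cheap hedge against dependency confusion is to bind your internal scope to your private registry in `.npmrc`, so packages under your org scope can never resolve from the public registry. A sketch, where the registry URL and scope are placeholders:

```ini
# .npmrc — route the internal scope to the private registry only
@your-org:registry=https://artifactory.example.com/api/npm/npm-internal/
# Everything else still resolves from the public registry
registry=https://registry.npmjs.org/
```

Equivalent scoping exists for pip (`--index-url`/`--extra-index-url` discipline) and Go (`GOPRIVATE`).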
Here is a GitHub Actions job that scans code, dependencies, and container images:
```yaml
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Trivy vulnerability scanner on code and dependencies
        uses: aquasecurity/trivy-action@master # consider pinning to a release tag
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run Trivy on container image
        uses: aquasecurity/trivy-action@master # consider pinning to a release tag
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
      - name: Upload SARIF results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'
```
When scanning fails, fail the build. Gate policies are important; if you only scan and report, teams will ignore results. Start with CRITICAL and HIGH. As the process matures, expand scope and reduce false positives with exceptions tracked in code.
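Trivy supports a `.trivyignore` file for exactly those tracked exceptions; each accepted finding lives in the repo where it can be reviewed. A hypothetical example (the CVE IDs are illustrative, not real findings):

```
# .trivyignore — accepted risks, reviewed on a fixed cadence
# False positive: vulnerable code path is not reachable in our usage
CVE-2023-12345
# Fix scheduled with the next base image bump
CVE-2023-67890
```

Keeping the justification as a comment next to each entry makes the quarterly review fast and honest.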
For Go projects, I like to enable checksum validation and module proxy usage. A simple go.mod with replacements for internal modules and a controlled proxy reduces surprises:
```go
module github.com/your-org/your-app

go 1.22

require (
	github.com/gorilla/mux v1.8.1
	golang.org/x/sync v0.6.0
)

replace github.com/your-org/internal-lib => ./internal/lib
```
Combine this with CI caching to speed up builds while keeping reproducibility.
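On the CI runner, a few standard Go toolchain settings enforce the proxy and checksum behavior; the values below are a sketch (the `GOPRIVATE` pattern is a placeholder for your org):

```
# Go environment settings for CI (e.g. via `go env -w` or exported vars)
GOFLAGS=-mod=readonly                     # fail the build if go.mod/go.sum would change
GOPROXY=https://proxy.golang.org,direct   # or your internal module proxy
GOSUMDB=sum.golang.org                    # verify public modules against the checksum DB
GOPRIVATE=github.com/your-org/*           # skip proxy and checksum DB for internal modules
```

`-mod=readonly` in particular turns "a dependency silently changed" into a hard CI failure instead of a surprise.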
Artifact integrity: Sign and verify
Signing artifacts ties a build to a trusted identity. In my experience, once you add signing, you will catch several categories of issues: unauthorized builds, misconfigured pipelines, and artifact swaps. Sigstore’s Cosign is a popular choice for container images and software bills of materials (SBOMs). SLSA (Supply-chain Levels for Software Artifacts) provides a framework to reason about build integrity: https://slsa.dev/
Below is a pipeline that builds an image, generates an SBOM, signs the image, and verifies the signature:
```yaml
jobs:
  sign:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write # needed for keyless signing via OIDC
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        # Cosign signs images by digest in a registry; in a real pipeline,
        # push the image to your registry before the signing step.
        run: docker build -t myapp:${{ github.sha }} .
      - name: Install Cosign
        uses: sigstore/cosign-installer@v3
      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          format: spdx-json
          output-file: sbom.spdx.json
      - name: Sign image
        run: cosign sign --yes myapp:${{ github.sha }}
        env:
          COSIGN_EXPERIMENTAL: 1 # enables the OIDC keyless flow on Cosign 1.x; it is the default in 2.x
      - name: Verify signature
        run: |
          cosign verify myapp:${{ github.sha }} \
            --certificate-oidc-issuer https://token.actions.githubusercontent.com \
            --certificate-identity-regexp "^https://github.com/your-org/your-repo/\.github/workflows/.*@refs/heads/main$"
```
Cosign uses OIDC to create short-lived signing certificates. You can enforce signature verification in admission controllers like Kyverno or OPA Gatekeeper in Kubernetes. This prevents unsigned or improperly signed images from being deployed.
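A Kyverno policy sketch for that enforcement: the image pattern, OIDC issuer, and subject below are placeholders you would adapt to your registry and repository.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/your-org/*" # placeholder: your registry and namespace
          attestors:
            - entries:
                - keyless:
                    # Must match the identity your pipeline signed with
                    issuer: https://token.actions.githubusercontent.com
                    subject: https://github.com/your-org/your-repo/*
```

Start with `validationFailureAction: Audit` in a staging cluster before switching to `Enforce`, so a misconfigured policy cannot block legitimate deploys.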
Policy as code: Enforce rules early and everywhere
Policy-as-code translates security rules into executable checks. It is useful in PRs, during builds, and at deployment. OPA Gatekeeper for Kubernetes and Conftest for CI are common choices.
An example Conftest policy that rejects containers running as root:
```rego
package main

deny[msg] {
  input.Config.User == "root"
  msg = "Containers cannot run as root"
}
```
In a pipeline, you can run Conftest on Kubernetes manifests or Dockerfiles:
```shell
# Run conftest on Kubernetes manifests
conftest test deployment.yaml -p policies/

# Run conftest on a Dockerfile
docker run --rm -v "$(pwd)":/project -v "$(pwd)/policies":/policies \
  openpolicyagent/conftest test /project/Dockerfile -p /policies
```
OPA Gatekeeper applies similar policies at cluster level. Policies should be stored in the same repo as application code so they can evolve with the service.
Secrets scanning and path constraints
Even with a robust secrets manager, teams sometimes leak secrets. Add scanning to catch them before they leave your environment. GitGuardian’s ggshield and TruffleHog are popular options. A minimal CI step:
```yaml
- name: Scan for secrets
  uses: trufflesecurity/trufflehog@main
  with:
    path: ./
    base: main
    head: HEAD
    extra_args: --debug --only-verified
```
Path constraints matter too. Limit which workflows can access production secrets by scoping them to specific environments and branches. In GitHub, environments can require manual approvals and restrict branches:
```yaml
deploy-prod:
  runs-on: ubuntu-latest
  environment:
    name: production
    url: https://prod.example.com
  steps:
    - uses: actions/checkout@v4
    - name: Deploy
      run: ./scripts/deploy.sh
```
Isolation and least privilege for runners
Build runners can be a weak link. Long-lived VMs retain state and secrets; containers are ephemeral but can be privileged. In practice, I prefer ephemeral, containerized runners with minimal privileges.
- Avoid running containers with `--privileged`. Use rootless Docker or Podman if possible.
- If using self-hosted runners, isolate per repository or team and restrict network egress. GitHub supports runner groups: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners
- Consider using Kubernetes jobs for builds with resource quotas and network policies to limit what a build can access.
Here is a minimal Kubernetes Job spec for a build step with security context:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: build-job
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: builder
          # docker:dind needs privileged mode to run a daemon, so use a
          # daemonless builder instead; rootless BuildKit is one option
          # (kaniko is another).
          image: moby/buildkit:rootless
          securityContext:
            privileged: false
            runAsUser: 1000
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            seccompProfile:
              type: Unconfined # rootless BuildKit typically needs an unconfined seccomp profile
          command: ["sh", "-c"]
          args:
            - |
              set -e
              buildctl-daemonless.sh build \
                --frontend dockerfile.v0 \
                --local context=. \
                --local dockerfile=.
          env:
            - name: TAG
              value: "abc123"
```
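To pair with the network policies mentioned above, here is a minimal NetworkPolicy sketch that denies all egress from build pods except DNS; the `role: builder` label is a placeholder you would apply to your build pods, and you would add explicit rules for your registry and package proxy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-build-egress
spec:
  podSelector:
    matchLabels:
      role: builder # placeholder label on build pods
  policyTypes:
    - Egress
  egress:
    # Allow DNS only; everything else is denied by default once
    # the pod is selected by an Egress policy
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

A build that cannot reach arbitrary hosts is a build that cannot exfiltrate secrets or pull unexpected dependencies.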
Deployment approvals and environment protection
Not all deployments are equal. It is fine to auto-deploy preview environments, but production should have guardrails. Use environments and approvals to enforce checks.
GitHub environments support required reviewers and deployment gates. In GitLab, environments can have manual actions and approval rules. Add automated checks before approval, like vulnerability scanning and policy checks.
A GitLab CI snippet that requires manual approval for production:
```yaml
deploy_staging:
  stage: deploy
  script:
    - ./scripts/deploy.sh staging
  environment:
    name: staging
  only:
    - main

deploy_production:
  stage: deploy
  script:
    - ./scripts/deploy.sh production
  environment:
    name: production
  when: manual
  only:
    - main
```
Observability and audit: Know what happened
Security is also about knowing what happened. Ensure your pipeline emits logs with traceable context: commit SHA, build ID, user who triggered the job, and the environment. Centralize logs to a system like Splunk, Elasticsearch, or cloud-native services (CloudWatch, Log Analytics). For Kubernetes, consider OpenTelemetry for traceable deployments and admission audit logs.
When something goes wrong, a simple trace id can help correlate build, image, and deployment events. For example, pass the build ID through the pipeline as a label:
```yaml
- name: Deploy
  run: ./scripts/deploy.sh
  env:
    BUILD_ID: ${{ github.run_id }}
```
Then label Kubernetes resources:
```yaml
metadata:
  labels:
    build-id: "${BUILD_ID}"
```
Tradeoffs and limitations
Not every team can implement every control immediately. Here are tradeoffs I have encountered.
- Strict branch protection slows down hotfixes. Mitigate by having a fast-track path with two reviewers and automated checks only.
- Container signing adds complexity and requires policy enforcement to be meaningful. Start by signing and verifying manually, then automate.
- Ephemeral runners can be more expensive and slower to provision. Compare the security gain against your release cadence. For low-risk apps, managed runners with strict scopes may be sufficient.
- Policy-as-code can block deployments unexpectedly. Version policies, run conftest in CI before merge, and maintain an exceptions process with time limits.
Personal experience: What worked, what hurt
Over the years, I have made plenty of mistakes. One time, I stored a cloud access key as a repository secret and rotated it quarterly, thinking that was enough. A developer printed the key in a debug log during a failed build. It ended up in the Actions logs, visible to anyone with read access. We rotated it immediately and switched to OIDC. That was the moment I decided never to rely on static credentials where OIDC is available.
Another learning moment came with dependency pinning. In an early microservice, we left version ranges open for convenience. A transitive dependency changed behavior in a minor release, causing a production incident. Since then, I pin everything and run dependency updates in a controlled cadence, with a clear review process.
Signing artifacts felt heavy at first, but once we integrated Cosign into the pipeline and added admission control in Kubernetes, it caught a misconfigured job that was pushing unsigned images from a local machine. That alone justified the effort.
Getting started: Workflow and mental model
If you are starting from scratch, do not try to implement all controls at once. Build momentum with small, durable changes.
- Define your trust boundaries: repo, build, artifact registry, deploy.
- Enforce branch protection and minimal workflow permissions.
- Integrate OIDC for cloud access and remove long-lived credentials.
- Pin dependencies and run vulnerability scans in CI with failure gates.
- Add artifact signing and verify signatures in a staging environment first.
- Introduce policy-as-code for Kubernetes deployments.
- Centralize logs and keep a simple change ledger in your repo.
Folder structure that works well for a team:
```text
.github/
  workflows/
    build.yml
    security.yml
    deploy.yml
policies/
  deny_root.rego
  enforce_signatures.rego
scripts/
  deploy.sh
  scan_deps.sh
k8s/
  base/
    deployment.yaml
    service.yaml
  overlays/
    staging/
    production/
Dockerfile
go.mod        # or package.json, requirements.txt
```
The mental model is simple. Treat every step as a verification gate. The source step verifies who can change code. The build step verifies what goes into the artifact. The registry step verifies the artifact’s integrity. The deploy step verifies policy compliance. Each gate narrows risk.
Free learning resources
- OpenSSF Supply Chain Security Guidelines: https://openssf.org/
- SLSA Framework: https://slsa.dev/
- GitHub Security Hardening for Actions: https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions
- Sigstore Cosign Documentation: https://docs.sigstore.dev/cosign/overview/
- Trivy by Aqua Security: https://aquasecurity.github.io/trivy/
- Conftest by Open Policy Agent: https://www.conftest.dev/
- OPA Gatekeeper for Kubernetes: https://open-policy-agent.github.io/gatekeeper/website/docs/
- GitGuardian Secret Scanning: https://docs.gitguardian.com/
These resources are practical and maintained by teams with real-world exposure. They will help you go deeper on topics that matter most for your stack.
Summary: Who should use these practices and who might skip
CI/CD pipeline security practices are valuable for any team that ships software more than once a week, uses cloud credentials, or depends on third-party packages. They are especially important if you run production workloads, handle sensitive data, or operate in regulated environments. The effort pays off through fewer incidents, faster audits, and safer deployments.
If you are building a small personal project that never touches real data or cloud resources, you can prioritize just branch protection and dependency pinning, then expand as needed. If you are working on large teams or shared platforms, you should adopt as many controls as your tooling supports, starting with least privilege, secrets management, and artifact signing.
The takeaway is grounded. Treat the pipeline as a privileged system and secure it like you would a production service. Limit permissions, verify integrity, and leave an audit trail. The goal is not perfection; it is confidence that every release is the one you intended.