GitOps Patterns for Continuous Delivery
Why declarative infrastructure and Git as a single source of truth now matter more than ever

I’ve lost count of how many times a deployment broke because someone clicked a button in a UI, or because a human edited a Kubernetes manifest directly on a cluster. Those moments sting because they’re usually preventable. When every change flows through Git, you get reproducibility, auditability, and a way to reason about your system that no dashboard can match. GitOps turns Git from a code versioning tool into a control plane for delivery, and it fits right into modern CI/CD pipelines where environments multiply and compliance demands clarity.
If you’ve ever wrestled with drift, snowflake servers, or pipelines that deploy with a mix of scripts and prayers, GitOps offers a path that reduces risk and speeds feedback loops. It’s not a silver bullet; it’s a disciplined pattern for continuous delivery. In this post, I’ll unpack the core patterns, tradeoffs, and practical examples you can apply to Kubernetes and beyond, and share what actually worked on real projects.
Where GitOps fits today
In the real world, GitOps has become a default for teams running Kubernetes at scale. It pairs well with containerized microservices, ephemeral preview environments, and infrastructure-as-code. It’s used by platform engineers, DevOps teams, and increasingly by developer teams who want ownership without admin-level access. Compared to traditional push-based pipelines (e.g., Jenkins or GitHub Actions invoking kubectl apply), GitOps is a pull-based model. Controllers inside the cluster watch Git repositories and reconcile the actual state to the declared state. This reduces the number of credentials floating around in CI systems and provides a clear audit trail.
Technically, GitOps complements CI rather than replacing it. CI builds artifacts and runs tests. GitOps delivers those artifacts to environments by updating manifests and letting controllers handle rollout. The most common tooling combination is Argo CD or Flux for the GitOps controller, Kustomize or Helm for templating, and a container registry to hold images. To automate image bumps, Argo CD Image Updater can update manifests when new images land in the registry, and you can extend GitOps beyond Kubernetes to cloud infrastructure via operators like Crossplane, or use Atlantis for Terraform PR workflows.
Core GitOps patterns for continuous delivery
Below are patterns that consistently deliver value across small and large teams. They are presented with practical code examples and a focus on what you actually need to decide and maintain.
Declarative manifests with a single source of truth
Everything required to run an environment should live in Git: deployments, services, configmaps, secrets, RBAC, and any infrastructure definitions. No manual changes. This makes rollbacks a git revert, not a war room.
A simple application layout with Kustomize:
app/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── overlays/
│   ├── dev/
│   │   ├── replica-count.yaml
│   │   ├── configmap.yaml
│   │   └── kustomization.yaml
│   ├── staging/
│   │   ├── replica-count.yaml
│   │   ├── configmap.yaml
│   │   └── kustomization.yaml
│   └── prod/
│       ├── replica-count.yaml
│       ├── configmap.yaml
│       └── kustomization.yaml
└── README.md
Example base deployment:
# app/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: app
          image: ghcr.io/example/sample-app:0.1.0
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: info
Example base service:
# app/base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: sample-app
Kustomization for base:
# app/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
Overlay for dev with a replica patch and config override:
# app/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
  - ../../base
  - configmap.yaml # a new resource, not a patch: the ConfigMap doesn't exist in base
patches:
  - path: replica-count.yaml # patchesStrategicMerge is deprecated in newer Kustomize
# app/overlays/dev/replica-count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
# app/overlays/dev/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-app-config
data:
  LOG_LEVEL: debug
Git becomes the canonical source. Any environment-specific changes go into overlays, keeping differences explicit and reviewable.
Environment promotion via pull request
Promotion is a pull request that changes an overlay to point to a new image or version. This pattern makes the path to production visible and auditable.
Change a tag in an overlay by updating a Kustomize images entry, or by updating a Helm values file. Here’s a simple workflow:
# app/overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
resources:
  - ../../base
images:
  - name: ghcr.io/example/sample-app
    newTag: 0.1.1
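If you template with Helm instead of Kustomize, the same promotion is a one-line change in a values file that the controller renders from Git. A sketch, with illustrative file paths and value names:

```yaml
# app/helm/values-staging.yaml (illustrative layout)
image:
  repository: ghcr.io/example/sample-app
  tag: "0.1.1" # promotion PRs bump only this line
replicaCount: 2
```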
A promotion PR from staging to prod updates prod/kustomization.yaml to the same tag after validation. The PR description should include:
- Link to the CI run that built the image
- Link to staging environment verification
- Rollback plan
PR-based promotion is where GitOps shines: changes are peer reviewed, status checks pass, and the merge triggers the reconciliation loop. No hidden state. It’s also friendly to compliance: approvals are captured, and commit history preserves the rationale.
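The promotion PR itself can be opened by CI. A sketch of a GitHub Actions job that bumps the staging overlay, assuming a token secret and the third-party peter-evans/create-pull-request action (names and paths are illustrative):

```yaml
# .github/workflows/promote-staging.yaml (a sketch, not a drop-in workflow)
name: promote-to-staging
on:
  workflow_dispatch:
    inputs:
      tag:
        required: true
jobs:
  open-pr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: example/manifests
          token: ${{ secrets.MANIFEST_REPO_TOKEN }}
      - name: Bump image tag in the staging overlay
        run: |
          cd apps/sample-app/overlays/staging
          kustomize edit set image ghcr.io/example/sample-app=ghcr.io/example/sample-app:${{ github.event.inputs.tag }}
      - uses: peter-evans/create-pull-request@v6
        with:
          token: ${{ secrets.MANIFEST_REPO_TOKEN }}
          branch: promote/sample-app-${{ github.event.inputs.tag }}
          title: "Promote sample-app ${{ github.event.inputs.tag }} to staging"
```

The merge, not the workflow, is what deploys: the GitOps controller picks up the new tag after review.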
Git structure and repository strategy
You can model Git repositories in two main ways:
- Monorepo: all overlays and base definitions live together. Easier to share base components, consistent versioning, simple discoverability.
- Polyrepo: separate repositories for app code, manifests, and environment configs. Better for strict separation of duties or multi-team boundaries.
A typical monorepo structure:
cluster-infra/
├── apps/
│   ├── sample-app/
│   │   ├── base/
│   │   └── overlays/
│   │       ├── dev/
│   │       ├── staging/
│   │       └── prod/
│   └── another-app/
│       └── ...
├── infrastructure/
│   ├── namespaces/
│   ├── ingress/
│   └── monitoring/
└── README.md
A polyrepo approach:
- App repo: code + Dockerfile + CI config
- Manifest repo: Kustomize overlays or Helm charts
- Cluster repo: cluster-level resources (CRDs, operators, monitoring)
Argo CD supports multiple approaches via ApplicationSets. Flux supports GitRepository objects that point to specific paths in repos. Choose the structure that matches team topology and access controls.
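For the Flux side, the pairing of a GitRepository source with a Kustomization that targets a path looks roughly like this (a sketch; intervals and names are illustrative, and API versions should be checked against your Flux install):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: manifests
  namespace: flux-system
spec:
  interval: 1m # how often to poll Git
  url: https://github.com/example/manifests.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: sample-app-staging
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: manifests
  path: ./apps/sample-app/overlays/staging
  prune: true # delete resources removed from Git
```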
Automated image updates
Manual bumping of image tags gets old fast. Argo CD Image Updater watches your image registry and updates overlays automatically based on policies. You can configure it to update only within a semantic version range or match patterns.
Example annotation on the Argo CD Application to enable image updates:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app-staging
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: sample-app=ghcr.io/example/sample-app
    argocd-image-updater.argoproj.io/sample-app.update-strategy: semver
    argocd-image-updater.argoproj.io/sample-app.allow-tags: regexp:^0.1.[0-9]+$
spec:
  project: default
  source:
    repoURL: https://github.com/example/manifests.git
    targetRevision: main
    path: apps/sample-app/overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
In production, you usually gate automatic updates behind a promotion PR, while letting dev/staging track latest. This balances velocity and control.
Progressive delivery and health checks
GitOps isn’t just apply-and-pray. Use health checks and progressive delivery patterns to increase confidence. Argo Rollouts can perform canary or blue-green deployments. You define a Rollout resource instead of a Deployment, and GitOps reconciles it.
Example Argo Rollout canary strategy:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: sample-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: app
          image: ghcr.io/example/sample-app:0.1.1
          ports:
            - containerPort: 8080
  strategy:
    canary:
      steps:
        - setWeight: 25
        - pause: {duration: 60s}
        - setWeight: 50
        - pause: {duration: 60s}
        - setWeight: 100
Use readiness/liveness probes and analyze metrics from Prometheus or Datadog during canary steps. In real-world projects, pairing canary with automated analysis reduces incident rates by catching regressions before full rollout.
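Automated analysis is typically expressed as an AnalysisTemplate that the Rollout runs between canary steps. A sketch with a Prometheus provider; the query, address, and thresholds are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 30s
      count: 4
      # fail the canary if the success ratio drops below 95%
      successCondition: result[0] >= 0.95
      provider:
        prometheus:
          address: http://prometheus.monitoring.svc:9090
          query: |
            sum(rate(http_requests_total{app="sample-app",status!~"5.."}[2m]))
            /
            sum(rate(http_requests_total{app="sample-app"}[2m]))
```

The Rollout references it with an analysis step (e.g. after the first setWeight), so a failed query aborts the rollout automatically.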
Secrets management without Git secrets
Never store raw secrets in Git. Solutions include:
- Sealed Secrets: encrypt secrets into Kubernetes manifests that can be committed safely. Decryption happens in-cluster.
- External Secrets Operator: fetch secrets from Vault, AWS Secrets Manager, or GCP Secret Manager, creating Kubernetes secrets from them.
- SOPS + Age: encrypt fields in files stored in Git, with decryption keys in cluster.
Example Sealed Secret manifest (encrypted with cluster key):
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: sample-app-secrets
  namespace: staging
spec:
  encryptedData:
    DATABASE_URL: AgBy3i4...
    API_KEY: AgB8z2k...
The workflow is simple: create a Secret locally, use kubeseal to encrypt, commit the SealedSecret to Git. The controller decrypts it in the cluster. Developers don’t need cluster secrets to update manifests.
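If you prefer the External Secrets Operator route from the list above, the committed manifest is a pointer rather than ciphertext. A sketch, assuming a ClusterSecretStore named aws-secrets-manager and illustrative key names:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: sample-app-secrets
  namespace: staging
spec:
  refreshInterval: 1h # re-fetch from the backing store periodically
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager
  target:
    name: sample-app-secrets # Kubernetes Secret created by the operator
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: staging/sample-app
        property: database_url
```

Nothing sensitive ever touches Git; the operator materializes the Secret in-cluster.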
Multi-tenancy and multi-cluster
As teams grow, you’ll run multiple clusters (dev, staging, prod, or regional clusters). GitOps scales by using ApplicationSets in Argo CD or Flux Kustomization controllers per cluster. Patterns include:
- Path-based overlays: cluster/cluster-name overlays
- Label-based selectors: auto-generate Applications per cluster label
- Environment isolation: separate namespaces or dedicated clusters for compliance
Example ApplicationSet that generates an Application per cluster label:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-app
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            environment: dev
  template:
    metadata:
      name: '{{name}}-sample-app'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/manifests.git
        targetRevision: main
        path: apps/sample-app/overlays/dev
      destination:
        server: '{{server}}'
        namespace: dev
Drift detection and reconciliation
Drift happens when someone bypasses Git and modifies the cluster directly. GitOps controllers constantly compare desired state (Git) with actual state (cluster), and reconcile. This capability eliminates snowflake configurations and gives you confidence in reproducibility.
When drift is detected, you can decide to allow the controller to override or to block and notify. In production, I prefer a “manual sync” policy for prod clusters where changes must be reviewed and merged, while dev clusters automatically sync. This separates safety from speed.
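In Argo CD terms, that split is a syncPolicy decision per Application. A sketch of the dev side; the prod Application would simply omit syncPolicy.automated so syncs require a human:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/manifests.git
    targetRevision: main
    path: apps/sample-app/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true    # delete resources removed from Git
      selfHeal: true # revert manual drift automatically
```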
Observability for GitOps
Monitoring the GitOps pipeline itself matters. Track sync status, health status, and reconciliation errors. Argo CD provides a UI and metrics. Flux emits Prometheus metrics. Example metrics to watch:
- argocd_app_sync_total: how often syncs occur
- argocd_app_health_status: per-app health
- git_ops_reconciliation_errors: failures in applying manifests
Combine these with alerting so teams know when a PR didn’t make it to the cluster or when a rollout is stuck.
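One concrete alert worth having: an Application stuck OutOfSync. A sketch assuming Argo CD metrics are scraped via the Prometheus Operator; the duration and severity are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  namespace: monitoring
spec:
  groups:
    - name: gitops
      rules:
        - alert: ArgoAppOutOfSync
          # argocd_app_info exposes sync_status as a label
          expr: argocd_app_info{sync_status="OutOfSync"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: 'Application {{ $labels.name }} has been OutOfSync for 15 minutes'
```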
Honest evaluation: strengths, weaknesses, and tradeoffs
GitOps is a strong fit when:
- You run containerized workloads, especially Kubernetes
- You need auditability and reproducibility across environments
- You want to reduce credentials in CI systems and implement least-privilege access
- Your team wants ownership without direct cluster admin rights
- You need to manage many clusters or environments consistently
GitOps might not be the best choice when:
- Your workload isn’t declarative or cannot be reconciled by a controller (e.g., stateful systems with strict ordering constraints)
- You need extremely fast, ad hoc emergency changes and cannot rely on Git approvals or controllers
- Your team lacks the maturity to maintain a clean Git repository structure and versioning discipline
- You rely on stateful infrastructure that’s not well supported by declarative operators (though many gaps are closing)
Tradeoffs to consider:
- Complexity: initial setup of controllers, repositories, and templating requires effort
- Latency: pull-based loops add a few seconds to minutes to apply changes; not ideal for ultra-low-latency changes
- Git noise: automated updates can flood PRs; careful policy tuning is required
- Secrets: you’ll need a separate strategy for managing secrets safely
- RBAC: access to Git repositories and environments must be carefully modeled
Personal experience: lessons from the field
On one project, we migrated a dozen microservices from a Jenkins push pipeline to Argo CD. The biggest surprise was not the tooling but the habits. The team was used to SSHing into a bastion and editing manifests. The first week saw a handful of drift incidents. We added a controller policy that blocked manual changes in prod and introduced a “drift alert” in Slack via Argo CD notifications. The noise died down in a couple of weeks.
Another moment that stands out: during an incident, we needed to roll back a bad release. In the old world, we’d scramble for the previous image tag, hope we remembered the right Helm values, and click deploy. With GitOps, we reverted a single commit in the overlay, merged the PR, and watched Argo CD synchronize. The rollback was auditable, calm, and complete. That’s when the team truly believed.
Common mistakes I’ve seen:
- Over-templating: teams create complex Helm charts with deep nesting that nobody understands. Kustomize overlays are often simpler and easier to reason about. Use Helm when you need library charts and complex templating, otherwise prefer Kustomize for straightforward overlays.
- No promotion gating: auto-updating prod via image updater without PR gates leads to surprises. Keep prod promotion manual and explicit.
- Ignoring health checks: without proper readiness/liveness probes, canary and progressive delivery won’t work as intended.
- Secret sprawl: committing unencrypted secrets or distributing them via insecure channels. Adopt Sealed Secrets or External Secrets early.
Getting started: workflow and mental model
Start by aligning on Git structure and a clear mapping between repos and clusters. Decide on one controller and stick to it for consistency. Set up your templating tooling, typically Kustomize or Helm, and define base manifests that all environments share.
Workflow for developers:
- Change app code, open PR, CI builds image and runs tests
- CI pushes image to registry and opens a PR in the manifest repo to update the overlay tag
- Code owner reviews, merges PR for the target environment
- GitOps controller detects change and reconciles the cluster
- Automated health checks verify rollout; alerts notify if unhealthy
Workflow for platform engineers:
- Maintain base overlays and cluster-wide resources (namespaces, ingress, monitoring)
- Define ApplicationSets or Flux Kustomizations for new clusters
- Enforce RBAC policies in Git and access control in Git hosting
- Monitor sync/health metrics and refine policies
Folder structure for a monorepo with multiple apps and clusters:
monorepo/
├── apps/
│   ├── app-a/
│   │   ├── base/
│   │   └── overlays/
│   │       ├── dev/
│   │       ├── staging/
│   │       └── prod/
│   └── app-b/
│       └── ...
├── clusters/
│   ├── dev/
│   │   ├── kustomization.yaml
│   │   └── appset.yaml
│   └── prod/
│       ├── kustomization.yaml
│       └── appset.yaml
├── infrastructure/
│   ├── monitoring/
│   ├── ingress-nginx/
│   └── namespaces/
└── README.md
Example cluster-level Kustomization that pulls in apps and infra:
# clusters/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../apps/app-a/overlays/dev
  - ../../apps/app-b/overlays/dev
  - ../../infrastructure/monitoring
  - ../../infrastructure/namespaces
Security tips:
- Use least privilege: developers update manifests but don’t have cluster admin
- Store controller credentials in the cluster, not CI
- Encrypt secrets at rest in Git (Sealed Secrets) and manage keys safely
- Scan manifests and images for vulnerabilities before merge
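In Argo CD, the least-privilege boundary can be expressed as an AppProject that scopes what a team’s Applications may touch. A sketch with illustrative names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  sourceRepos:
    - https://github.com/example/manifests.git # only this repo may be deployed
  destinations:
    - server: https://kubernetes.default.svc
      namespace: team-a-* # only team-a namespaces
  clusterResourceWhitelist: [] # no cluster-scoped resources for app teams
```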
Free learning resources
- Argo CD documentation: https://argo-cd.readthedocs.io/ - strong starting point for GitOps controllers and ApplicationSets
- Flux documentation: https://fluxcd.io/docs/ - excellent for understanding reconciliation loops and GitRepository objects
- Kustomize documentation: https://kubectl.docs.kubernetes.io/guides/config_management/kustomize/ - practical examples for overlays and patches
- Sealed Secrets: https://github.com/bitnami-labs/sealed-secrets - how to safely commit encrypted secrets
- Argo Rollouts: https://argoproj.github.io/argo-rollouts/ - progressive delivery patterns for canary/blue-green
- CNCF GitOps Working Group: https://opengitops.dev/ - community specifications and best practices
Summary and final thoughts
GitOps is a strong match for teams delivering containerized services who value auditability, safety, and consistency. It fits organizations with multiple environments and clusters, compliance requirements, and a desire to give developers ownership without handing over cluster admin keys. It’s less ideal for environments where changes must be extremely fast or where workloads aren’t yet amenable to declarative management.
Who should adopt GitOps:
- Platform and DevOps teams managing Kubernetes across multiple environments
- Product teams that want PR-based promotion and clear rollback paths
- Organizations that need traceability for compliance or incident response
Who might skip or defer:
- Teams with minimal infrastructure or single-host setups where traditional CI/CD is sufficient
- Workloads with strict ordering or state constraints that aren’t covered by available operators
- Teams without the appetite to maintain a Git repository structure and templating discipline
The takeaway is simple: GitOps turns deployment into a transparent, repeatable, and collaborative process. It reduces risk, clarifies ownership, and makes rollbacks boring. When you align Git structure with team boundaries and choose a controller that fits your stack, the result is a delivery pipeline that scales with your product instead of against it.