
Kubernetes

Deploy Karrio on any Kubernetes cluster using the official Helm chart.

Prerequisites

  • Kubernetes cluster (1.25+)
  • Helm 3.10+
  • kubectl configured for your cluster
  • An Ingress controller (e.g. ingress-nginx)
  • cert-manager (recommended for TLS)
  • External PostgreSQL 14+ database
  • External Redis 6+ instance

Architecture Overview

The Helm chart deploys three workloads into your cluster:

                  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                  β”‚   Ingress   β”‚
                  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
             β–Ό                       β–Ό
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚  API (Django)   β”‚     β”‚  Dashboard       β”‚
    β”‚  Port 5002      β”‚     β”‚  (Next.js)       β”‚
    β”‚  2 replicas     β”‚     β”‚  Port 3002       β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
            β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚  Worker         β”‚
    β”‚  (async tasks)  β”‚
    β”‚  1 replica      β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
            β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β–Ό                  β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ PostgreSQL β”‚    β”‚   Redis    β”‚
β”‚ (external) β”‚    β”‚ (external) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

What the chart manages: API deployment + service, Worker deployment, Dashboard deployment + service, Ingress, ConfigMap, Secret, ServiceAccount, HPA (optional), PodDisruptionBudgets.

What stays external: PostgreSQL and Redis. Use a managed service (e.g. RDS, Cloud SQL, ElastiCache) or deploy your own β€” the chart does not include stateful dependencies.
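
After an install, you can verify this split yourself: `helm get manifest` prints every resource a release owns, so counting the `kind:` lines shows there are no StatefulSets or bundled databases. A sketch, assuming the release name `karrio` used in the Quick Start:

```bash
# Count the resource kinds the release manages (expects the release to exist)
kinds() { grep -E '^kind:' | sort | uniq -c; }

if command -v helm >/dev/null; then
  helm get manifest karrio -n karrio | kinds || echo "release karrio not found"
else
  echo "helm not found on PATH"
fi
```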

Quick Start

1. Get the chart (local)

Clone the repository; you will install from the local chart directory in step 4:

```bash
git clone https://github.com/karrioapi/karrio.git
cd karrio
```

2. Create a namespace

```bash
kubectl create namespace karrio
```

3. Create your values file

```bash
cat > my-values.yaml <<EOF
config:
  secretKey: "$(openssl rand -hex 32)"
  jwtSecret: "$(openssl rand -hex 32)"

database:
  host: your-postgres-host
  port: 5432
  name: karrio
  username: karrio
  password: your-db-password

redis:
  host: your-redis-host
  port: 6379

ingress:
  enabled: true
  className: nginx
  api:
    host: api.yourdomain.com
  dashboard:
    host: app.yourdomain.com
  tls:
    - secretName: karrio-tls
      hosts:
        - api.yourdomain.com
        - app.yourdomain.com

dashboardUrl: "https://app.yourdomain.com"
karrioPublicUrl: "https://api.yourdomain.com"
EOF
```

4. Install

```bash
helm install karrio ./charts/karrio \
  --namespace karrio \
  -f my-values.yaml
```

5. Verify

```bash
kubectl get pods -n karrio
```

Wait for all pods to reach the Running/Ready state (the API typically takes 1–2 minutes to start). Then access:

  • API: https://api.yourdomain.com
  • Dashboard: https://app.yourdomain.com

Default login: admin@example.com / demo
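
Once DNS points at your Ingress, you can smoke-test both hosts from outside the cluster. The `ok` helper below is a small illustrative wrapper; the hostnames are the ones from `my-values.yaml`:

```bash
# Report OK/FAIL per URL; -f makes curl treat HTTP error codes as failures
ok() { curl -fsS --max-time 10 -o /dev/null "$1" && echo "OK   $1" || echo "FAIL $1"; }

ok https://api.yourdomain.com/v1/references
ok https://app.yourdomain.com/
```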

Configuration Reference

Images

| Parameter | Description | Default |
| --- | --- | --- |
| `image.server.repository` | API / Worker image | `karrio.docker.scarf.sh/karrio/server` |
| `image.server.tag` | Image tag (defaults to chart `appVersion`) | `""` |
| `image.dashboard.repository` | Dashboard image | `karrio.docker.scarf.sh/karrio/dashboard` |
| `image.dashboard.tag` | Image tag | `""` |
| `imagePullSecrets` | Registry pull secrets | `[]` |

API

| Parameter | Description | Default |
| --- | --- | --- |
| `api.replicaCount` | Number of API pods | `2` |
| `api.resources.requests.cpu` | CPU request | `250m` |
| `api.resources.requests.memory` | Memory request | `512Mi` |
| `api.resources.limits.cpu` | CPU limit | `1000m` |
| `api.resources.limits.memory` | Memory limit | `1Gi` |
| `api.autoscaling.enabled` | Enable HPA | `false` |
| `api.autoscaling.minReplicas` | HPA min replicas | `2` |
| `api.autoscaling.maxReplicas` | HPA max replicas | `10` |
| `api.autoscaling.targetCPUUtilizationPercentage` | HPA CPU target | `70` |
| `api.podDisruptionBudget.enabled` | Enable PDB | `true` |
| `api.podDisruptionBudget.minAvailable` | PDB min available | `1` |

Worker

| Parameter | Description | Default |
| --- | --- | --- |
| `worker.replicaCount` | Number of Worker pods | `1` |
| `worker.resources.requests.cpu` | CPU request | `250m` |
| `worker.resources.requests.memory` | Memory request | `512Mi` |
| `worker.resources.limits.cpu` | CPU limit | `1000m` |
| `worker.resources.limits.memory` | Memory limit | `1Gi` |
| `worker.podDisruptionBudget.enabled` | Enable PDB | `true` |

Dashboard

| Parameter | Description | Default |
| --- | --- | --- |
| `dashboard.replicaCount` | Number of Dashboard pods | `1` |
| `dashboard.resources.requests.cpu` | CPU request | `100m` |
| `dashboard.resources.requests.memory` | Memory request | `256Mi` |
| `dashboard.resources.limits.cpu` | CPU limit | `500m` |
| `dashboard.resources.limits.memory` | Memory limit | `512Mi` |

Ingress

| Parameter | Description | Default |
| --- | --- | --- |
| `ingress.enabled` | Enable Ingress resource | `true` |
| `ingress.className` | Ingress class name | `nginx` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.api.host` | API hostname | `api.example.com` |
| `ingress.dashboard.host` | Dashboard hostname | `app.example.com` |
| `ingress.tls` | TLS configuration | `[]` |

Application Config

| Parameter | Description | Default |
| --- | --- | --- |
| `config.secretKey` | Django `SECRET_KEY` (required) | `""` |
| `config.jwtSecret` | Dashboard `NEXTAUTH_SECRET` (required) | `""` |
| `config.debugMode` | Django debug mode | `"False"` |
| `config.useHttps` | Enforce HTTPS | `"True"` |
| `config.detachedWorker` | Run worker separately | `"True"` |
| `config.enableAllPlugins` | Enable all carrier plugins | `"True"` |

External Dependencies

| Parameter | Description | Default |
| --- | --- | --- |
| `database.host` | PostgreSQL host (required) | `""` |
| `database.port` | PostgreSQL port | `5432` |
| `database.name` | Database name | `karrio` |
| `database.username` | Database user | `karrio` |
| `database.password` | Database password (required) | `""` |
| `redis.host` | Redis host (required) | `""` |
| `redis.port` | Redis port | `6379` |

Secrets and URLs

| Parameter | Description | Default |
| --- | --- | --- |
| `existingSecret` | Use an existing Secret (must contain `SECRET_KEY`, `JWT_SECRET`, `DATABASE_PASSWORD`) | `""` |
| `dashboardUrl` | Public URL of the dashboard | `""` |
| `karrioPublicUrl` | Public URL of the API | `""` |

Production Hardening

TLS with cert-manager

If you have cert-manager installed, add a ClusterIssuer annotation and TLS block:

```yaml
ingress:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: karrio-tls
      hosts:
        - api.yourdomain.com
        - app.yourdomain.com
```
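
If no issuer exists in your cluster yet, a minimal HTTP-01 `ClusterIssuer` looks roughly like this (a sketch; the issuer name, email, and ingress class are placeholders to adapt):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint; use the staging URL while testing
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdomain.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```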

External secrets

For production, store secrets in an external secrets manager (e.g. AWS Secrets Manager, HashiCorp Vault) and reference an existing Kubernetes Secret:

```yaml
existingSecret: karrio-secrets
```

The Secret must contain these keys: SECRET_KEY, JWT_SECRET, DATABASE_PASSWORD.
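
If you create the Secret by hand rather than via an operator, a sketch looks like this (`karrio-secrets` matches the `existingSecret` value above; the database password is a placeholder):

```bash
# 64-hex-char random values for the Django and NextAuth secrets
SECRET_KEY="$(openssl rand -hex 32)"
JWT_SECRET="$(openssl rand -hex 32)"

# Create the Secret the chart expects; the key names must match exactly
kubectl create secret generic karrio-secrets \
  --namespace karrio \
  --from-literal=SECRET_KEY="$SECRET_KEY" \
  --from-literal=JWT_SECRET="$JWT_SECRET" \
  --from-literal=DATABASE_PASSWORD='your-db-password' \
  || echo "secret not created (is kubectl configured?)"
```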

Resource tuning

The defaults are conservative starting points. Monitor actual usage and adjust:

```yaml
api:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 2Gi
```

Autoscaling

Enable the Horizontal Pod Autoscaler for the API under load:

```yaml
api:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
```

Pod Disruption Budgets

PDBs are enabled by default for both the API and Worker to ensure availability during node drains and cluster upgrades. Adjust minAvailable based on your replica count:

```yaml
api:
  podDisruptionBudget:
    enabled: true
    minAvailable: 1
```

Node affinity and topology

For multi-zone clusters, the API deployment automatically applies topologySpreadConstraints when api.replicaCount > 1 to spread pods across availability zones.
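
The applied constraint is roughly equivalent to the following (a sketch; the exact label selector comes from the chart's templates, so treat it as illustrative):

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/component: api
```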

You can also set explicit node selectors or tolerations:

```yaml
api:
  nodeSelector:
    node.kubernetes.io/instance-type: m5.large
  tolerations:
    - key: dedicated
      operator: Equal
      value: karrio
      effect: NoSchedule
```

Upgrading

Upgrade the chart

```bash
helm upgrade karrio ./charts/karrio \
  --namespace karrio \
  -f my-values.yaml
```

Upgrade to a new Karrio version

Set the image tag explicitly:

```bash
helm upgrade karrio ./charts/karrio \
  --namespace karrio \
  -f my-values.yaml \
  --set image.server.tag=2026.1.27 \
  --set image.dashboard.tag=2026.1.27
```

Rolling back

```bash
helm rollback karrio -n karrio
```
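
Without arguments, `helm rollback` returns to the previous revision. To target a specific revision, inspect the history first; the `prev_rev` helper below is an illustrative awk one-liner that pulls the second-to-last revision number from the table output:

```bash
# prev_rev prints the second-to-last revision number from `helm history` output
prev_rev() { awk 'NR > 1 { rev[NR] = $1 } END { print rev[NR-1] }'; }

if command -v helm >/dev/null; then
  helm history karrio -n karrio || true   # review revisions first
  rev="$(helm history karrio -n karrio 2>/dev/null | prev_rev)"
  if [ -n "$rev" ]; then
    helm rollback karrio "$rev" -n karrio
  fi
else
  echo "helm not found on PATH"
fi
```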

Troubleshooting

Pods stuck in CrashLoopBackOff

Check the logs:

```bash
kubectl logs -n karrio deploy/karrio-api --tail=100
kubectl logs -n karrio deploy/karrio-worker --tail=100
```

Common causes:

  • Database not reachable β€” verify database.host and network policies
  • Invalid SECRET_KEY or DATABASE_PASSWORD β€” check the Secret contents
  • Missing database β€” run CREATE DATABASE karrio; on your PostgreSQL instance

API pods not becoming Ready

The Django API takes 1–2 minutes to start. If readiness probes keep failing:

```bash
kubectl describe pod -n karrio -l app.kubernetes.io/component=api
```

Increase the probe timings if your cluster is slow:

```yaml
api:
  readinessProbe:
    initialDelaySeconds: 120
    periodSeconds: 20
    failureThreshold: 10
  livenessProbe:
    initialDelaySeconds: 120
    failureThreshold: 10
```

OOMKilled pods

If pods are terminated with OOMKilled, increase memory limits:

```yaml
api:
  resources:
    limits:
      memory: 2Gi
worker:
  resources:
    limits:
      memory: 2Gi
```

Worker liveness probe failing

The worker liveness probe uses celery inspect ping. If it fails:

```bash
kubectl exec -n karrio deploy/karrio-worker -- \
  celery -A karrio.server.asgi inspect ping
```

If Redis is unreachable, the probe will fail. Verify connectivity:

```bash
kubectl exec -n karrio deploy/karrio-worker -- \
  bash -c "redis-cli -h \$REDIS_HOST -p \$REDIS_PORT ping"
```

Dashboard showing β€œInternal Server Error”

Ensure dashboardUrl and karrioPublicUrl are set correctly and the API is reachable from within the cluster:

```bash
kubectl exec -n karrio deploy/karrio-dashboard -- \
  wget -qO- http://karrio-api:5002/v1/references
```

Checking all resources

```bash
kubectl get all -n karrio
helm status karrio -n karrio
```

Uninstalling

```bash
helm uninstall karrio -n karrio
kubectl delete namespace karrio
```

This removes all chart-managed resources. External databases and Redis instances are not affected.