# Kubernetes
Deploy Karrio on any Kubernetes cluster using the official Helm chart.
## Prerequisites
- Kubernetes cluster (1.25+)
- Helm 3.10+
- kubectl configured for your cluster
- An Ingress controller (e.g. ingress-nginx)
- cert-manager (recommended for TLS)
- External PostgreSQL 14+ database
- External Redis 6+ instance
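A quick pre-flight check of the tooling (this assumes `kubectl` and `helm` are already installed and on your `PATH`):

```bash
# Confirm client versions meet the minimums listed above
kubectl version --client
helm version --short

# Confirm kubectl is pointed at the intended cluster
kubectl config current-context
kubectl get nodes
```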
## Architecture Overview
The Helm chart deploys three workloads into your cluster:
```
              ┌─────────────┐
              │   Ingress   │
              └──────┬──────┘
         ┌───────────┴───────────┐
         ▼                       ▼
┌──────────────────┐   ┌───────────────────┐
│   API (Django)   │   │     Dashboard     │
│    Port 5002     │   │     (Next.js)     │
│    2 replicas    │   │     Port 3002     │
└────────┬─────────┘   └───────────────────┘
         │
┌────────┴─────────┐
│      Worker      │
│  (async tasks)   │
│    1 replica     │
└────────┬─────────┘
         │
    ┌────┴──────────┐
    ▼               ▼
┌──────────┐   ┌─────────┐
│ Postgres │   │  Redis  │
│  (ext.)  │   │  (ext.) │
└──────────┘   └─────────┘
```
**What the chart manages:** API deployment + service, Worker deployment, Dashboard deployment + service, Ingress, ConfigMap, Secret, ServiceAccount, HPA (optional), PodDisruptionBudgets.

**What stays external:** PostgreSQL and Redis. Use a managed service (e.g. RDS, Cloud SQL, ElastiCache) or deploy your own; the chart does not include stateful dependencies.
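Before installing, it can be worth confirming that pods in the cluster can actually reach both external dependencies. A sketch using throwaway pods; the image tags, hostnames, and credentials below are placeholders for your own:

```bash
# PostgreSQL reachability check (prompts for the password)
kubectl run pg-check --rm -it --image=postgres:16 --restart=Never -- \
  psql "host=your-postgres-host port=5432 user=karrio dbname=karrio" -c 'SELECT 1;'

# Redis reachability check (expects PONG)
kubectl run redis-check --rm -it --image=redis:7 --restart=Never -- \
  redis-cli -h your-redis-host -p 6379 ping
```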
## Quick Start

### 1. Add the chart (local)

Clone the repository and install from the local chart:

```bash
git clone https://github.com/karrioapi/karrio.git
cd karrio
```
### 2. Create a namespace

```bash
kubectl create namespace karrio
```
### 3. Create your values file

```bash
cat > my-values.yaml <<EOF
config:
  secretKey: "$(openssl rand -hex 32)"
  jwtSecret: "$(openssl rand -hex 32)"

database:
  host: your-postgres-host
  port: 5432
  name: karrio
  username: karrio
  password: your-db-password

redis:
  host: your-redis-host
  port: 6379

ingress:
  enabled: true
  className: nginx
  api:
    host: api.yourdomain.com
  dashboard:
    host: app.yourdomain.com
  tls:
    - secretName: karrio-tls
      hosts:
        - api.yourdomain.com
        - app.yourdomain.com

dashboardUrl: "https://app.yourdomain.com"
karrioPublicUrl: "https://api.yourdomain.com"
EOF
```
### 4. Install

```bash
helm install karrio ./charts/karrio \
  --namespace karrio \
  -f my-values.yaml
```
### 5. Verify

```bash
kubectl get pods -n karrio
```
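Instead of polling, you can block until the deployments report available (the deployment names below follow the `karrio-*` convention used throughout this guide):

```bash
kubectl wait -n karrio --for=condition=available --timeout=300s \
  deployment/karrio-api deployment/karrio-dashboard deployment/karrio-worker
```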
Wait for all pods to reach Running / Ready state (the API takes 1-2 minutes to start). Then access:

- API: https://api.yourdomain.com
- Dashboard: https://app.yourdomain.com

Default login: `admin@example.com` / `demo`
## Configuration Reference

### Images

| Parameter | Description | Default |
|---|---|---|
| `image.server.repository` | API / Worker image | `karrio.docker.scarf.sh/karrio/server` |
| `image.server.tag` | Image tag (defaults to chart `appVersion`) | `""` |
| `image.dashboard.repository` | Dashboard image | `karrio.docker.scarf.sh/karrio/dashboard` |
| `image.dashboard.tag` | Image tag | `""` |
| `imagePullSecrets` | Registry pull secrets | `[]` |
### API

| Parameter | Description | Default |
|---|---|---|
| `api.replicaCount` | Number of API pods | `2` |
| `api.resources.requests.cpu` | CPU request | `250m` |
| `api.resources.requests.memory` | Memory request | `512Mi` |
| `api.resources.limits.cpu` | CPU limit | `1000m` |
| `api.resources.limits.memory` | Memory limit | `1Gi` |
| `api.autoscaling.enabled` | Enable HPA | `false` |
| `api.autoscaling.minReplicas` | HPA min replicas | `2` |
| `api.autoscaling.maxReplicas` | HPA max replicas | `10` |
| `api.autoscaling.targetCPUUtilizationPercentage` | HPA CPU target | `70` |
| `api.podDisruptionBudget.enabled` | Enable PDB | `true` |
| `api.podDisruptionBudget.minAvailable` | PDB min available | `1` |
### Worker

| Parameter | Description | Default |
|---|---|---|
| `worker.replicaCount` | Number of Worker pods | `1` |
| `worker.resources.requests.cpu` | CPU request | `250m` |
| `worker.resources.requests.memory` | Memory request | `512Mi` |
| `worker.resources.limits.cpu` | CPU limit | `1000m` |
| `worker.resources.limits.memory` | Memory limit | `1Gi` |
| `worker.podDisruptionBudget.enabled` | Enable PDB | `true` |
### Dashboard

| Parameter | Description | Default |
|---|---|---|
| `dashboard.replicaCount` | Number of Dashboard pods | `1` |
| `dashboard.resources.requests.cpu` | CPU request | `100m` |
| `dashboard.resources.requests.memory` | Memory request | `256Mi` |
| `dashboard.resources.limits.cpu` | CPU limit | `500m` |
| `dashboard.resources.limits.memory` | Memory limit | `512Mi` |
### Ingress

| Parameter | Description | Default |
|---|---|---|
| `ingress.enabled` | Enable Ingress resource | `true` |
| `ingress.className` | Ingress class name | `nginx` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.api.host` | API hostname | `api.example.com` |
| `ingress.dashboard.host` | Dashboard hostname | `app.example.com` |
| `ingress.tls` | TLS configuration | `[]` |
### Application Config

| Parameter | Description | Default |
|---|---|---|
| `config.secretKey` | Django `SECRET_KEY` (required) | `""` |
| `config.jwtSecret` | Dashboard `NEXTAUTH_SECRET` (required) | `""` |
| `config.debugMode` | Django debug mode | `"False"` |
| `config.useHttps` | Enforce HTTPS | `"True"` |
| `config.detachedWorker` | Run worker separately | `"True"` |
| `config.enableAllPlugins` | Enable all carrier plugins | `"True"` |
### External Dependencies

| Parameter | Description | Default |
|---|---|---|
| `database.host` | PostgreSQL host (required) | `""` |
| `database.port` | PostgreSQL port | `5432` |
| `database.name` | Database name | `karrio` |
| `database.username` | Database user | `karrio` |
| `database.password` | Database password (required) | `""` |
| `redis.host` | Redis host (required) | `""` |
| `redis.port` | Redis port | `6379` |
### Secrets

| Parameter | Description | Default |
|---|---|---|
| `existingSecret` | Use an existing Secret (must contain `SECRET_KEY`, `JWT_SECRET`, `DATABASE_PASSWORD`) | `""` |
| `dashboardUrl` | Public URL of the dashboard | `""` |
| `karrioPublicUrl` | Public URL of the API | `""` |
## Production Hardening

### TLS with cert-manager

If you have cert-manager installed, add a ClusterIssuer annotation and TLS block:

```yaml
ingress:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: karrio-tls
      hosts:
        - api.yourdomain.com
        - app.yourdomain.com
```
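The `letsencrypt-prod` issuer referenced above must already exist in the cluster. A minimal `ClusterIssuer` sketch using the HTTP-01 challenge; the email address is a placeholder:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdomain.com  # placeholder: your contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```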
### External secrets

For production, store secrets in an external secrets manager (e.g. AWS Secrets Manager, HashiCorp Vault) and reference an existing Kubernetes Secret:

```yaml
existingSecret: karrio-secrets
```

The Secret must contain these keys: `SECRET_KEY`, `JWT_SECRET`, `DATABASE_PASSWORD`.
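In production that Secret is typically synced from your secrets manager (e.g. by External Secrets Operator), but for testing you can create it by hand; the database password below is a placeholder:

```bash
kubectl create secret generic karrio-secrets -n karrio \
  --from-literal=SECRET_KEY="$(openssl rand -hex 32)" \
  --from-literal=JWT_SECRET="$(openssl rand -hex 32)" \
  --from-literal=DATABASE_PASSWORD='your-db-password'
```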
### Resource tuning

The defaults are conservative starting points. Monitor actual usage and adjust:

```yaml
api:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 2Gi
```
### Autoscaling

Enable the Horizontal Pod Autoscaler for the API under load:

```yaml
api:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
```
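The HPA relies on the Kubernetes metrics API, so metrics-server (or an equivalent metrics adapter) must be installed in the cluster. To confirm metrics are flowing and inspect the HPA's current state:

```bash
# Should show the metrics API as Available
kubectl get apiservice v1beta1.metrics.k8s.io

# Current vs. target utilization and replica counts
kubectl get hpa -n karrio
kubectl top pods -n karrio
```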
### Pod Disruption Budgets

PDBs are enabled by default for both the API and Worker to ensure availability during node drains and cluster upgrades. Adjust `minAvailable` based on your replica count:

```yaml
api:
  podDisruptionBudget:
    enabled: true
    minAvailable: 1
```
### Node affinity and topology

For multi-zone clusters, the API deployment automatically applies `topologySpreadConstraints` when `api.replicaCount` > 1 to spread pods across availability zones.

You can also set explicit node selectors or tolerations:

```yaml
api:
  nodeSelector:
    node.kubernetes.io/instance-type: m5.large
  tolerations:
    - key: dedicated
      operator: Equal
      value: karrio
      effect: NoSchedule
```
## Upgrading

### Upgrade the chart

```bash
helm upgrade karrio ./charts/karrio \
  --namespace karrio \
  -f my-values.yaml
```
### Upgrade to a new Karrio version

Set the image tag explicitly:

```bash
helm upgrade karrio ./charts/karrio \
  --namespace karrio \
  -f my-values.yaml \
  --set image.server.tag=2026.1.27 \
  --set image.dashboard.tag=2026.1.27
```
### Rolling back

```bash
helm rollback karrio -n karrio
```
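Without a revision argument, `helm rollback` reverts to the previous release. To inspect the release history and roll back to a specific revision:

```bash
helm history karrio -n karrio
helm rollback karrio 2 -n karrio  # "2" is an example revision number
```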
## Troubleshooting

### Pods stuck in CrashLoopBackOff

Check the logs:

```bash
kubectl logs -n karrio deploy/karrio-api --tail=100
kubectl logs -n karrio deploy/karrio-worker --tail=100
```
Common causes:

- Database not reachable → verify `database.host` and network policies
- Invalid `SECRET_KEY` or `DATABASE_PASSWORD` → check the Secret contents
- Missing database → run `CREATE DATABASE karrio;` on your PostgreSQL instance
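To rule out the second cause, you can list the keys present in the rendered Secret without printing their values (the Secret name `karrio` here is an assumption based on the release name; adjust if you use `existingSecret`):

```bash
kubectl get secret karrio -n karrio \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
```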
### API pods not becoming Ready

The Django API takes 1-2 minutes to start. If readiness probes keep failing:

```bash
kubectl describe pod -n karrio -l app.kubernetes.io/component=api
```
Increase the probe timings if your cluster is slow:

```yaml
api:
  readinessProbe:
    initialDelaySeconds: 120
    periodSeconds: 20
    failureThreshold: 10
  livenessProbe:
    initialDelaySeconds: 120
    failureThreshold: 10
```
### OOMKilled pods

If pods are terminated with reason `OOMKilled`, increase the memory limits:

```yaml
api:
  resources:
    limits:
      memory: 2Gi
worker:
  resources:
    limits:
      memory: 2Gi
```
### Worker liveness probe failing

The worker liveness probe uses `celery inspect ping`. If it fails:

```bash
kubectl exec -n karrio deploy/karrio-worker -- \
  celery -A karrio.server.asgi inspect ping
```

If Redis is unreachable, the probe will fail. Verify connectivity:

```bash
kubectl exec -n karrio deploy/karrio-worker -- \
  bash -c "redis-cli -h \$REDIS_HOST -p \$REDIS_PORT ping"
```
### Dashboard showing "Internal Server Error"

Ensure `dashboardUrl` and `karrioPublicUrl` are set correctly and the API is reachable from within the cluster:

```bash
kubectl exec -n karrio deploy/karrio-dashboard -- \
  wget -qO- http://karrio-api:5002/v1/references
```
### Checking all resources

```bash
kubectl get all -n karrio
helm status karrio -n karrio
```
## Uninstalling

```bash
helm uninstall karrio -n karrio
kubectl delete namespace karrio
```

This removes all chart-managed resources. External databases and Redis instances are not affected.
