Developer Experience: Coding in K8s

Key Takeaways for AI & Readers
  • Inner Loop Efficiency: The standard Kubernetes development cycle (code, build image, push to registry, deploy, wait, test) takes 3-10 minutes per iteration. Specialized tools reduce this to seconds.
  • Networking Bridging: Tools like Telepresence create a bidirectional network tunnel between your local machine and the cluster, allowing local code to talk to in-cluster services (databases, caches, APIs) without deploying.
  • Automated Sync: Skaffold and Tilt automate the build-push-deploy cycle, detecting file changes and triggering updates in seconds. Tilt adds a browser dashboard for multi-service visibility.
  • Virtual Clusters: vcluster creates lightweight virtual Kubernetes clusters inside a host cluster, giving each developer an isolated environment with full cluster admin access.
  • Remote Development: Okteto and devcontainers allow developers to write code locally but execute it in a Kubernetes-connected environment, combining the comfort of local IDEs with cluster connectivity.
  • No One-Size-Fits-All: The best tool depends on your team size, cluster architecture, and development workflow. Most teams benefit from combining 2-3 tools from different categories.

The biggest complaint about Kubernetes is that it is slow to develop for. The traditional development cycle looks like this:

Code → Build Image → Push to Registry → Deploy to Cluster → Wait → Test

Each iteration through this loop takes 3 to 10 minutes. Compare that to local development where saving a file triggers a hot reload in under a second. This gap -- the difference between your inner loop (write code, see results) and outer loop (CI/CD, staging, production) -- is where developer productivity dies.

The tools in this guide exist to make the Kubernetes inner loop feel like local development.

The Inner Loop Problem in Detail

To understand why Kubernetes development is slow, consider what happens when you change a single line of code:

  1. Build: Docker builds your image (30s-3min depending on caching).
  2. Push: Upload the image to a registry (10s-60s depending on image size and bandwidth).
  3. Deploy: kubectl apply or Helm upgrade (5s-30s for the rollout).
  4. Wait: Kubernetes pulls the new image, starts the container, passes health checks (10s-60s).
  5. Test: Manually verify the change works.

Total: 1-5 minutes per change. Multiply by 50 changes per day and you lose hours of productive time. Worse, this latency breaks flow state -- by the time your change is deployed, you have forgotten what you were testing.
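The cost compounds quickly. A back-of-the-envelope calculation (the 3-minute iteration and 50 changes per day are illustrative assumptions, not measurements):

```shell
# Hypothetical inner-loop cost: 3 minutes per iteration, 50 iterations per day
SECONDS_PER_ITERATION=180
ITERATIONS_PER_DAY=50
LOST_MINUTES=$(( SECONDS_PER_ITERATION * ITERATIONS_PER_DAY / 60 ))
echo "Time lost per day: ${LOST_MINUTES} minutes"
# → Time lost per day: 150 minutes
```

Cutting the iteration from minutes to seconds recovers most of that time, which is the entire premise of the tools below.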

1. Local-to-Cluster Bridging: Telepresence

Instead of moving your code to the cluster, move the cluster's network to your laptop.

[Diagram: a production pod in the K8s cluster forwards its traffic to a local Node.js process listening on port 8080.]
Telepresence swaps a pod in the cluster with a proxy that routes traffic directly to your local machine. You can debug production traffic on your laptop!

Telepresence creates a bidirectional network tunnel between your local machine and the Kubernetes cluster. When running, your local process can:

  • Resolve cluster DNS names (redis.default.svc.cluster.local).
  • Connect to in-cluster services using their service names.
  • Receive traffic that would normally go to a pod in the cluster.

How Telepresence Works

  1. Connect: telepresence connect establishes a VPN-like tunnel to the cluster. Your local machine gains access to the cluster's DNS and network.
  2. Intercept: telepresence intercept <deployment> redirects traffic destined for a specific pod to your local machine.
  3. Develop: Run your application locally with your favorite IDE and debugger. It can talk to in-cluster databases, caches, and APIs.

Basic Telepresence Workflow

# Install Telepresence and connect to the cluster
telepresence connect

# Verify connectivity -- cluster DNS names should now resolve
nslookup redis.default.svc.cluster.local

# Intercept a specific service
# All traffic to "web-frontend" now routes to localhost:8080
telepresence intercept web-frontend --port 8080

# Run your application locally
npm run dev
# Your local server at :8080 now receives cluster traffic
# and can talk to cluster services by DNS name

# When done, leave the intercept
telepresence leave web-frontend
telepresence quit

Personal Intercepts

In a shared development cluster, you do not want to intercept traffic from your teammates. Personal intercepts use HTTP headers to route only your requests to your local machine:

# Create a personal intercept with a header filter
telepresence intercept web-frontend \
--port 8080 \
--http-header x-telepresence-id=alice

# Only requests with the header "x-telepresence-id: alice"
# are routed to your machine. All other traffic goes to the
# in-cluster pod as normal.

Volume Mounts

Telepresence can mount volumes from the intercepted pod to your local filesystem. This gives you access to ConfigMaps, Secrets, and other mounted files:

telepresence intercept web-frontend \
--port 8080 \
--mount /tmp/telepresence

# ConfigMaps and Secrets from the pod are now at
# /tmp/telepresence/var/run/secrets/...

2. Automated Build-Deploy: Skaffold

If you need your code to run inside the cluster (not locally), Skaffold automates the build-push-deploy cycle. It watches your local files and triggers an update whenever you save.

Skaffold Workflow

File saved → Skaffold detects change → Build image → Push → Deploy → Ready

With optimizations like Jib (for Java) or file sync (for interpreted languages), this cycle can be as fast as 2-5 seconds.

skaffold.yaml Example

apiVersion: skaffold/v4beta6
kind: Config
metadata:
  name: my-app
build:
  artifacts:
    - image: my-registry/my-app
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.js'  # Sync JS files directly (no rebuild)
            dest: /app/src
  local:
    push: true                  # Push to remote registry
manifests:
  rawYaml:
    - k8s/*.yaml
deploy:
  kubectl: {}

Key Skaffold Features

  • File sync: For interpreted languages (Python, Node.js), Skaffold can copy changed files directly into the running container without rebuilding the image. This reduces the cycle to 1-2 seconds.
  • Build optimization: Supports Jib (Java), Buildpacks, Bazel, and custom build scripts in addition to Docker.
  • Profiles: Define different configurations for dev, staging, and production.
  • Port forwarding: Automatically forwards service ports to your local machine.

# Start development mode (watches for changes)
skaffold dev

# One-time deploy
skaffold run

# Deploy with a specific profile
skaffold dev -p staging
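
The -p staging flag above selects a profile defined in skaffold.yaml. A minimal sketch of such a profile (the staging registry name is an assumption; a profile overrides only the sections it declares):

```yaml
# skaffold.yaml (fragment) -- hypothetical staging profile that
# swaps the image destination; everything else is inherited
profiles:
  - name: staging
    build:
      artifacts:
        - image: staging-registry.example.com/my-app
          docker:
            dockerfile: Dockerfile
```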

3. Multi-Service Dashboard: Tilt

Tilt is similar to Skaffold in functionality but adds a real-time browser dashboard that shows the status of all your services, their logs, and build progress. It is particularly valuable for microservices architectures where you need to see what is happening across 5-20 services at once.

Tiltfile Example

Tilt uses a Starlark-based configuration language (similar to Python):

# Tiltfile
# Build the Docker images. web-frontend uses live_update to sync files
# into the running container without rebuilding the image.
docker_build(
    'my-registry/web-frontend',
    './frontend',
    live_update=[
        sync('./frontend/src', '/app/src'),                     # Sync source files
        run('npm install', trigger='./frontend/package.json'),  # Reinstall on dep change
    ]
)
docker_build('my-registry/api-server', './api')

# Deploy Kubernetes manifests
k8s_yaml(['k8s/frontend.yaml', 'k8s/api.yaml'])

# Define resource groupings (shown in the dashboard)
k8s_resource('web-frontend', port_forwards='3000:3000')
k8s_resource('api-server', port_forwards='8080:8080')

Tilt vs. Skaffold

Feature                  | Skaffold               | Tilt
-------------------------|------------------------|------------------------------
Configuration            | YAML                   | Starlark (Python-like)
Dashboard                | CLI only               | Browser-based UI
Multi-service visibility | Limited                | Excellent
Live update              | File sync              | Live update with run commands
CI/CD integration        | Strong (skaffold run)  | Possible but less common
Learning curve           | Lower                  | Moderate

4. Development Containers (devcontainers)

Development containers standardize the development environment so every developer has the same tools, dependencies, and configuration regardless of their local OS. They are particularly valuable for Kubernetes development because they can include kubectl, helm, cloud CLI tools, and kubeconfig pre-configured.

.devcontainer/devcontainer.json Example

{
  "name": "K8s Development",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "forwardPorts": [8080, 3000],
  "postCreateCommand": "kubectl config use-context dev-cluster",
  "mounts": [
    "source=${localEnv:HOME}/.kube,target=/home/vscode/.kube,type=bind"
  ],
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-kubernetes-tools.vscode-kubernetes-tools",
        "redhat.vscode-yaml"
      ]
    }
  }
}

Devcontainers work with VS Code (locally and via Remote SSH), GitHub Codespaces, JetBrains IDEs, and other editors that support the specification.

5. Virtual Clusters: vcluster

vcluster creates a fully functional Kubernetes cluster inside a namespace of an existing (host) cluster. Each developer gets their own cluster with full admin access, without consuming the resources of a real cluster.

How vcluster Works

  • The virtual cluster runs its own API server, controller manager, and etcd (or SQLite) inside a single pod on the host cluster.
  • Workloads created in the virtual cluster are synced to the host cluster for actual scheduling.
  • From the developer's perspective, they have a full cluster. From the platform team's perspective, it is just another namespace.

# Create a virtual cluster
vcluster create my-dev-env --namespace dev-team

# Connect to it (updates your kubeconfig)
vcluster connect my-dev-env --namespace dev-team

# You now have a full cluster -- install CRDs, create namespaces, etc.
kubectl create namespace test
kubectl apply -f my-app.yaml

# Disconnect when done
vcluster disconnect

# Delete the virtual cluster
vcluster delete my-dev-env --namespace dev-team

vcluster Advantages

  • Seconds to create: A new cluster is ready in 10-30 seconds (vs. 5-10 minutes for a real cluster).
  • Full isolation: Each developer can install their own CRDs, admission webhooks, and RBAC rules without affecting others.
  • Minimal resources: The virtual cluster's control plane uses about 128Mi of memory.
  • Cost-effective: One host cluster supports dozens of virtual clusters.

6. Remote Development: Okteto

Okteto takes a different approach: instead of bridging the cluster network to your laptop, it replaces your pod with a development container and syncs your source files into it in real time. You code locally, but your application runs in the cluster with full access to cluster resources.

# okteto.yaml
dev:
  web-frontend:
    image: node:20
    command: npm run dev
    sync:
      - .:/app          # Sync current directory to /app in the pod
    forward:
      - 3000:3000       # Forward port 3000
    environment:
      - NODE_ENV=development

# Activate development mode
okteto up

# Your pod is replaced with a development container.
# Local file changes are synced in real-time.
# Port 3000 is forwarded to your local machine.

# When done
okteto down

Port Forwarding Tips

Regardless of which tool you use, kubectl port-forward is essential for quick access to cluster services during development:

# Forward a single service
kubectl port-forward svc/my-api 8080:80

# Forward a specific pod
kubectl port-forward pod/my-pod-abc123 5432:5432

# Forward from all interfaces (accessible from other machines on your network)
kubectl port-forward --address 0.0.0.0 svc/my-api 8080:80

# Forward multiple ports
kubectl port-forward svc/my-app 8080:80 8443:443

Important: kubectl port-forward is a debugging tool, not a production proxy. It forwards to a single pod, drops when that pod restarts, and does not handle load balancing, retries, or TLS termination.

Tool Comparison

Tool          | Approach                      | Best For                                           | Latency                    | Team Size
--------------|-------------------------------|----------------------------------------------------|----------------------------|-------------
Telepresence  | Network bridging              | Individual debugging, microservices                | Instant (local execution)  | Small-medium
Skaffold      | Auto build-deploy             | CI/CD-integrated dev, Kubernetes-native workflows  | 2-30s per change           | Any
Tilt          | Auto build-deploy + dashboard | Microservices with many components                 | 2-30s per change           | Medium-large
Devcontainers | Standardized dev env          | Onboarding, consistent tooling                     | N/A (environment tool)     | Any
vcluster      | Virtual clusters              | Platform teams, CRD development, isolation         | 10-30s to create           | Medium-large
Okteto        | Pod replacement + file sync   | Full cluster-context development                   | 1-3s per change            | Small-medium

Common Pitfalls

  1. Running everything locally: Trying to run all microservices on your laptop does not scale. Use Telepresence to connect to real cluster services and run only the service you are developing locally.

  2. Skipping resource limits in dev: Development pods without resource limits can consume cluster resources and affect other developers. Always set requests and limits, even in dev.

  3. Sharing mutable cluster resources: Without isolation (namespaces, vclusters), developers overwrite each other's ConfigMaps, Secrets, and deployments. Give each developer their own namespace or virtual cluster.

  4. Ignoring image pull time: Large Docker images (1GB+) take minutes to pull on every deployment. Use multi-stage builds and slim base images to reduce image size, and registry mirrors or node-local caches to speed up pulls.

  5. Not leveraging file sync: If your language supports hot reload (Node.js, Python, Go with Air), use Skaffold's file sync or Tilt's live update instead of rebuilding the entire image.

  6. Forgetting to disconnect Telepresence: Leaving a Telepresence intercept running can confuse other developers by routing their traffic to your disconnected laptop. Always run telepresence quit when done.
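
Pitfall 2 above is easiest to prevent in the manifest itself. A sketch of a dev Deployment's container spec with modest requests and limits (the container name, image, and numbers are illustrative):

```yaml
# Fragment of a dev Deployment pod template -- illustrative values
containers:
  - name: web-frontend
    image: my-registry/my-app
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
```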

Best Practices

  1. Layer your tools: Use devcontainers for environment standardization, vcluster for isolation, and Skaffold or Tilt for the development loop. These tools complement each other.

  2. Minimize image build time: Use Docker layer caching, multi-stage builds, and language-specific optimizers (Jib for Java, ko for Go) to keep build times under 10 seconds.

  3. Create a shared development cluster: Instead of every developer running Minikube or Kind locally, maintain a shared development cluster with vcluster for isolation. This mirrors production more accurately and reduces "works on my machine" issues.

  4. Automate environment setup: New developers should be able to run a single command (make dev or tilt up) to get their full development environment running.

  5. Use port forwarding for quick debugging: Before setting up a full development workflow, kubectl port-forward gets you 80% of the way there for quick fixes and debugging.

  6. Document your team's workflow: Different teams settle on different tool combinations. Document the recommended setup (including install instructions, configuration files, and common commands) so new members can be productive within hours, not days.
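
Practice 2 above usually starts with a multi-stage Dockerfile: build with the full toolchain, ship only the runtime artifacts on a slim base. A sketch for a Node.js service (the paths and npm scripts are assumptions about the project layout):

```dockerfile
# Stage 1: install dependencies and build with the full toolchain
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci              # Layer is cached while package*.json is unchanged
COPY . .
RUN npm run build

# Stage 2: copy only runtime artifacts onto a slim base image
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```

Copying package*.json before the rest of the source keeps the npm ci layer cached across source-only changes, which is where most of the build-time savings come from.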

What's Next?

  • Troubleshooting: When your development environment is not working, systematic debugging gets you back on track.
  • Observability: Set up monitoring in your development cluster for faster feedback on performance issues.
  • OpenTelemetry: Add distributed tracing to your development workflow for debugging microservice interactions.
  • Ingress: Expose your development services with proper routing for realistic testing.