
Services: Networking & Discovery

Key Takeaways for AI & Readers
  • Purpose: Provides a stable virtual IP (ClusterIP) and DNS name for a dynamic set of Pods, decoupling consumers from the lifecycle of individual Pod instances.
  • Service Types: ClusterIP (internal-only), NodePort (external via static port), LoadBalancer (cloud-provisioned LB), ExternalName (CNAME alias to external DNS), Headless (no ClusterIP -- returns Pod IPs directly).
  • Matching: Uses label selectors to build a dynamic set of backend endpoints. Traffic is distributed across healthy, ready Pods.
  • Implementation: kube-proxy programs iptables or IPVS rules on every Node; no user-space proxy is involved in the default mode.
  • Discovery: CoreDNS provides <service>.<namespace>.svc.cluster.local records. Headless Services return individual Pod A records instead of the ClusterIP.

1. Why Services Exist

Pods are ephemeral. A Deployment may scale from 3 replicas to 10, or a rolling update may replace every Pod with a new IP address. If your frontend hard-codes 10.244.1.37 to reach the backend, it will break the moment that Pod is rescheduled.

A Service solves this by giving you:

  1. A stable virtual IP (the ClusterIP) that never changes for the lifetime of the Service object.
  2. A DNS name (my-svc.my-ns.svc.cluster.local) that resolves to that IP.
  3. Automatic load balancing across all Pods whose labels match the Service's selector.
  4. Health-aware routing -- only Pods that pass their readiness probe receive traffic.

Think of a Service as a stable front door. Behind it, Pods come and go, but consumers never need to know.


2. Service Types: Choosing Your Exposure

Kubernetes offers four spec.type values plus the "headless" pattern. NodePort builds on ClusterIP, and LoadBalancer builds on NodePort; ExternalName stands apart as a pure DNS alias.

ClusterIP (Default)

Allocates a virtual IP reachable only from inside the cluster. This is the right choice for internal microservice-to-microservice communication.

apiVersion: v1
kind: Service
metadata:
  name: backend-api
  namespace: production
spec:
  type: ClusterIP        # This is the default; you can omit it
  selector:
    app: backend
    tier: api
  ports:
    - name: http
      protocol: TCP
      port: 80           # The port other Pods use
      targetPort: 8080   # The port your container listens on

After applying this manifest, any Pod in the cluster can reach the backend at http://backend-api.production.svc.cluster.local (or simply http://backend-api if the caller is in the same namespace). The ClusterIP (e.g., 10.96.42.12) is assigned automatically from the Service CIDR range and stays constant until you delete the Service.

NodePort

Extends ClusterIP by opening a static port on every Node in the cluster. External clients can hit <NodeIP>:<NodePort> to reach the Service.

apiVersion: v1
kind: Service
metadata:
  name: frontend-web
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
      nodePort: 31080    # Optional: K8s picks from 30000-32767 if omitted

Traffic flow: Client -> NodeIP:31080 -> ClusterIP:80 -> PodIP:3000.

When to use it: Development and on-prem environments where you do not have a cloud load balancer. In production, you typically place a reverse proxy or hardware LB in front of the NodePort.

Limitations: The NodePort range is restricted to 30000-32767 by default (configurable via --service-node-port-range on the API server). Every NodePort Service consumes a port from this finite pool across the entire cluster.

LoadBalancer

Extends NodePort by requesting an external load balancer from the cloud provider (AWS ELB/NLB, GCP TCP/HTTP LB, Azure LB, etc.). The cloud controller provisions the LB and wires it to the NodePorts automatically.

apiVersion: v1
kind: Service
metadata:
  name: public-api
  annotations:
    # Cloud-specific annotations to configure the load balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: api-gateway
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443
  # Optional: restrict which source IPs can reach the LB
  loadBalancerSourceRanges:
    - "203.0.113.0/24"

After the cloud controller reconciles, kubectl get svc public-api will show an EXTERNAL-IP. This is the public endpoint your DNS should point to.

Cost warning: Each LoadBalancer Service provisions a separate cloud load balancer, which incurs cost. For multiple HTTP services, consider using a single Ingress controller (or Gateway API) that fans out to ClusterIP Services internally.

ExternalName

Does not create a ClusterIP or proxy traffic at all. Instead, it creates a CNAME DNS record pointing to an external hostname. This is useful for giving cluster-internal consumers a Kubernetes-native DNS name for an external dependency.

apiVersion: v1
kind: Service
metadata:
  name: legacy-db
  namespace: production
spec:
  type: ExternalName
  externalName: db-primary.legacy-infra.example.com

Now a Pod can connect to legacy-db.production.svc.cluster.local, and CoreDNS will return a CNAME to db-primary.legacy-infra.example.com. No proxying, no ClusterIP, no selector.

Caveats:

  • ExternalName Services do not support ports configuration (there is no proxy layer).
  • Some applications do not follow CNAME chains properly, especially those that validate TLS certificates against the original hostname.
  • If the external endpoint changes to a cluster-internal service later, you can swap the type to ClusterIP with a selector -- no client changes needed.

Headless Services (clusterIP: None)

A Headless Service deliberately sets clusterIP: None. Instead of a single virtual IP, a DNS lookup returns the individual Pod IPs as A/AAAA records. This is critical for stateful workloads where clients need to address specific Pods.

apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: database
spec:
  clusterIP: None   # This makes it headless
  selector:
    app: postgres
  ports:
    - name: pg
      port: 5432
      targetPort: 5432

A DNS query for postgres.database.svc.cluster.local returns multiple A records -- one per ready Pod. This is covered in greater depth in the dedicated section below.


3. Cheat Sheet: Understanding Ports

The most common mistake in Kubernetes networking is mixing up the three types of ports.

Port Type    Location     Purpose
port         The Service  The port other Pods use to talk to this service internally.
targetPort   The Pod      The port your application is actually listening on (e.g. 8080).
nodePort     The Node     A port (30000-32767) opened on every Node for external access.

Analogy: nodePort is the front gate of the park, port is the information desk inside, and targetPort is the specific seat at the cafe.

Pro Tip: Dual-Stack Networking (IPv4/IPv6). If the cluster has dual-stack networking enabled, setting ipFamilyPolicy: PreferDualStack on a Service assigns it both an IPv4 and an IPv6 ClusterIP where supported, so clients on IPv6-only networks can reach it without translation.

Named ports: targetPort can reference a port by name rather than number. This lets you change the container's port without updating the Service:

# In the Deployment's Pod template
ports:
  - name: http
    containerPort: 8080

# In the Service
ports:
  - port: 80
    targetPort: http   # Resolves to 8080 via the Pod spec

This is especially useful during migrations -- you can change the container port from 8080 to 9090 without touching the Service manifest, as long as the port name stays http.
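
The resolution step is mechanical: if targetPort is a number, use it; if it is a name, look it up in the container's declared ports. A minimal Python sketch of that lookup (an illustration -- resolve_target_port is a hypothetical helper, not a real client-go API):

```python
def resolve_target_port(target_port, container_ports):
    """Resolve a Service targetPort that may be an int or a port name,
    using the container's declared ports (name -> containerPort)."""
    if isinstance(target_port, int):
        return target_port
    for p in container_ports:
        if p.get("name") == target_port:
            return p["containerPort"]
    raise ValueError(f"no container port named {target_port!r}")

pod_ports = [{"name": "http", "containerPort": 8080}]
assert resolve_target_port("http", pod_ports) == 8080  # named lookup
assert resolve_target_port(9090, pod_ports) == 9090    # numeric passthrough
```

Because the lookup happens per Pod, two Pods behind the same Service could even expose the named port on different numbers during a migration.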


4. Under the Hood: How kube-proxy Works

When you create a Service, no actual process "listens" on the ClusterIP. The ClusterIP is a virtual IP that exists only as an entry in the kernel's packet-processing tables. The component responsible for programming these rules is kube-proxy.

kube-proxy Modes

Every Node runs kube-proxy as a DaemonSet (or static pod). It watches the API Server for Service and EndpointSlice changes, then programs the local node's networking rules.

iptables Mode (Default)

This is the standard mode in most clusters. kube-proxy writes iptables NAT rules that intercept packets destined for Service ClusterIPs and DNAT (Destination NAT) them to a randomly selected backend Pod IP.

How it works:

  1. A Pod sends a packet to 10.96.42.12:80 (the ClusterIP).
  2. The packet hits the PREROUTING or OUTPUT chain in the node's iptables.
  3. A chain of rules matches the destination IP and port to a specific Service.
  4. A probabilistic rule set (using --probability flags) picks one backend Pod.
  5. The destination IP is rewritten (DNAT) to the chosen Pod IP, e.g., 10.244.2.5:8080.
  6. The packet is routed to the Pod. Return traffic is automatically un-NAT'd (conntrack).
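
The probabilistic rule set in step 4 is worth unpacking: for N backends, kube-proxy emits a cascade of rules with probabilities 1/N, 1/(N-1), ..., 1, which together produce a uniform distribution. A Python sketch of that cascade (illustrative only, not actual iptables output):

```python
import random

def pick_backend(backends, rng=random.random):
    """Mimic kube-proxy's iptables rule cascade: rule i fires with
    probability 1/(N - i), so the last rule always matches."""
    n = len(backends)
    for i, backend in enumerate(backends):
        # iptables: -m statistic --mode random --probability 1/(n-i)
        if rng() < 1.0 / (n - i):
            return backend
    return backends[-1]  # unreachable: the final rule has probability 1

backends = ["10.244.2.5:8080", "10.244.1.7:8080", "10.244.3.9:8080"]
counts = {b: 0 for b in backends}
for _ in range(30000):
    counts[pick_backend(backends)] += 1
# Each backend ends up with roughly one third of the connections.
```

The cascading probabilities are why the distribution is uniform overall even though each individual rule has a different match probability.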

Characteristics:

  • No user-space overhead -- all packet processing happens in the kernel.
  • Load balancing is random (statistically uniform), not round-robin.
  • Connection affinity can be enabled with sessionAffinity: ClientIP.
  • Rule count scales as O(N * M) where N = Services and M = endpoints per Service. In clusters with thousands of Services, rule updates can take seconds and consume significant CPU.

IPVS Mode (High Performance)

Uses the IP Virtual Server module in the Linux kernel, which is purpose-built for load balancing. kube-proxy creates an IPVS virtual server for each Service ClusterIP and adds real servers for each backend Pod.

Advantages over iptables:

  • Uses hash tables instead of sequential rule chains -- O(1) lookup time.
  • Supports multiple load-balancing algorithms: round-robin (rr), least connections (lc), destination hashing (dh), source hashing (sh), shortest expected delay (sed), and never queue (nq).
  • Handles tens of thousands of Services without performance degradation.
  • Rule programming is incremental rather than full-table rewrite.

How to enable:

# In the kube-proxy ConfigMap
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin by default

You must ensure the ip_vs, ip_vs_rr, ip_vs_wrr, and ip_vs_sh kernel modules are loaded on every Node.

nftables Mode (Emerging)

Kubernetes v1.29+ introduced an alpha nftables mode. nftables is the modern successor to iptables in the Linux kernel and offers better performance and a cleaner rule representation. Check the Kubernetes release notes for your version to see if it has reached beta or GA.

The Virtual IP Is Not Real

This is worth emphasizing: if you run ip addr on any Node, you will not find the ClusterIP on any interface. If you try to ping a ClusterIP, it may not respond (ICMP is not always programmed into the rules). The ClusterIP only "exists" in the sense that the kernel intercepts TCP/UDP packets destined for it and rewrites them. This is why curl to a ClusterIP works but ping may not.


5. EndpointSlices: The Scalable Backend Registry

When you create a Service with a selector, the EndpointSlice controller (part of kube-controller-manager) creates EndpointSlice objects that list the IP addresses of all matching, ready Pods.

Why EndpointSlices Replaced Endpoints

Before EndpointSlices became the default (Kubernetes v1.21), a Service mapped to a single Endpoints object containing ALL Pod IPs. If you had 5,000 Pods behind a Service, this single object grew to several megabytes. Every time one Pod was added or removed, the entire object was re-sent to every kube-proxy on every Node. This caused:

  • API server and etcd write amplification.
  • Network bandwidth spikes during rolling updates.
  • Slow convergence in large clusters.

EndpointSlices break this list into chunks (slices), each holding up to 100 endpoints by default. When a Pod changes, only the affected slice is updated and transmitted. This reduces API server load by orders of magnitude in large clusters.
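
The chunking itself is simple to picture (100 endpoints per slice is the default, tunable via the controller's --max-endpoints-per-slice flag):

```python
def chunk_endpoints(pod_ips, max_per_slice=100):
    """Split a Service's backend IPs into EndpointSlice-sized chunks."""
    return [pod_ips[i:i + max_per_slice]
            for i in range(0, len(pod_ips), max_per_slice)]

# 5,000 Pods behind one Service (synthetic IPs for illustration)
ips = [f"10.244.{i // 250}.{i % 250}" for i in range(5000)]
slices = chunk_endpoints(ips)
assert len(slices) == 50                  # 5000 endpoints -> 50 slices
assert all(len(s) <= 100 for s in slices)
# Removing one Pod now touches a single ~100-entry slice,
# not one 5000-entry object.
```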

Inspecting EndpointSlices

# List EndpointSlices for a Service
kubectl get endpointslices -l kubernetes.io/service-name=backend-api

# View details
kubectl describe endpointslice backend-api-abc12

Each EndpointSlice contains:

  • addressType: IPv4 or IPv6.
  • endpoints[]: List of Pod IPs, each annotated with ready, serving, and terminating conditions.
  • ports[]: The port numbers the endpoints expose.

Topology-Aware Routing

EndpointSlices carry topology hints (zone information). When enabled, kube-proxy prefers routing traffic to Pods in the same availability zone as the caller, reducing cross-zone data transfer costs in cloud environments. This is configured via the service.kubernetes.io/topology-mode: Auto annotation on the Service.
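
The preference logic boils down to: filter endpoints whose zone hint matches the client's zone, and fall back to all endpoints if none match. A sketch of the idea (an illustration, not kube-proxy's actual implementation):

```python
def select_endpoints(endpoints, client_zone):
    """Prefer endpoints hinted for the client's zone; fall back to all
    endpoints when no hint matches (safety behavior)."""
    local = [e for e in endpoints if client_zone in e.get("hints", [])]
    return local or endpoints

eps = [
    {"ip": "10.244.1.10", "hints": ["us-east-1a"]},
    {"ip": "10.244.2.15", "hints": ["us-east-1b"]},
]
assert select_endpoints(eps, "us-east-1a") == [eps[0]]  # zone-local only
assert select_endpoints(eps, "us-east-1c") == eps       # no match: all
```

The fallback matters: the controller also refuses to publish hints at all when zones are too unbalanced, so traffic is never blackholed for lack of a local endpoint.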


6. Service Discovery: DNS vs Environment Variables

How does frontend find backend?

Kubernetes runs a DNS server -- CoreDNS by default -- as a Deployment inside the kube-system namespace. Every Service gets a DNS entry following this pattern:

<service-name>.<namespace>.svc.cluster.local

If caller and callee are in the same namespace, the short name works:

http://backend-api          # Resolves via search domains
http://backend-api.production # Cross-namespace, explicit
http://backend-api.production.svc.cluster.local # Fully qualified

SRV records are also created for named ports:

_http._tcp.backend-api.production.svc.cluster.local

This allows clients to discover both the hostname and port dynamically using SRV lookups, which is useful for protocols like gRPC that support SRV-based service discovery.
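
The SRV name follows the RFC 2782 pattern `_<port-name>._<protocol>.<service FQDN>`, so constructing it is mechanical (srv_name is a hypothetical helper for illustration):

```python
def srv_name(port_name, protocol, service, namespace,
             cluster_domain="cluster.local"):
    """Construct the SRV record name for a named Service port."""
    return (f"_{port_name}._{protocol.lower()}."
            f"{service}.{namespace}.svc.{cluster_domain}")

assert srv_name("http", "TCP", "backend-api", "production") == \
    "_http._tcp.backend-api.production.svc.cluster.local"
```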

Option B: Environment Variables (Legacy)

When a Pod starts, the kubelet injects environment variables for every Service that exists at that point in time in the same namespace:

BACKEND_API_SERVICE_HOST=10.96.42.12
BACKEND_API_SERVICE_PORT=80
BACKEND_API_SERVICE_PORT_HTTP=80

Pitfall: This only works if the Service was created before the Pod. If you create the Service after the Pod, the Pod will not have the variables. For this reason, always use DNS.
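
The variable names are derived mechanically from the Service name: uppercase it and replace dashes with underscores. A sketch of the core variables (the kubelet also injects Docker-links-style variables such as {PREFIX}_PORT_80_TCP, omitted here for brevity):

```python
def service_env_vars(name, cluster_ip, ports):
    """Generate the main legacy Service environment variables the
    kubelet injects into Pods in the same namespace."""
    prefix = name.upper().replace("-", "_")
    env = {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(ports[0]["port"]),
    }
    for p in ports:
        if p.get("name"):
            env[f"{prefix}_SERVICE_PORT_{p['name'].upper()}"] = str(p["port"])
    return env

env = service_env_vars("backend-api", "10.96.42.12",
                       [{"name": "http", "port": 80}])
assert env["BACKEND_API_SERVICE_HOST"] == "10.96.42.12"
assert env["BACKEND_API_SERVICE_PORT_HTTP"] == "80"
```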


7. Headless Services and StatefulSets

Headless Services (with clusterIP: None) are the backbone of stateful workloads in Kubernetes. They pair naturally with StatefulSets.

How Headless DNS Works

For a normal ClusterIP Service, a DNS lookup returns a single A record -- the ClusterIP. For a headless Service, DNS returns one A record per ready Pod.

# Normal Service:
dig backend-api.production.svc.cluster.local
# ANSWER: 10.96.42.12

# Headless Service:
dig postgres.database.svc.cluster.local
# ANSWER: 10.244.1.10, 10.244.2.15, 10.244.3.22

Per-Pod DNS with StatefulSets

When a headless Service is used as the serviceName of a StatefulSet, each Pod gets a stable, predictable DNS name:

<pod-name>.<service-name>.<namespace>.svc.cluster.local

For example, a StatefulSet named postgres with 3 replicas and a headless Service also named postgres:

postgres-0.postgres.database.svc.cluster.local -> 10.244.1.10
postgres-1.postgres.database.svc.cluster.local -> 10.244.2.15
postgres-2.postgres.database.svc.cluster.local -> 10.244.3.22

This is essential for:

  • Database replication: The primary (postgres-0) can be addressed directly. Replicas know which peer to replicate from.
  • Distributed consensus: Systems like etcd, ZooKeeper, and Kafka need stable network identities for peer discovery.
  • Client-side routing: Applications like Redis Cluster expect clients to connect to specific shard owners.
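
The naming rule is regular enough to sketch as a one-liner per replica:

```python
def statefulset_pod_dns(statefulset, service, namespace, replicas,
                        cluster_domain="cluster.local"):
    """Stable per-Pod DNS names for a StatefulSet backed by a headless
    Service (the StatefulSet's serviceName must equal `service`)."""
    return [f"{statefulset}-{i}.{service}.{namespace}.svc.{cluster_domain}"
            for i in range(replicas)]

names = statefulset_pod_dns("postgres", "postgres", "database", 3)
assert names[0] == "postgres-0.postgres.database.svc.cluster.local"
assert len(names) == 3
```

A replica can therefore compute its peers' addresses from nothing but its own ordinal and the replica count, which is exactly how many consensus systems bootstrap.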

Complete StatefulSet + Headless Service Example

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: cache
spec:
  clusterIP: None
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: cache
spec:
  serviceName: redis   # Must match the headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Each Pod gets a persistent identity: redis-0, redis-1, redis-2. Their DNS names (redis-0.redis.cache.svc.cluster.local) survive Pod restarts, even if the underlying IP changes.


8. Session Affinity and Traffic Policies

Session Affinity

By default, a Service distributes each connection to a random backend. If your application requires that a client consistently hits the same Pod (e.g., for in-memory sessions), enable session affinity:

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600   # Affinity lasts 1 hour

This causes kube-proxy to route all connections from the same source IP to the same backend Pod for the duration of the timeout.

Internal and External Traffic Policies

Two fields control how traffic is routed:

spec.internalTrafficPolicy: Controls routing for traffic originating from within the cluster.

  • Cluster (default): Traffic can be routed to any backend Pod across all Nodes.
  • Local: Traffic is only routed to backends on the same Node as the caller. If there are no local endpoints, the traffic is dropped. Useful for DaemonSet-backed Services (like log collectors) where node-local access is required.

spec.externalTrafficPolicy: Controls routing for traffic entering through NodePort or LoadBalancer.

  • Cluster (default): Traffic can be routed to any backend Pod, but the original client IP is lost (due to SNAT).
  • Local: Traffic is only sent to Pods on the Node that received the request. This preserves the client's source IP but means that Nodes without a backend Pod will fail health checks and be removed from the load balancer.

9. Common Pitfalls

1. Selector Mismatch

The number one cause of "my Service has no endpoints." The labels in spec.selector must exactly match the labels on the Pod template. Watch for typos, missing labels, and namespace mismatches.

# Quick check:
kubectl get endpoints my-service
# If <none>, compare:
kubectl get svc my-service -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels
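
Selector matching is a subset test: every key/value pair in spec.selector must appear in the Pod's labels, while extra Pod labels are ignored. A sketch of the check:

```python
def selector_matches(selector, pod_labels):
    """True if every selector key/value pair is present in pod_labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "backend", "tier": "api"}
assert selector_matches(selector, {"app": "backend", "tier": "api",
                                   "version": "v2"})       # extra labels OK
assert not selector_matches(selector, {"app": "backend"})  # missing tier
assert not selector_matches(selector, {"app": "backend",
                                       "tier": "web"})     # wrong value
```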

2. targetPort Does Not Match containerPort

Your Service sends traffic to targetPort. If your container listens on 3000 but the Service targets 8080, connections will be refused. Always verify:

kubectl describe svc my-service     # Check targetPort
kubectl get pod <pod> -o jsonpath='{.spec.containers[*].ports[*].containerPort}'

3. Forgetting Readiness Probes

Without a readiness probe, a Pod is considered "Ready" as soon as its container starts -- even if the application inside takes 30 seconds to initialize. During that window, the Service will route traffic to a Pod that is not ready to handle it, causing 502 or connection-refused errors. Always define readiness probes for production workloads.

4. ExternalTrafficPolicy: Local with Uneven Distribution

When externalTrafficPolicy: Local is set, only Nodes running a backend Pod receive traffic from the load balancer. If Node A has 1 Pod and Node B has 5 Pods, each Pod on Node A gets 5x the traffic of each Pod on Node B, because the cloud LB distributes equally across Nodes, not Pods.
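
The skew is easy to quantify: the LB splits traffic evenly across Nodes, then each Node splits its share across its local Pods. A sketch with the numbers from above:

```python
def per_pod_share(pods_per_node):
    """Traffic fraction each Pod receives under externalTrafficPolicy:
    Local, assuming the cloud LB balances equally across Nodes."""
    nodes = len(pods_per_node)
    shares = {}
    for node, pod_count in pods_per_node.items():
        for i in range(pod_count):
            shares[f"{node}/pod-{i}"] = 1 / nodes / pod_count
    return shares

shares = per_pod_share({"node-a": 1, "node-b": 5})
assert shares["node-a/pod-0"] == 0.5              # 50% of all traffic
assert abs(shares["node-b/pod-0"] - 0.1) < 1e-9   # 10% each
# The lone Pod on node-a receives 5x the traffic of each node-b Pod.
```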

5. DNS Resolution Caching

Some application runtimes and HTTP clients cache DNS results or hold on to resolved connections long after the records change. If a headless Service's Pod set changes, the application may not notice for minutes. Check the relevant layer:

  • Java: The JVM caches successful lookups (indefinitely when a security manager is installed). Set networkaddress.cache.ttl=5 in java.security or pass the -Dsun.net.inetaddr.ttl=5 JVM flag.
  • Node.js: dns.lookup does not cache by default, but keep-alive agents pin connections to stale IPs; packages such as cacheable-lookup add caching with an explicit TTL you control.
  • Go: The net package re-resolves on each Dial, so stale DNS is rarely the issue -- long-lived pooled connections in an http.Transport are.

Why this is confusing: CoreDNS itself updates endpoints immediately when Pods change. The stale DNS problem is almost always in the application layer, not in CoreDNS. Connection pools, HTTP clients, and gRPC channels often resolve DNS once at startup and reuse the same IP for the lifetime of the connection.

Debugging steps:

  1. Verify CoreDNS is serving fresh records: run dig +short my-headless-svc.ns.svc.cluster.local from a debug pod repeatedly and compare against kubectl get endpoints.
  2. Check your application's DNS caching behavior — look for connection pool configurations, DNS TTL overrides, or HTTP keep-alive settings.
  3. For headless Services, watch DNS records update in real time during a rollout:
    # From a netshoot debug pod — watch DNS records change as pods roll
    watch -n 1 "dig +short my-headless-svc.default.svc.cluster.local"

6. Creating Too Many LoadBalancer Services

Each Service of type LoadBalancer provisions its own cloud load balancer. In AWS, each provisioned NLB or Classic ELB costs money and counts against account quotas. Use an Ingress controller or Gateway API to multiplex many HTTP routes through a single load balancer.


10. Troubleshooting Guide

Problem: "I can't connect to my Service!"

Follow this checklist top to bottom:

Step 1: Check Endpoints

kubectl get endpoints my-service
# or the modern equivalent:
kubectl get endpointslices -l kubernetes.io/service-name=my-service
  • Result is empty? Your selector does not match any Pods. Verify labels match.
  • Result has IPs? Good -- the Service knows where to send traffic. Move to the next step.

Step 2: Check Pod Readiness

kubectl get pods -l app=my-app -o wide

Are the Pods in Running state? Are they showing READY 1/1? If readiness probes are failing, the Pod will be removed from the EndpointSlice even though it is Running.

Step 3: Verify Ports

# From inside a debug Pod:
kubectl run debug --rm -it --image=nicolaka/netshoot -- bash
# Then:
curl -v http://my-service.default.svc.cluster.local:80

If the Service is reachable but your app returns errors, the Service networking is fine -- the problem is in your application.

Step 4: Test Direct Pod Connectivity

# Get a Pod IP from the endpoints list
kubectl exec debug -- curl -v http://10.244.2.5:8080

If you can reach the Pod directly but not through the Service, the issue is in kube-proxy rules. Check if kube-proxy is running on the Node:

kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system <kube-proxy-pod>

Step 5: DNS Resolution

kubectl exec debug -- nslookup my-service.default.svc.cluster.local

If DNS fails, check that CoreDNS is running:

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

Step 5b: Deep DNS Debugging

When basic DNS checks pass but resolution is still flaky, use these techniques from a debug pod:

# Launch a debug pod with network tools
kubectl run netshoot --rm -it --image=nicolaka/netshoot -- bash

# 1. Trace the full DNS search path — shows which domains are attempted
dig my-service.default.svc.cluster.local +search +showsearch

# 2. Test with FQDN (trailing dot) to bypass search domain expansion
dig my-service.default.svc.cluster.local. +short

# 3. Check the pod's DNS configuration
cat /etc/resolv.conf
# Look for: nameserver, search domains, and ndots value

CoreDNS Troubleshooting:

If DNS resolution fails entirely, check the CoreDNS stack:

# Are CoreDNS pods running?
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# Check CoreDNS logs for errors (NXDOMAIN, timeouts, loop detection)
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50

# Inspect the CoreDNS ConfigMap for misconfigurations
kubectl get configmap coredns -n kube-system -o yaml

# Check CoreDNS metrics (if prometheus is configured)
kubectl exec -n kube-system <coredns-pod> -- wget -qO- http://localhost:9153/metrics | grep coredns_dns_requests_total

The ndots:5 Performance Trap

By default, Kubernetes sets ndots: 5 in every Pod's /etc/resolv.conf. This means any hostname with fewer than 5 dots is treated as a relative name, and the resolver appends each search domain before trying the name as-is.

For an external hostname like api.stripe.com (2 dots, less than 5), the resolver tries:

  1. api.stripe.com.default.svc.cluster.local (NXDOMAIN)
  2. api.stripe.com.svc.cluster.local (NXDOMAIN)
  3. api.stripe.com.cluster.local (NXDOMAIN)
  4. api.stripe.com (success)

This means 4 DNS queries instead of 1 for every external call — multiplied across all your Pods, this can overwhelm CoreDNS.
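
The resolver's behavior is easy to reproduce: names with fewer dots than ndots get each search domain appended first, and the literal name is tried last (a trailing dot marks an FQDN and skips expansion entirely). A sketch of glibc-style expansion:

```python
def dns_query_order(name, search_domains, ndots=5):
    """Reproduce resolv.conf search-list expansion for a queried name."""
    if name.endswith("."):              # FQDN: no search expansion at all
        return [name.rstrip(".")]
    if name.count(".") >= ndots:        # "absolute enough": try as-is first
        return [name] + [f"{name}.{d}" for d in search_domains]
    return [f"{name}.{d}" for d in search_domains] + [name]

search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
queries = dns_query_order("api.stripe.com", search, ndots=5)
assert queries[0] == "api.stripe.com.default.svc.cluster.local"
assert queries[-1] == "api.stripe.com"
assert len(queries) == 4                           # 4 lookups instead of 1
assert dns_query_order("api.stripe.com.", search) == ["api.stripe.com"]
```

This is also why appending a trailing dot to external hostnames in application config is a cheap mitigation when you cannot change ndots.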

Quick fix: Set ndots: 2 on Pods that make frequent external DNS calls:

spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"

For a comprehensive discussion of DNS optimization and service discovery patterns, see Service Discovery.

Step 6: Network Policies

If you have NetworkPolicy objects in the namespace, they may be blocking ingress to your Pods. Check:

kubectl get networkpolicies -n <namespace>

A NetworkPolicy that selects your Pods will deny all ingress by default unless explicitly allowed.


11. Hands-On Exercise

Put the concepts from this page into practice with this exercise. You will create a backend Deployment, expose it with different Service types, and observe the networking behavior.

Prerequisites

  • A running Kubernetes cluster (minikube, kind, or a cloud cluster).
  • kubectl configured and connected.

Step 1: Create the Backend

kubectl create namespace svc-lab

kubectl apply -n svc-lab -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo:0.2.3
          args: ["-text=hello from echo-server"]
          ports:
            - containerPort: 5678
EOF

Step 2: Expose as ClusterIP

kubectl apply -n svc-lab -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: echo-clusterip
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 5678
EOF

Verify:

kubectl get svc -n svc-lab echo-clusterip
kubectl get endpoints -n svc-lab echo-clusterip
# You should see 3 endpoint IPs

# Test from inside the cluster:
kubectl run curl-test --rm -it --image=curlimages/curl -n svc-lab -- \
curl -s http://echo-clusterip
# Expected output: hello from echo-server

Step 3: Expose as NodePort

kubectl apply -n svc-lab -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: echo-nodeport
spec:
  type: NodePort
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 5678
      nodePort: 30080
EOF

Verify:

# Get the Node IP (for minikube):
minikube ip
# Then from your host machine:
curl http://$(minikube ip):30080

Step 4: Create a Headless Service and Observe DNS

kubectl apply -n svc-lab -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: echo-headless
spec:
  clusterIP: None
  selector:
    app: echo
  ports:
    - port: 5678
      targetPort: 5678
EOF

Verify:

kubectl run dns-test --rm -it --image=nicolaka/netshoot -n svc-lab -- \
dig echo-headless.svc-lab.svc.cluster.local +short
# You should see 3 individual Pod IPs instead of a single ClusterIP

Step 5: Clean Up

kubectl delete namespace svc-lab


12. What's Next?

Now that you understand how Services provide stable networking and discovery for Pods, explore these related topics:

  • Ingress / Gateway API: Route external HTTP/HTTPS traffic to multiple internal Services based on hostname and path rules, using a single load balancer.
  • Network Policies: Define firewall rules that control which Pods can communicate with each other and with the outside world.
  • Deployments: Understand how rolling updates interact with Service endpoints -- specifically how readiness probes gate traffic during rollouts.
  • Storage: Learn about PersistentVolumes and PersistentVolumeClaims, which pair with StatefulSets and headless Services for stateful workloads.
  • CoreDNS Configuration: Customize DNS behavior, add upstream resolvers, or configure conditional forwarding for hybrid-cloud setups.