Services provide points of entry to one or more Pods, behaving as load balancers operating at layer 4 of the OSI model.
Services are long-lived, designed to provide access to applications at a predictable location despite the ephemeral nature of Pods. Once assigned, a Service's name, cluster IP address, and port remain stable for its lifetime.
"Endpoints" providing a service can be specified in one of two ways:
- Endpoint resources can be manually created for each backend; or
- A selector matching Pod labels can be specified, in which case Kubernetes determines which Pods are members of the Service and manages the Endpoints automatically.
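A minimal sketch of the selector-based approach (the name, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical Service name
spec:
  selector:
    app: web           # Pods labelled app=web become Endpoints
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8080   # port the container listens on
```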
Readiness probes determine whether a Pod will receive traffic from a Service; they're not to be confused with liveness probes, which determine whether a Pod should be restarted.
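A readiness probe is declared on the container; this sketch assumes an HTTP health endpoint (the path and port are illustrative):

```yaml
# Inside a container spec
readinessProbe:
  httpGet:
    path: /healthz     # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```

Until the probe succeeds, the Pod is removed from the Service's Endpoints and receives no traffic.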
`kube-proxy` directs traffic to the correct Pods, even when no suitable Pod is running on the local Node.
There are four different types of service:
- ClusterIP uses a VIP accessible within the cluster. These services are accessible from all cluster Nodes, even those not currently running a Pod matching the Service's selector; traffic will be proxied to an appropriate Pod. ClusterIP services are accessible only within the cluster by default, though they can accept external traffic via Ingress.
- NodePort exposes a port on each Node, either set by `nodePort` or selected from a predefined range (30000-32767 by default).
- LoadBalancer uses a cloud-specific load balancer implementation.
- ExternalName maps an external DNS name for a service to the cluster using a CNAME, allowing us to reduce the number of configuration changes applied to our Pods.
`None` is not a fifth type but a special value for `clusterIP`: such "headless" Services allow service discovery via cluster DNS but perform no load balancing and provision no cluster IP.
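A headless Service might look like this (name and labels are illustrative); DNS lookups for it return the individual Pod IPs rather than a single virtual IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-headless   # hypothetical name
spec:
  clusterIP: None      # headless: no VIP, no load balancing
  selector:
    app: web
  ports:
  - port: 80
```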
Session affinity, where a user session is tied to a single Pod, can be configured in the Service spec:

```yaml
# Inside a Service.spec
sessionAffinity: ClientIP
sessionAffinityConfig:
  clientIP:
    timeoutSeconds: 10800 # default
```
Forward local port 8080 to port 80 on the pod (only accessible locally):

```shell
kubectl port-forward pod/name 8080:80
```
Or, ports can be named in a manifest (the `port` value here is illustrative), letting a Service's `targetPort` refer to them by name:

```yaml
ports:
- name: http
  port: 80
```
Ingress is designed to address the limitations of the Service types above:
- Manual configuration of external load balancers when using NodePort services on-premises.
- `kube-proxy` network hops introduce latency.
- Load balancers per-service can be expensive.
- No host- or path-based routing, as there's no layer 7 awareness.
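A sketch of the host- and path-based routing Ingress enables (the host, path, and backend Service name are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress        # hypothetical name
spec:
  rules:
  - host: example.com      # layer 7: route by Host header
    http:
      paths:
      - path: /api         # layer 7: route by URL path
        pathType: Prefix
        backend:
          service:
            name: api      # hypothetical backend Service
            port:
              number: 80
```

A single Ingress (and one load balancer) can thus front many Services, avoiding a load balancer per Service.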