Learn Services & Networking (CKA) with Interactive Flashcards
Master key concepts in Services & Networking through our interactive flashcard system. Click on each card to reveal detailed explanations and enhance your understanding.
Understand connectivity between Pods
In the context of the Certified Kubernetes Administrator (CKA) exam, understanding Pod connectivity is anchored in the 'IP-per-Pod' model. Unlike legacy container networking, which relied on port mapping, Kubernetes mandates a flat network structure in which every Pod receives a unique IP address that is routable within the cluster.
The fundamental requirements for this network model, implemented by CNI (Container Network Interface) plugins like Calico, Flannel, or Cilium, are:
1. Pod-to-Pod: All Pods can communicate with all other Pods without Network Address Translation (NAT), regardless of which node they reside on.
2. Node-to-Pod: Agents on a node (e.g., system daemons, Kubelet) can communicate with all Pods on that node.
3. Host Network: Pods in the host network of a node can communicate with all Pods on all nodes without NAT.
Mechanically, inside a single node, connectivity is often handled via a virtual ethernet bridge and veth pairs connecting the Pod's network namespace to the root namespace. Across nodes, the CNI plugin manages routing via overlay networks (encapsulation like VXLAN) or direct routing protocols (like BGP) to ensure packets reach the destination node.
Crucially for the CKA, you must remember that this connectivity is open by default; any Pod can reach any other Pod. To restrict this, you must implement Network Policies, which serve as an in-cluster firewall. A Network Policy selects groups of Pods using labels and defines specific allow-list rules for Ingress (incoming) and Egress (outgoing) traffic. Mastery of this topic involves not just ensuring connectivity works (debugging via `ping` or `curl` from ephemeral containers), but also securing it explicitly.
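As a quick, hands-on check of this model, you might launch a short-lived client Pod and have it fetch from another Pod's IP. The manifest below is only a sketch: the Pod name, image tag, target IP, and port are placeholders you would substitute from `kubectl get pods -o wide`.
```
# Throwaway client Pod for a one-shot connectivity check
# (all names and addresses below are illustrative placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: net-test
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox:1.36
    # Replace 10.244.1.23:8080 with the target Pod's IP and port.
    command: ["wget", "-qO-", "-T", "2", "http://10.244.1.23:8080"]
```
`kubectl logs net-test` then shows whether the target answered; `kubectl debug` with an ephemeral container is the equivalent approach against a Pod that is already running.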
Define and enforce Network Policies
In Kubernetes, Network Policies act as a software-defined firewall governing how groups of Pods communicate with each other and other network endpoints. For the CKA exam, understanding this concept is vital for securing cluster networking.
By default, Kubernetes follows an "allow-all" philosophy: any Pod can talk to any other Pod across namespaces. Network Policies allow you to enforce granular traffic control to override this behavior. However, the enforcement relies entirely on the CNI (Container Network Interface) plugin; simple plugins like Flannel do not support them, while plugins like Calico, Weave Net, and Cilium do. If the CNI does not support policies, creating the resource will have no effect.
To define a policy, you create a `NetworkPolicy` resource using YAML. The core logic relies on the `podSelector`. If a Pod is matched by a policy, it becomes "isolated" for the directions listed in that policy's `policyTypes`, rejecting all traffic in those directions unless a rule explicitly allows it. If a Pod is not selected by any policy, it remains open to all traffic.
A policy specification typically includes:
1. podSelector: Defines the target Pods to secure.
2. policyTypes: Specifies whether the policy controls `Ingress` (incoming traffic), `Egress` (outgoing traffic), or both.
3. Rules: The ingress and egress sections define whitelist rules based on `ipBlock` (CIDR ranges), `namespaceSelector` (traffic from specific namespaces), `podSelector` (traffic from specific Pods via labels), and specific TCP/UDP ports.
A common CKA task involves implementing a "Default Deny" policy. By selecting all Pods in a namespace but providing no allow rules, you effectively block all traffic. You then layer specific policies on top to whitelist necessary communication (e.g., allowing a frontend Pod to access a backend database on port 5432). Mastery involves correctly combining label selectors to achieve the principle of least privilege.
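As a sketch of that layering, the two manifests below combine a default-deny policy with a specific allow rule; the namespace, labels, and port are assumptions chosen for illustration.
```
# 1) Default deny: selects every Pod in the namespace and allows nothing.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: demo              # illustrative namespace
spec:
  podSelector: {}              # empty selector = all Pods in this namespace
  policyTypes:
  - Ingress
  - Egress
---
# 2) Allow rule layered on top: frontend Pods may reach backend Pods on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend             # the Pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # the permitted clients
    ports:
    - protocol: TCP
      port: 5432
```
Note that if a separate policy has isolated the frontend Pods for Egress, their outgoing traffic to port 5432 would also need an explicit allow rule.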
Use ClusterIP, NodePort, LoadBalancer service types and endpoints
In the context of the CKA exam, a Service provides a stable abstraction for reaching a set of Pods selected by labels; the Service type determines how widely that set is exposed.
**ClusterIP** is the default Service type. It exposes the Service on a cluster-internal IP. This type makes the Service only reachable from within the cluster, making it ideal for internal communication between microservices (e.g., a frontend pod connecting to a backend database pod).
**NodePort** exposes the Service on each Node's IP at a static port (default range 30000-32767). A ClusterIP is automatically created to route the traffic. You can access the Service from outside the cluster by requesting <NodeIP>:<NodePort>. It is useful for development or on-premise environments where cloud load balancers are unavailable.
**LoadBalancer** exposes the Service externally using a cloud provider's load balancer (e.g., AWS ELB, Google Cloud LB). The cloud provider provisions an external IP address (or DNS hostname) for the Service. NodePort and ClusterIP types are created automatically to support this routing. This is the standard way to expose production applications to the internet on managed clouds.
**Endpoints** track the actual IP addresses of the Pods backing a Service. When you create a Service with a selector, Kubernetes automatically creates an Endpoints object populated with the IPs of matching Pods. If you create a Service without a selector, no Endpoints object is created automatically; this allows you to manually define Endpoints to map a Service to external IPs (non-Kubernetes workloads) while still using the internal DNS discovery.
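The two sketches below illustrate a NodePort Service and the selector-less Service plus manual Endpoints pattern; the names, labels, ports, and external IP are illustrative placeholders.
```
# NodePort Service: reachable from outside the cluster at <NodeIP>:30080.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web                   # matching Pod IPs become the Endpoints
  ports:
  - port: 80                   # ClusterIP port
    targetPort: 8080           # container port
    nodePort: 30080            # must fall within 30000-32767
---
# Selector-less Service mapped to an external backend via a manual Endpoints object.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db            # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10             # example external IP
  ports:
  - port: 5432
```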
Use the Gateway API to manage Ingress traffic
The Kubernetes Gateway API represents the evolution of traffic management, designed to supersede the traditional Ingress API by addressing limitations like annotation sprawl and lack of portability. In the context of the CKA exam and general Services & Networking, the Gateway API introduces a role-oriented resource model that decouples infrastructure provisioning from application routing. It utilizes three primary Custom Resource Definitions (CRDs):
1. GatewayClass: Managed by infrastructure providers, this defines the controller implementation (e.g., Istio, NGINX, or cloud-provider load balancers).
2. Gateway: Managed by cluster operators, this resource instantiates the load balancer, defining entry points (listeners), ports, and protocols.
3. HTTPRoute: Managed by developers, this defines the actual routing logic (path matching, header manipulation) and links to backend Services.
To manage Ingress traffic using this API, you typically define an HTTPRoute that explicitly references a specific Gateway using the 'parentRefs' field. This binding mechanism allows for advanced patterns like cross-namespace routing and traffic splitting (e.g., canary deployments) without relying on non-standard annotations. Unlike the monolithic Ingress resource, the Gateway API allows multiple routes to attach to a single listener, enabling different teams to manage their routing rules independently while sharing a single physical load balancer. For CKA scenarios, you should focus on understanding how to configure 'match' rules, 'backendRefs', and how to bind routes to gateways to successfully direct external traffic to internal ClusterIP services.
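A minimal sketch of this binding, assuming a GatewayClass named 'example-class' is already installed, might look like the following; the namespaces, hostnames, and Service names are placeholders.
```
# Gateway (cluster operator): provisions an HTTP listener on port 80.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra                   # illustrative namespace
spec:
  gatewayClassName: example-class    # must reference an installed GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                    # permit routes from other namespaces to attach
---
# HTTPRoute (developer): binds to the Gateway via parentRefs and routes /api traffic.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
  namespace: app-team
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-svc                  # internal ClusterIP Service
      port: 8080
```
Adding a second entry under backendRefs with a weight field is how the traffic-splitting (canary) pattern mentioned above is expressed.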
Know how to use Ingress controllers and Ingress resources
In the context of the CKA exam, mastering Ingress requires understanding the relationship between the **Ingress Controller** and the **Ingress Resource** to manage Layer 7 (HTTP/HTTPS) external access.
1. **The Ingress Controller**: This is the backend software (e.g., NGINX, HAProxy, Traefik) that actually routes the traffic. Unlike built-in controllers, it is not part of the standard kube-controller-manager and must be deployed separately (usually as a Deployment exposed via a NodePort or LoadBalancer Service). In the exam, you may be asked to deploy a controller using provided manifests or debug why an existing controller isn't processing rules.
2. **The Ingress Resource**: This is the configuration object (`networking.k8s.io/v1`) where you define routing rules. You must be proficient in creating YAML manifests that specify:
- **Rules**: Mapping traffic based on **Hosts** (e.g., `video.example.com`) and **Paths** (e.g., `/api` vs `/login`).
- **Backends**: Pointing specific rules to the correct internal Service (name and port).
- **PathType**: Correctly setting `Prefix` or `Exact` matching.
- **Annotations**: Using controller-specific metadata, such as `nginx.ingress.kubernetes.io/rewrite-target`, to modify request paths before they reach the application.
- **IngressClass**: Specifying which controller should handle the resource.
A vital concept to remember is that an Ingress Resource has no effect without a running Ingress Controller. If you create a resource and the `ADDRESS` field remains empty, the controller is likely missing or misconfigured.
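A representative Ingress manifest combining these elements might look like the sketch below; the hostnames, Service names, and ports are illustrative, and the rewrite annotation only has meaning for the NGINX controller.
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # NGINX-controller specific
spec:
  ingressClassName: nginx            # must match an existing IngressClass
  rules:
  - host: video.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc            # internal ClusterIP Service
            port:
              number: 8080
      - path: /login
        pathType: Exact
        backend:
          service:
            name: auth-svc
            port:
              number: 80
```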
Understand and use CoreDNS
CoreDNS is the modular, extensible DNS server that serves as the default cluster DNS for Kubernetes. It allows Pods to locate Services via human-readable names (Service Discovery) rather than unstable IP addresses. It runs as a Deployment in the `kube-system` namespace, typically named `coredns`, exposed by a Service named `kube-dns`.
**How it Works:**
When a Pod is created, the kubelet configures the Pod's `/etc/resolv.conf` to point to the `kube-dns` Service IP. When an application requests a name like `db-service`, the query goes to CoreDNS, which resolves it to the Service's ClusterIP. It handles FQDNs like `my-svc.my-ns.svc.cluster.local`.
**Configuration (The Corefile):**
CoreDNS behavior is defined in a ConfigMap named `coredns`. The configuration text is called the `Corefile`. Important plugins include:
- `kubernetes`: Resolves in-cluster service names.
- `forward`: Forwards queries for external domains (like google.com) to upstream nameservers (usually inherited from the Node).
- `log`: Useful for debugging DNS query errors.
**CKA Exam Focus:**
1. **Troubleshooting:** You must know how to verify DNS is working. A common technique is deploying a `busybox` pod and running `nslookup kubernetes.default`.
2. **Customization:** You may be asked to configure **conditional forwarding** or **Stub Domains**. This involves editing the `coredns` ConfigMap to route specific traffic (e.g., `*.corp.local`) to a specific external DNS server; a sketch follows this list.
3. **Apply Changes:** After editing the ConfigMap, you must restart the CoreDNS pods (e.g., `kubectl rollout restart deployment coredns -n kube-system`) for changes to take effect.
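To illustrate the customization step, the ConfigMap sketch below adds a stub domain that forwards 'corp.local' queries to an internal DNS server; the upstream IP 10.5.5.5 is a placeholder, and the base Corefile shown is trimmed relative to what a real cluster ships with.
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf   # external names go to the node's resolvers
        cache 30
        loop
        reload
    }
    corp.local:53 {
        errors
        cache 30
        forward . 10.5.5.5           # placeholder corporate DNS server
    }
```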
Kubernetes networking model
The Kubernetes networking model is designed to simplify communication between distributed components by enforcing a 'flat' network structure. Unlike traditional Docker networking, which often relies on port mapping and bridges, Kubernetes mandates an 'IP-per-Pod' model. This treats every Pod like a Virtual Machine with its own unique IP address, accessible across the cluster.
For the CKA exam, you must understand the three fundamental requirements that any Container Network Interface (CNI) plugin (such as Calico, Flannel, or Weave) must satisfy:
1. All containers (Pods) can communicate with all other containers without Network Address Translation (NAT).
2. All nodes can communicate with all containers (and vice versa) without NAT.
3. The IP that a container sees itself as is the same IP that others see it as.
Because Pod IPs are ephemeral and change when Pods are re-created, Kubernetes introduces **Services** and **CoreDNS**. A Service provides a stable Virtual IP (ClusterIP) and DNS name to abstract the dynamic nature of Pods. The **kube-proxy** component runs on every node, managing network rules (usually via iptables or IPVS) to route traffic from these stable Service IPs to the actual backend Pod IPs.
While the network model allows open communication by default, administrators use **NetworkPolicies** to lock down traffic and secure the cluster. Understanding how to troubleshoot the CNI, configure Services, and define NetworkPolicies are critical skills for the CKA certification.
CNI plugins and configuration
In the context of the Certified Kubernetes Administrator (CKA) exam, the Container Network Interface (CNI) is a pivotal standard that defines how plugins configure network interfaces for Linux containers. Kubernetes does not implement networking natively; it relies on CNI plugins to handle Pod connectivity and IP management.
The system is composed of two primary file system locations on every node:
1. **CNI Binaries (`/opt/cni/bin`):** This directory houses the executable files. These include base plugins (like `bridge`, `loopback`, and `host-local`) and third-party provider plugins (like `calico`, `flannel`, or `weave`). These binaries perform the actual work of plumbing the network interfaces.
2. **CNI Configuration (`/etc/cni/net.d`):** The Kubelet inspects this directory to identify which plugin to execute. It reads configuration files (usually `.conf` or `.conflist`) that define the `type` of plugin, the subnet ranges, and IP Address Management (IPAM) details. If multiple files exist, the Kubelet typically uses the one that comes first lexicographically.
**The Workflow:**
When the Kubelet creates a Pod, it reads the config from `/etc/cni/net.d`, finds the corresponding binary in `/opt/cni/bin`, and executes it with the `ADD` command. This assigns an IP address to the Pod and attaches it to the cluster network. Conversely, when a Pod is terminated, the `DEL` command is invoked to clean up the network namespace and release the IP address.
For the CKA exam, you must be able to locate these directories to troubleshoot issues where Pods remain in a `ContainerCreating` state due to missing binaries or malformed configuration files.
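For orientation, a simplified `.conflist` of the kind found under `/etc/cni/net.d` is sketched below, using the basic `bridge` and `host-local` plugins; real provider configurations (Calico, Flannel, etc.) are considerably more involved, and the network name and subnet here are placeholders.
```
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```
The `type` fields must correspond to binaries present in `/opt/cni/bin`; a mismatch here is exactly the kind of problem that leaves Pods stuck in `ContainerCreating`.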
Service discovery and DNS
In the context of the Certified Kubernetes Administrator (CKA) exam, Service Discovery is fundamental because Kubernetes Pods are ephemeral; their IP addresses change dynamically upon recreation, rendering static IP configuration impossible. Kubernetes abstracts this complexity using Services and solves the connectivity challenge primarily through its built-in DNS system, implemented via CoreDNS.
Kubernetes also supports service discovery via environment variables (host/port variables injected into Pods at startup), but that method only covers Services that existed before the Pod was created. Consequently, cluster DNS is the standard mechanism. CoreDNS runs as a Deployment in the 'kube-system' namespace and is exposed by a Service (usually named 'kube-dns'). It watches the Kubernetes API; whenever a Service is created, CoreDNS automatically generates a DNS record mapping the Service name to its ClusterIP.
Key concepts for the exam include the Fully Qualified Domain Name (FQDN) hierarchy: '<service-name>.<namespace>.svc.cluster.local'. Within the same namespace, a Pod can access a Service simply by its name. Across namespaces, the namespace must be appended (e.g., 'db-service.dev'). Technically, the Kubelet configures every Pod's '/etc/resolv.conf' to point to the CoreDNS Service IP as the nameserver.
For CKA troubleshooting, you must know how to: 1) Verify CoreDNS Pods are 'Running' via 'kubectl get pods -n kube-system', 2) Inspect the CoreDNS ConfigMap containing the 'Corefile', and 3) Debug resolution using a temporary utility Pod (e.g., 'kubectl run test --image=busybox:1.28 --restart=Never -- nslookup my-service').
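As a small illustration of the FQDN rules, the Pod sketch below reaches a Service in another namespace by its DNS name; the namespaces, image, and Service name are assumptions.
```
# Pod in 'prod' reaching a Service named 'db-service' in the 'dev' namespace.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: prod
spec:
  containers:
  - name: app
    image: nginx:1.25              # placeholder image
    env:
    - name: DB_HOST
      # The short name 'db-service' would only resolve from inside 'dev';
      # across namespaces, append the namespace (the full suffix is optional).
      value: db-service.dev.svc.cluster.local
```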
Endpoints and EndpointSlices
In Kubernetes networking, Services provide a stable IP for a set of Pods, but the actual traffic routing requires mapping that stable IP to dynamic Pod IPs. This is where Endpoints and EndpointSlices function as the glue between Services and Pods.
**Endpoints** represent the legacy mechanism. When a Service uses a label selector, the control plane creates a single Endpoints object listing the IP addresses and ports of all matching Pods. The `kube-proxy` on every node watches this object to configure iptables or IPVS rules. However, this has scalability limitations. In large clusters with thousands of Pods behind a Service, the Endpoints object becomes massive. Adding or removing a single Pod requires re-transmitting the entire huge object to every node, causing significant network strain.
**EndpointSlices** were introduced to solve this bottleneck. Rather than one monolithic object, the backing Pods are grouped into multiple smaller resources (slices), usually containing up to 100 endpoints each. When a Pod changes, only the specific slice containing that Pod updates, and only that small update is sent to the nodes. This drastically improves performance and scalability.
For the **CKA exam**, you must understand that while you define Services, connectivity fails if the Endpoints are not populated. If a Service is unreachable, use `kubectl get endpoints` or `kubectl get endpointslices` to verify that the Service's selector is correctly finding the Pod IPs. If the list is empty, your labels likely do not match.
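For reference, a control-plane-managed EndpointSlice for a hypothetical Service named 'web' looks roughly like the sketch below; the generated name suffix, port, and Pod IP are illustrative.
```
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12                      # the control plane appends a random suffix
  labels:
    kubernetes.io/service-name: web    # ties the slice back to its Service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 8080
endpoints:
- addresses:
  - "10.244.1.15"
  conditions:
    ready: true
```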
External traffic policy and session affinity
In the context of the CKA exam and Kubernetes Networking, configuring how Services handle incoming traffic is vital for performance, observability, and application logic. Two primary mechanisms control this behavior: External Traffic Policy and Session Affinity.
**External Traffic Policy (`externalTrafficPolicy`)** defines how traffic arriving at a NodePort or LoadBalancer is routed to the actual Pods.
1. **`Cluster` (Default):** Traffic arriving at any node is forwarded to any ready Pod matching the service selector, regardless of which node the Pod is actually running on. This often involves Source Network Address Translation (SNAT), meaning the Pod sees the Node's IP rather than the client's real IP. It creates an extra network hop (latency) but ensures even load balancing across the cluster.
2. **`Local`:** Traffic is only forwarded to Pods running on the specific node that received the request. If the receiving node has no relevant Pods, the traffic is dropped. This setting preserves the client's original Source IP and reduces latency by eliminating the cross-node hop. However, it can result in uneven load distribution if Pods are not spread evenly across nodes.
**Session Affinity (`sessionAffinity`)** controls how consecutive requests from a single client are distributed among available Pods.
1. **`None` (Default):** Requests are distributed using a standard round-robin or random algorithm. There is no guarantee a client will hit the same Pod twice. This is ideal for stateless applications.
2. **`ClientIP`:** This enables "sticky sessions." The Service uses the client's IP address to ensure that all requests from that specific IP are routed to the same Pod for a configurable duration. This is critical for stateful applications where the server stores session data locally and requires continuity.
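Both settings live directly in the Service spec; the sketch below combines them, with the name, labels, ports, and timeout chosen for illustration.
```
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  externalTrafficPolicy: Local    # preserve client IP; only node-local Pods receive traffic
  sessionAffinity: ClientIP       # sticky sessions keyed on the client's IP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600        # stickiness window (the default is 10800 seconds)
```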
Headless services
In the context of the Certified Kubernetes Administrator (CKA) exam and Kubernetes networking, a Headless Service is a specific service configuration defined by explicitly setting the .spec.clusterIP field to "None". Unlike standard services, which assign a single virtual ClusterIP to load-balance traffic across a set of backend Pods, a Headless Service does not provide a single IP address or Layer 4 load balancing via kube-proxy.
The primary mechanism of a Headless Service relies on DNS. When a client application performs a DNS lookup for a Headless Service, the Kubernetes DNS server returns multiple A records (or AAAA records) containing the IP addresses of all the backing Pods matched by the service's selector. This bypasses the standard round-robin load balancing and allows the client to connect directly to a specific Pod IP.
The most critical use case for Headless Services is deploying stateful applications using StatefulSets. Distributed databases and clustered applications (like Cassandra, Kafka, or MongoDB) often require individual nodes to have stable network identities to communicate with specific peers for data replication, sharding, or leader election. By using a Headless Service, each Pod in a StatefulSet gets a predictable and resolvable DNS hostname (e.g., pod-0.service-name.namespace.svc.cluster.local). Additionally, Headless Services can be created without selectors to manually map to external IPs or custom Endpoints, offering flexibility for hybrid cloud scenarios.
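A minimal headless Service sketch, with an illustrative name and port, looks like this:
```
apiVersion: v1
kind: Service
metadata:
  name: db                     # illustrative name
spec:
  clusterIP: None              # 'None' makes the Service headless
  selector:
    app: db
  ports:
  - name: postgres
    port: 5432
```
A StatefulSet that sets `serviceName: db` then gives each replica a stable hostname of the form `db-0.db.<namespace>.svc.cluster.local`.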