In the context of the CKA exam and Kubernetes Workloads & Scheduling, controlling Pod placement is achieved primarily through nodeSelector and Affinity rules.
nodeSelector is the simplest constraint mechanism. It relies on strict equality matching of key-value labels attached to nodes. To use it, you apply labels to nodes (e.g., 'disktype=ssd') and define a matching 'nodeSelector' map in the Pod specification. If the labels match, the Pod can be scheduled; otherwise, it remains pending.
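For example, a minimal sketch (the node name, Pod name, and 'disktype=ssd' label here are illustrative):

kubectl label nodes node01 disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd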
Node Affinity provides a much more expressive language for constraints. Unlike the exact matching of nodeSelector, Affinity supports operators such as In, NotIn, Exists, DoesNotExist, Gt, and Lt. Node affinity comes in two flavors:
1. Hard rules ('requiredDuringSchedulingIgnoredDuringExecution'): These act like a filter; the Pod will not be scheduled unless the rule is met.
2. Soft rules ('preferredDuringSchedulingIgnoredDuringExecution'): These act like a weighting system; the scheduler attempts to find a matching node but will fall back to any available node if the preference cannot be satisfied. A sketch combining both flavors follows this list.
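As a rough sketch (the 'disktype' and 'zone' label keys and their values are illustrative, not standard), a Pod spec combining a hard and a soft rule looks like this:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-a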
Pod Affinity and Anti-Affinity extend this logic by allowing you to constrain where a Pod is scheduled based on the labels of *other Pods* already running in a topology domain (a node, a zone, and so on, as defined by 'topologyKey'), rather than the node's own labels. This is crucial for high availability (spreading Pods across zones using Anti-Affinity) or performance (co-locating chatty microservices using Affinity).
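For instance, a minimal Anti-Affinity sketch (assuming the Pods to spread carry an illustrative 'app=web' label) that prevents two such Pods from landing in the same zone:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: topology.kubernetes.io/zone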
For the exam, remember that 'IgnoredDuringExecution' means changes to labels after a Pod is scheduled do not trigger eviction. You should be comfortable writing the nested YAML syntax under 'spec.affinity' and using 'kubectl label nodes' to set up the environment.
Complete Guide to Node Selectors and Affinity
Why is it important? In a standard Kubernetes cluster, the scheduler determines where to place Pods based on resource availability. However, in production scenarios, hardware is often heterogeneous. You might have nodes with GPUs, nodes with high-performance SSDs, or nodes located in specific availability zones. Node Selectors and Node Affinity are the mechanisms used to constrain a Pod so that it can only run on a particular set of nodes. Understanding this is critical for workload optimization and cost management.
What is it? There are two primary ways to assign Pods to Nodes:
1. nodeSelector: The early, simple method. It relies on exact key-value matching. If a Pod requires disktype: ssd, only nodes with that exact label are eligible.
2. Node Affinity: The advanced method. It supports an expressive matching language (operators like In, NotIn, Exists) and 'soft' rules (preferences versus requirements).
How it works: Both methods rely on Labels attached to Nodes.
The Workflow:
1. You attach a label to a node: kubectl label nodes node01 size=large
2. You define the constraint in the Pod manifest under spec.
Node Selector Syntax:

spec:
  nodeSelector:
    size: large
Node Affinity Syntax: Node affinity introduces two main types:
1. requiredDuringSchedulingIgnoredDuringExecution (Hard rule): The Pod will NOT be scheduled if the rule isn't met (similar to nodeSelector but more flexible).
2. preferredDuringSchedulingIgnoredDuringExecution (Soft rule): The scheduler will try to find a matching node, but if it can't, it will schedule the Pod anywhere.
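Continuing the size=large label from the workflow above, a full hard-rule manifest might look like this (a minimal sketch; the Pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large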
How to answer questions regarding Node selectors and affinity rules in the exam? CKA questions usually ask you to schedule a Pod on a specific node or on a node matching specific criteria.
Step 1: Check existing labels. Run kubectl get nodes --show-labels to see if the target nodes are already labeled.
Step 2: Label the node (if required). If the question asks to place a pod on 'node01', and no unique label exists, add one: kubectl label node node01 env=prod.
Step 3: Generate the YAML. Use dry-run to get a base manifest: kubectl run my-pod --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml.
Step 4: Edit the Manifest. For nodeSelector, simply add the field at the same indentation level as containers. For Affinity, it is safer to copy-paste the syntax from the Kubernetes documentation or use kubectl explain (see tips below) as the indentation and field names are long and complex.
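Assuming the env=prod label from Step 2, the edited manifest would look roughly like this (the dry-run output is trimmed here for brevity):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-pod
  name: my-pod
spec:
  containers:
  - image: nginx
    name: my-pod
  restartPolicy: Never
  nodeSelector:      # added at the same indentation level as containers
    env: prod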
Exam Tips: Answering Questions on Node selectors and affinity rules
1. Don't Memorize, Use 'Explain': The field names for affinity are incredibly long. Instead of memorizing them, run:
kubectl explain pod.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions
This will give you the exact spelling and hierarchy.
2. Copy-Paste from Docs: In the exam, keep the Kubernetes documentation search tab open. Search for 'Node Affinity' and copy the block of YAML. It is faster and less error-prone than typing requiredDuringScheduling... manually.
3. Know the Operators: Understand that In allows you to list multiple values (logical OR), while Exists only checks for the presence of the label key, regardless of its value (see the snippet after this list).
4. Pending Pods: If your Pod remains in a Pending state, it usually means your affinity rules or selectors are too restrictive and no node matches the criteria. Double-check your node labels.
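To illustrate the operators from tip 3 (the 'disktype' and 'gpu' keys are made up for the example):

matchExpressions:
- key: disktype      # logical OR: matches nodes labelled ssd OR nvme
  operator: In
  values:
  - ssd
  - nvme
- key: gpu           # matches any node that has the 'gpu' key, whatever its value
  operator: Exists

And for tip 4, two quick checks that usually reveal the problem (assuming a Pod named my-pod):

kubectl describe pod my-pod     # the Events section shows FailedScheduling and why nodes were rejected
kubectl get nodes --show-labels # verify the labels your rules are matching against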