Azure Load Balancer operates at Layer 4 (Transport Layer) of the OSI model to distribute inbound TCP or UDP traffic flows across a group of backend resources, such as Virtual Machines or Virtual Machine Scale Sets. It is designed to maximize throughput and ensure high availability by automatically routing traffic around failed instances based on configured health probes.
There are two primary configurations:
**Public Load Balancer**: This acts as the gateway for internet traffic. It maps the public IP address and port of incoming traffic to the private IP address and port of the VM in the backend pool. It is used for internet-facing applications, such as web servers. Additionally, it provides outbound connectivity for backend VMs via Source Network Address Translation (SNAT), allowing private resources to access the internet securely without dedicated public IPs.
**Internal (Private) Load Balancer**: This balances traffic only within a virtual network or from linked on-premises networks (via VPN or ExpressRoute). It uses a private IP address from the virtual network's subnet as the frontend. This is standard for multi-tier applications; for example, a frontend web tier communicates with a backend database tier through an internal load balancer, ensuring that sensitive database traffic never traverses the public internet.
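The outbound SNAT behaviour of a public load balancer can be pictured as a port-allocation table that maps each private flow to the shared public IP. The sketch below is a toy model under assumed names (`SnatTable`, the example IPs), not Azure's actual allocator, which preallocates SNAT ports in blocks per backend instance:

```python
# Toy model of Source Network Address Translation (SNAT): backend VMs with no
# public IP of their own share the load balancer's frontend IP for outbound
# traffic, each flow receiving a distinct ephemeral port.

PUBLIC_IP = "203.0.113.10"  # hypothetical frontend public IP


class SnatTable:
    def __init__(self, port_range=range(1024, 65536)):
        self._free_ports = list(port_range)
        self._mappings = {}  # (private_ip, private_port) -> (public_ip, snat_port)

    def translate(self, private_ip, private_port):
        """Return the public (IP, port) pair used for this outbound flow."""
        key = (private_ip, private_port)
        if key not in self._mappings:  # existing flows reuse their mapping
            self._mappings[key] = (PUBLIC_IP, self._free_ports.pop(0))
        return self._mappings[key]


table = SnatTable()
# Two VMs using the same private source port still get distinct SNAT ports:
print(table.translate("10.0.0.4", 50001))
print(table.translate("10.0.0.5", 50001))
```

The key point for the exam is the direction of the mapping: inbound rules translate public to private, while SNAT translates private to public so that return traffic finds its way back to the right VM.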
For the Azure Administrator Associate (AZ-104), it is crucial to distinguish between Basic and Standard SKUs. The Standard SKU is production-ready, supporting Availability Zones, HTTPS health probes, and a secure-by-default model that requires Network Security Groups (NSGs) to permit traffic. Proper implementation involves configuring the Frontend IP, Backend Pool, Health Probes, and Load Balancing Rules to create a fault-tolerant network architecture.
**Azure Load Balancer: Concepts, Configuration, and Exam Strategy for AZ-104**
**Introduction: Why is Azure Load Balancer Important?**
In high-availability cloud architectures, distributing incoming network traffic across multiple servers is crucial to prevent overloading a single resource and to ensure application uptime. The Azure Load Balancer is the fundamental component for distributing traffic at Layer 4 (Transport Layer) of the OSI model. For the AZ-104 exam, understanding this service is vital because it forms the backbone of redundancy and scalability strategies using Virtual Machine Scale Sets and Availability Zones.
**What is Azure Load Balancer?**
Azure Load Balancer operates at Layer 4 (TCP and UDP). It distributes inbound flows that arrive at the load balancer's frontend to backend pool instances according to configured rules and health probes. It is not an application gateway; it does not understand HTTP/HTTPS headers or URLs (Layer 7).
There are two primary SKUs (Stock Keeping Units) you must know for the exam:
1. Basic Load Balancer: Legacy tier with limited features, no SLA, and no support for Availability Zones.
2. Standard Load Balancer: The default for production. It supports HTTPS health probes and Availability Zones, and it is secure by default (traffic is blocked unless allowed by a Network Security Group).
**Public vs. Internal Load Balancers**
Depending on where the traffic originates, you will configure one of two types:
Public Load Balancer: Maps the public IP address and port of incoming traffic to the private IP address and port of the VM that processes the traffic; the response returns to the client via the same path. It is used for internet-facing applications.
Internal (Private) Load Balancer: Distributes traffic between resources inside a virtual network. The frontend IP is a private IP address from your subnet. It is used for internal application tiers (e.g., connecting a web tier to a database tier).
**How it Works: Key Components**
To configure a Load Balancer successfully in an exam scenario, you must understand its four constituent parts:
1. Frontend IP Configuration: The IP address (public or private) to which clients connect.
2. Backend Pool: The group of VMs or Virtual Machine Scale Set instances that will receive the traffic.
3. Health Probes: These dynamically determine the health of the backend instances. If a probe fails (e.g., an HTTP 200 OK is not received), the Load Balancer stops sending new connections to that specific instance.
4. Load Balancing Rules: These bind the frontend IP, the backend pool, and the health probe together, and define the port (e.g., 80) and protocol (TCP/UDP).
Note: You can also use Inbound NAT Rules to port-forward traffic to a specific VM (e.g., RDP on port 3389) without distributing it.
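As a mental model, the four parts above can be wired together in a few lines of Python. This is purely illustrative; the class and field names are assumptions for this sketch and do not correspond to any Azure SDK:

```python
from dataclasses import dataclass


@dataclass
class HealthProbe:
    protocol: str          # "TCP", "HTTP", or "HTTPS" (HTTPS on Standard SKU only)
    port: int
    path: str = "/"        # only meaningful for HTTP(S) probes


@dataclass
class LoadBalancingRule:
    frontend_ip: str       # Frontend IP Configuration that clients connect to
    backend_pool: list     # private IPs of the VMs / scale set instances
    probe: HealthProbe     # health probe bound to this rule
    frontend_port: int = 80
    backend_port: int = 80
    protocol: str = "TCP"

    def healthy_targets(self, probe_results):
        """Keep only backends whose last probe succeeded; failed instances
        receive no new connections."""
        return [ip for ip in self.backend_pool if probe_results.get(ip, False)]


rule = LoadBalancingRule(
    frontend_ip="203.0.113.10",
    backend_pool=["10.0.0.4", "10.0.0.5", "10.0.0.6"],
    probe=HealthProbe(protocol="HTTP", port=80, path="/health"),
)
# VM 10.0.0.5 failed its last probe, so it is taken out of rotation:
print(rule.healthy_targets({"10.0.0.4": True, "10.0.0.5": False, "10.0.0.6": True}))
```

Notice that the rule is the glue: without it, a frontend IP, a backend pool, and a probe exist independently but no traffic flows.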
**Exam Tips: Answering Questions on Azure Load Balancer**
When facing AZ-104 questions regarding Load Balancers, look for these specific keywords and scenarios:
1. Traffic Distribution Mode (Session Persistence): By default, Azure Load Balancer uses a 5-tuple hash (Source IP, Source Port, Destination IP, Destination Port, Protocol). This ensures that traffic is distributed, but it does not guarantee that a specific client always hits the same server. If an exam question asks to ensure a client stays connected to the same backend server (sticky sessions), look for the option to change the distribution mode to Source IP Affinity (2-tuple or 3-tuple).
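The difference between the default 5-tuple hash and Source IP Affinity can be demonstrated with a small simulation. This is a sketch using SHA-256 as a stand-in hash; Azure's actual hash function and the example IPs are not specified here:

```python
import hashlib

BACKENDS = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]


def pick_backend(*flow_fields):
    """Hash the given flow fields and map the digest onto the backend pool."""
    digest = hashlib.sha256("|".join(map(str, flow_fields)).encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]


client_ip, frontend_ip = "198.51.100.7", "203.0.113.10"

# Default 5-tuple hash (src IP, src port, dst IP, dst port, protocol): every new
# connection uses a fresh source port, so the client may land on any backend.
five_tuple = {pick_backend(client_ip, p, frontend_ip, 80, "TCP")
              for p in range(40000, 40050)}

# Source IP Affinity (2-tuple: src IP, dst IP): the source port is excluded
# from the hash, so the same client always reaches the same backend.
two_tuple = {pick_backend(client_ip, frontend_ip) for _ in range(50)}

print(sorted(five_tuple))  # typically more than one backend
print(sorted(two_tuple))   # exactly one backend
```

This is why "sticky sessions" questions point at the distribution mode setting rather than at rules or probes.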
2. Health Probe Failures: If a question describes a scenario where a VM is running but receiving no traffic, check the Health Probe configuration. A common troubleshooting answer involves fixing a misconfigured protocol (e.g., probing TCP when the application needs to be validated over HTTP) or correcting a wrong probe path.
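The troubleshooting logic behind this tip can be sketched as a toy probe checker. The helper name and the dictionary fields are assumptions for illustration; real probes are configured on the load balancer, not written in code:

```python
def probe_succeeds(probe_protocol, backend):
    """Toy health probe: a TCP probe succeeds if the port accepts connections;
    an HTTP probe additionally requires a 200 OK from the probe path."""
    if not backend["port_open"]:
        return False                      # TCP and HTTP probes both fail
    if probe_protocol == "TCP":
        return True                       # TCP probe only checks the handshake
    return backend["http_status"] == 200  # HTTP probe needs a 200 response


# A VM whose web app is broken (returns 500) but whose port is open:
# a TCP probe reports it healthy, so it keeps receiving traffic it cannot
# serve, while an HTTP probe correctly removes it from rotation.
vm = {"port_open": True, "http_status": 500}
print(probe_succeeds("TCP", vm))   # True
print(probe_succeeds("HTTP", vm))  # False
```

The reverse misconfiguration also appears on the exam: an HTTP probe pointed at a wrong path returns 404, so a perfectly healthy VM is marked down and receives no traffic.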
3. Floating IP (Direct Server Return): If the question involves SQL Server Always On availability groups, you almost always need to enable Floating IP. This allows the backend server to accept traffic addressed directly to the frontend IP of the load balancer.
4. SKU Mismatches: Remember that resources must match SKUs. You cannot attach a Basic Public IP to a Standard Load Balancer, and VMs in an Availability Zone require a Standard Load Balancer.
5. Backend Pool Flexibility: For the Standard Load Balancer, the backend pool is flexible; it can mix standalone VMs, Availability Sets, and Scale Set instances within the load balancer's virtual network. Basic Load Balancers are restricted to a single Availability Set or Scale Set.
6. Secure by Default: If you deploy a Standard Load Balancer and traffic fails immediately, check the Network Security Group (NSG). Unlike Basic, Standard Load Balancers require an explicit allow rule in the NSG for traffic to flow.
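The secure-by-default evaluation can be modelled as first-match rule processing with an implicit deny. This is a simplified sketch of NSG semantics (real NSGs also carry built-in default rules, such as one allowing load balancer health probes, which are omitted here):

```python
def nsg_allows(rules, port):
    """Evaluate NSG rules in priority order (lower number wins). If no rule
    matches, internet traffic to a Standard Load Balancer backend is denied."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in (port, "*"):
            return rule["action"] == "Allow"
    return False  # secure by default: no explicit allow, no traffic


# Standard LB deployed with an empty NSG: client traffic on port 80 is dropped
# until an explicit allow rule is added.
print(nsg_allows([], 80))                                                  # False
print(nsg_allows([{"priority": 100, "port": 80, "action": "Allow"}], 80))  # True
```

Contrast this with the Basic SKU, which is open by default, so the same deployment "just works" there, which is exactly the trap such exam questions test.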