Implement and manage virtual networking (AZ-104) — Flashcards
Configure virtual networks and subnets
Configuring virtual networks (VNets) and subnets is a critical task within the Azure Administrator Associate curriculum, serving as the fundamental layer for cloud resource connectivity. An Azure VNet is a logically isolated section of the Azure cloud dedicated to your subscription. When configuring a VNet, the primary requirement is defining an IPv4 or IPv6 address space using Classless Inter-Domain Routing (CIDR) notation (e.g., 10.0.0.0/16). It is vital to ensure these ranges do not overlap with your on-premises infrastructure or other cloud networks, so that peering and hybrid connectivity remain possible later.
Once the VNet is established, the address space is segmented into subnets. Subnets allow for the logical organization of resources and the application of granular security controls. Each subnet must fall within the VNet's address space (e.g., 10.0.1.0/24). Administrators must remember that Azure reserves five IP addresses in every subnet for internal routing and management, which impacts capacity planning.
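The capacity math above can be sketched with Python's standard `ipaddress` module (the address ranges are illustrative examples, not Azure defaults):

```python
import ipaddress

# VNet address space and a subnet carved from it (illustrative ranges)
vnet = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.1.0/24")

# A valid subnet must fall entirely within the VNet's address space
assert subnet.subnet_of(vnet)

# Azure reserves 5 addresses in every subnet (the network address, the
# default gateway, two addresses for Azure DNS, and the broadcast address)
AZURE_RESERVED = 5
usable = subnet.num_addresses - AZURE_RESERVED
print(f"{subnet} provides {usable} usable addresses")  # 251 for a /24
```

This is why a /24 subnet yields 251 assignable addresses rather than the 254 you would expect on-premises.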
Security and routing are managed at the subnet level. Administrators deploy Network Security Groups (NSGs) associated with subnets to filter inbound and outbound traffic based on rules defined by source/destination IP, port, and protocol. Furthermore, specific architectural roles require dedicated subnets with immutable names, such as 'GatewaySubnet' for Virtual Network Gateways or 'AzureBastionSubnet' for secure remote access.
Finally, VNet configuration includes defining DNS settings. While Azure provides default name resolution, enterprise environments often require configuring custom DNS servers on the VNet to handle hybrid name resolution. Proper configuration of VNets and subnets ensures a precise balance of connectivity, isolation, and security for virtual machines and PaaS services.
Virtual network peering
In Azure, Virtual Network (VNet) peering is a fundamental networking capability that enables you to connect two separate virtual networks. Once peered, the virtual networks appear as one for connectivity purposes. Traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, rather than a gateway or the public internet. This ensures low latency, high bandwidth, and enhanced security by keeping data entirely within the private Azure network.
There are two types of peering: Regional VNet Peering, which connects VNets within the same Azure region, and Global VNet Peering, which connects VNets across different Azure regions. This is crucial for building geo-redundant applications and disaster recovery strategies.
Peering is essential for implementing a Hub-and-Spoke network topology. Through a feature called 'gateway transit,' peered networks (spokes) can use the VPN gateway in the hub network to connect to on-premises infrastructure, eliminating the need to deploy expensive gateways in every VNet.
Key administration constraints to remember are that the IP address spaces of the two VNets must not overlap, and peering is non-transitive (if VNet A peers with B, and B peers with C, A does not automatically connect to C). Peering can be established without downtime to the resources in either virtual network.
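The non-overlap constraint can be checked before requesting a peering; here is a sketch using Python's `ipaddress` module (the example ranges are hypothetical):

```python
import ipaddress

def can_peer(space_a: str, space_b: str) -> bool:
    """Peering requires that the two VNets' address spaces do not overlap."""
    a = ipaddress.ip_network(space_a)
    b = ipaddress.ip_network(space_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: disjoint, peering is possible
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False: the ranges overlap
```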
Configure public IP addresses
In the context of the Azure Administrator Associate certification, configuring Public IP addresses is a critical component of managing virtual networking. A Public IP address in Azure is a standalone resource that enables inbound communication from the internet to Azure resources—such as Virtual Machines (VMs), Load Balancers, and VPN Gateways—and outbound connectivity to the internet.
When configuring a Public IP, the primary decision involves selecting the **SKU (Stock Keeping Unit)**, which determines capabilities:
1. **Basic SKU:** This legacy tier allows both Dynamic and Static IP assignment. Dynamic IPs may change when a resource is stopped and started. Basic SKUs do not support Availability Zones and are open by default (traffic is allowed even without a Network Security Group), making them less secure.
2. **Standard SKU:** This is the production standard. It uses Static assignment (the IP never changes) and supports zone redundancy for high availability. Crucially, Standard IPs are 'secure by default,' meaning they act as a closed firewall until you explicitly associate an NSG to allow traffic.
Public IPs are not permanently bound to the hardware; they are associated via software configurations. For a VM, the Public IP connects to the Network Interface (NIC). For Load Balancers and Application Gateways, it acts as the frontend IP configuration.
Administrators can also configure **DNS name labels** (e.g., `myapp.eastus.cloudapp.azure.com`) for easier access or use **Public IP Prefixes** to reserve a contiguous range of addresses. Utilizing prefixes ensures you have a predictable block of IPs, simplifying firewall allow-list configurations for external partners.
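To illustrate what a prefix reserves, here is a sketch using Python's `ipaddress` module; the prefix `20.50.100.0/28` is a made-up example, not a real allocation:

```python
import ipaddress

# A hypothetical /28 Public IP Prefix reserves 16 contiguous public addresses,
# giving external partners a single predictable block to allow-list
prefix = ipaddress.ip_network("20.50.100.0/28")
addresses = list(prefix)
print(len(addresses))                # 16
print(addresses[0], addresses[-1])   # 20.50.100.0 20.50.100.15
```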
User-defined network routes (UDR)
In Azure Virtual Networks (VNets), the platform automatically generates default system routes to facilitate communication between subnets, the Internet, and connected on-premises networks. However, specific security or architectural scenarios require overriding these defaults. This is achievable through User-Defined Routes (UDRs), configured within Azure Route Tables.
A UDR allows an Azure Administrator to steer traffic flows precisely. By associating a Route Table with a specific subnet, you force traffic leaving that subnet to traverse a specific path rather than going directly to the destination. The most common implementation is routing traffic through a Network Virtual Appliance (NVA), such as a firewall, for packet inspection and filtering.
When configuring a UDR, you define a destination CIDR block and a 'Next hop type.' Critical next hop types include:
• **Virtual Appliance:** Routes traffic to a specific IP address (e.g., a firewall VM).
• **Virtual Network Gateway:** Routes traffic to on-premises via VPN or ExpressRoute.
• **Internet:** Explicitly routes traffic externally.
• **None:** Drops traffic intended for the destination address.
Azure determines which route to use based on the Longest Prefix Match (LPM) algorithm: the route with the most specific (longest) address prefix is selected. If the prefix lengths are identical, User-Defined Routes take precedence over default System Routes.
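The selection logic can be sketched as a simplified model in Python (the route entries are illustrative; real Azure routing also weighs BGP routes and more next hop types):

```python
import ipaddress

def select_route(routes, destination):
    """Pick the matching route with the longest prefix; on a tie,
    a user-defined route beats a system route."""
    dest = ipaddress.ip_address(destination)
    candidates = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    # Sort by prefix length (longest first), then prefer UDRs over system routes
    candidates.sort(key=lambda r: (ipaddress.ip_network(r["prefix"]).prefixlen,
                                   r["source"] == "user"), reverse=True)
    return candidates[0] if candidates else None

routes = [
    {"prefix": "0.0.0.0/0",   "next_hop": "Internet",         "source": "system"},
    {"prefix": "10.0.0.0/16", "next_hop": "VirtualNetwork",   "source": "system"},
    {"prefix": "10.0.2.0/24", "next_hop": "VirtualAppliance", "source": "user"},
]

print(select_route(routes, "10.0.2.5")["next_hop"])  # VirtualAppliance (/24 wins)
print(select_route(routes, "10.0.9.9")["next_hop"])  # VirtualNetwork (/16 wins)
print(select_route(routes, "8.8.8.8")["next_hop"])   # Internet (default route)
```

Note how the UDR's /24 hijacks only the traffic to that one subnet, leaving the rest of the VNet on the system routes.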
For the AZ-104 exam, it is vital to remember that UDRs are applied at the subnet level, not the network interface level, and they are essential for implementing forced tunneling and secure hub-and-spoke topologies.
Troubleshoot network connectivity
Troubleshooting network connectivity within the context of the Azure Administrator Associate (AZ-104) exam focuses on diagnosing communication interruptions between Azure resources, on-premises networks, and the internet. The primary suite of tools used for this purpose is **Azure Network Watcher**, which is enabled on a per-region basis.
When detailed diagnosis is required, administrators should start with **IP Flow Verify**. This tool checks if a packet is allowed or denied to/from a specific Virtual Machine based on the effective Network Security Group (NSG) rules. It quickly identifies if a firewall rule is the root cause of a blockage.
If security rules are configured correctly, the issue may be routing. The **Next Hop** tool helps determine where traffic destined for a specific IP is being sent (e.g., Internet, Virtual Network, Virtual Appliance, or None). This is essential for troubleshooting misconfigured User Defined Routes (UDRs) that might be blackholing traffic.
For a holistic view, **Connection Troubleshoot** tests the connectivity between a source VM and a destination (another VM, FQDN, or IP). It validates the path, checks for latency, and identifies the reachable status hop-by-hop. Specific to hybrid connectivity, **VPN Troubleshoot** gathers health diagnostics for Virtual Network Gateways and connections.
Administrators must also consider **NSG Flow Logs** and **Traffic Analytics** for historical data on traffic patterns, which aid in spotting intermittent issues. Finally, outside of Azure-specific tools, one must verify **OS-level firewalls** (like Windows Firewall or iptables) and **VNet DNS settings**; these are common points of failure because Azure's network tools can report the path as healthy even while the application connection fails.
Network Security Groups (NSGs) and ASGs
In the context of the Azure Administrator Associate certification, Network Security Groups (NSGs) and Application Security Groups (ASGs) are fundamental components for securing virtual networks.
**Network Security Groups (NSGs)** act as the primary stateful packet filtering firewall for Azure Virtual Networks to control traffic flow. An NSG contains a list of access control rules that allow or deny inbound or outbound traffic based on the 5-tuple information: source IP, source port, destination IP, destination port, and protocol. These rules are processed in priority order, with lower numbers taking precedence. While NSGs can be associated with individual Network Interfaces (NICs), it is generally best practice to associate them with Subnets to enforce a standardized security policy across all resources within that segment.
**Application Security Groups (ASGs)** are used to simplify the management of these NSG rules by abstracting specific IP addresses. ASGs allow you to group virtual machines (via their NICs) based on their application workload or function, such as 'WebServers' or 'DatabaseClusters,' regardless of their network topology.
**Combined Implementation**: Without ASGs, scaling an application requires manually updating NSG rules with the IP addresses of every new server. With ASGs, you define an NSG rule once—for example, 'Allow HTTPS to Destination: WebServer-ASG'. When you deploy new VMs, you simply associate their NICs with the 'WebServer-ASG', and the security rules are automatically applied. This approach decouples security definitions from static IP addresses, streamlining network segmentation and significantly reducing administrative overhead and the risk of misconfiguration.
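A simplified model of this evaluation is sketched below; the ASG name, IPs, and rules are all hypothetical, and real NSG rules match the full 5-tuple plus direction, whereas this sketch checks only destination and port:

```python
# Hypothetical ASG membership: NICs joined to the group, not hard-coded in rules
ASG_MEMBERS = {"WebServer-ASG": {"10.0.1.4", "10.0.1.5"}}

rules = [  # evaluated in ascending priority order; first match wins
    {"priority": 100,  "action": "Allow", "dest_asg": "WebServer-ASG", "port": 443},
    {"priority": 4096, "action": "Deny",  "dest_asg": None,            "port": None},  # catch-all
]

def evaluate(dest_ip, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        in_asg = rule["dest_asg"] is None or dest_ip in ASG_MEMBERS[rule["dest_asg"]]
        port_ok = rule["port"] is None or rule["port"] == port
        if in_asg and port_ok:
            return rule["action"]
    return "Deny"

print(evaluate("10.0.1.4", 443))  # Allow: ASG member receiving HTTPS
print(evaluate("10.0.1.4", 22))   # Deny: SSH is not permitted
print(evaluate("10.0.9.9", 443))  # Deny: not a member of the ASG
```

Scaling out is then just adding the new VM's IP to `ASG_MEMBERS`; the rule itself never changes.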
Azure Bastion
Azure Bastion is a fully managed Platform-as-a-Service (PaaS) offering deployed within Microsoft Azure that provides secure and seamless RDP (Remote Desktop Protocol) and SSH (Secure Shell) access to your virtual machines. It is a critical component for implementing secure virtual networking, designed specifically to minimize the attack surface of your infrastructure by removing the need for public IP addresses on individual VMs.
Traditionally, administrators accessed VMs by assigning public IPs or maintaining dedicated 'jump box' servers. These methods present security risks; public IPs expose management ports to the open internet, inviting port scanning and brute-force attacks, while jump boxes require constant OS patching and maintenance. Azure Bastion eliminates these issues by functioning as a hardened gateway provisioned inside your Virtual Network (VNet), specifically within a dedicated subnet named 'AzureBastionSubnet'.
When using Azure Bastion, you connect to your VMs directly through the Azure portal using a modern HTML5-based web browser. The traffic is encapsulated over SSL/TLS (port 443), ensuring the session is encrypted and traverses firewalls easily. Because the Bastion host sits within your VNet, it connects to your target VMs using their private IP addresses. Consequently, your VMs remain hidden from the public internet.
Key benefits include: enhanced security (protecting against malware targeting RDP/SSH ports), zero maintenance (Microsoft handles patching, scaling, and hardening), and ease of management. Furthermore, it supports VNet peering, allowing a single Bastion deployment to manage VMs across peered networks in a hub-and-spoke topology, effectively centralizing secure access control within your Azure environment.
Service endpoints for Azure PaaS
Azure Virtual Network (VNet) Service Endpoints significantly enhance network security by optimizing the connectivity between your virtual networks and Azure Platform-as-a-Service (PaaS) resources, such as Azure Storage, Azure SQL Database, and Azure Key Vault. In the context of Azure administration, implementing Service Endpoints allows you to extend your VNet's private identity and address space directly to the intended Azure services.
When you enable a Service Endpoint on a specific subnet, the traffic route changes immediately. Instead of traversing the public internet to reach the public IP of the PaaS resource, the traffic flows entirely over the Microsoft Azure backbone network. This direct route keeps critical data traffic off the public internet, reducing latency and exposure to external threats.
The most significant advantage is the ability to lock down PaaS resources. Once the endpoint is active, you can configure the firewall of the Azure resource to deny all public internet traffic and only allow traffic originating from your specific VNet subnet. This essentially creates a secure network boundary around your cloud resources without requiring you to provision a dedicated private IP address (as is the case with Azure Private Link).
Furthermore, Service Endpoints simplify network architecture. They remove the need for Network Address Translation (NAT) or gateway devices for your VNet to access these services. The route table is automatically updated with 'VirtualNetworkServiceEndpoint' as the next hop type for the specific service traffic. This ensures that Azure Administrators can maintain high security and optimal routing performance with minimal management overhead.
Private endpoints for Azure PaaS
In the context of Azure networking, a Private Endpoint is a network interface responsible for connecting your virtual network (VNet) privately and securely to a service powered by Azure Private Link. By utilizing a private IP address from your VNet's specific address space, the Private Endpoint effectively brings the PaaS service (such as Azure Storage, Azure SQL Database, or Key Vault) into your virtual network topology.
The critical architectural benefit for an Azure Administrator is security. Traditional PaaS connectivity often relies on public endpoints accessible via the internet. In contrast, Private Endpoints ensure that traffic between your virtual network and the service travels exclusively across the Microsoft backbone network, completely bypassing the public internet. This architecture removes the need for public IP addresses, NAT devices, or strict firewall whitelisting on public endpoints, thereby mitigating risks associated with data exfiltration and reducing the attack surface.
Implementation requires careful DNS management. While the PaaS resource enables private access, it retains its public FQDN. To ensure clients resolve the service name to the new private IP rather than the public IP, administrators must configure a Private DNS Zone or utilize custom DNS servers. This ensures seamless connectivity for clients without changing connection strings.
Additionally, Private Endpoints support hybrid connectivity. Clients located on-premises can access these Azure PaaS resources over ExpressRoute or VPN tunnels using the private IP address, facilitating secure, compliant hybrid cloud scenarios. This feature is essential for enterprise environments requiring strict network isolation and adherence to Zero Trust principles.
Configure Azure DNS
Configuring Azure DNS involves setting up a hosting service for DNS domains that provides name resolution using Microsoft Azure’s global infrastructure. In the context of the Azure Administrator Associate certification, this primarily revolves around managing Public DNS Zones and Private DNS Zones to ensure connectivity.
To configure a Public DNS Zone, you create a resource for your domain (e.g., contoso.com) within the Azure Portal. Azure assigns name servers to this zone. You must then configure delegation by updating the records at your domain registrar to point to these Azure name servers. Once delegated, you manage standard record sets (A, AAAA, CNAME, MX) to resolve internet queries to your public Azure resources.
For internal networking, you configure Azure Private DNS Zones. This feature provides a reliable, secure DNS service to manage and resolve domain names in a Virtual Network (VNet) without needing to add a custom DNS solution. You create a private zone (e.g., private.contoso.com) and link it to your VNets. A critical configuration option here is 'Auto-registration.' When enabled on a VNet link, virtual machines deployed in that VNet automatically create and update their DNS records in the private zone, drastically reducing manual administrative overhead.
Additionally, Azure DNS supports Alias Record Sets, which allow you to refer to a specific Azure resource (like a Public IP or Traffic Manager profile) seamlessly. If the underlying IP of the resource changes, the DNS record updates automatically. Security is managed via Azure Resource Manager (ARM), allowing you to apply Role-Based Access Control (RBAC) to restrict who can create or modify these DNS records.
Azure Load Balancer (Internal and Public)
Azure Load Balancer operates at Layer 4 (Transport Layer) of the OSI model to distribute inbound TCP or UDP traffic flows across a group of backend resources, such as Virtual Machines or Virtual Machine Scale Sets. It is designed to maximize throughput and ensure high availability by automatically routing traffic around failed instances based on configured health probes.
There are two primary configurations:
**Public Load Balancer**: This acts as the gateway for internet traffic. It maps the public IP address and port of incoming traffic to the private IP address and port of the VM in the backend pool. It is used for internet-facing applications, such as web servers. Additionally, it provides outbound connectivity for backend VMs via Source Network Address Translation (SNAT), allowing private resources to access the internet securely without dedicated public IPs.
**Internal (Private) Load Balancer**: This balances traffic only within a virtual network or from linked on-premises networks (via VPN or ExpressRoute). It uses a private IP address from the virtual network's subnet as the frontend. This is standard for multi-tier applications; for example, a frontend web tier communicates with a backend database tier through an internal load balancer, ensuring that sensitive database traffic never traverses the public internet.
For the Azure Administrator Associate (AZ-104), it is crucial to distinguish between Basic and Standard SKUs. The Standard SKU is production-ready, supporting Availability Zones, HTTPS health probes, and a secure-by-default model that requires Network Security Groups (NSGs) to permit traffic. Proper implementation involves configuring the Frontend IP, Backend Pool, Health Probes, and Load Balancing Rules to create a fault-tolerant network architecture.
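The flow distribution idea can be sketched as a 5-tuple hash (a simplified model; Azure's actual hashing algorithm is internal, and the backend addresses here are hypothetical):

```python
import hashlib

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]  # healthy backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto):
    """Hash the 5-tuple so a given flow always lands on the same backend."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

# The same flow always maps to the same backend; a different source port
# constitutes a new flow and may land elsewhere in the pool
flow = ("203.0.113.7", 50123, "20.50.100.4", 443, "TCP")
assert pick_backend(*flow) == pick_backend(*flow)
print(pick_backend(*flow) in backends)  # True
```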
Troubleshoot load balancing
To effectively troubleshoot Azure Load Balancer issues within the context of the Azure Administrator Associate certification, follow a structured approach focusing on connectivity, configuration, and health monitoring.
First, verify the **Health Probes**. The load balancer relies on these probes to determine which backend instances are healthy. If a probe fails, the load balancer stops sending new flows to that specific instance. Check that the configured protocol (TCP/HTTP/HTTPS) and port match the application listening port on the VM. For HTTP/HTTPS probes, ensure the specific URL path is valid and returns a 200 OK status.
Second, examine **Data Path Connectivity**. Use Azure Monitor metrics to analyze 'Data Path Availability' and 'Health Probe Status'. If the availability is zero, validate the backend pool configuration. Ensure the VMs or Virtual Machine Scale Sets are in a 'Running' state and their network interfaces are correctly associated.
Third, investigate **Network Security Groups (NSGs)** and **Firewalls**. Traffic must be allowed on the backend subnet NSG for both the service port and the probe port. A critical step is identifying the default source tag `AzureLoadBalancer` in the NSG inbound rules; if this is blocked, probes will fail regardless of application health. Additionally, check the VM's local OS firewall (Windows Firewall or iptables) to ensure it allows ingress traffic on the probe port.
Fourth, analyze **SKU and VNet Configuration**. Mismatches between Basic and Standard SKUs are common pitfalls; a Load Balancer and its Public IP must share the same SKU type. Ensure all backend instances reside in the same Virtual Network.
Finally, utilize **Diagnostics and Logging**. Enable diagnostic settings to send logs to a Log Analytics workspace. Query `LoadBalancerProbeHealthStatus` events to pinpoint failing instances. Use Network Watcher tools like **IP Flow Verify** to confirm rule precedence is not inadvertently dropping traffic.
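The probe behavior described in the first step can be modeled roughly as follows; the failure threshold here is a hypothetical value, since the real 'unhealthy threshold' is configurable per probe:

```python
# Sketch of health-probe logic: an instance is taken out of rotation after
# a number of consecutive probe failures, and restored once probes succeed.
FAILURE_THRESHOLD = 2  # hypothetical threshold for this sketch

def probe_state(responses, threshold=FAILURE_THRESHOLD):
    """responses: sequence of HTTP status codes returned by the probe path."""
    consecutive_failures = 0
    for status in responses:
        if status == 200:            # only 200 OK counts as a healthy probe
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= threshold:
                return "Unhealthy"
    return "Healthy"

print(probe_state([200, 200, 200]))       # Healthy
print(probe_state([200, 503, 503, 200]))  # Unhealthy: two consecutive failures
```

This is why a probe path returning 302 redirects or 403s, rather than 200 OK, silently removes an otherwise working VM from the pool.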