Learn "Deploy and manage Azure compute resources" (AZ-104) with Interactive Flashcards

Master key concepts in "Deploy and manage Azure compute resources" through our interactive flashcard system. Click on each card to reveal detailed explanations and enhance your understanding.

Azure Resource Manager (ARM) templates

In the context of the Azure Administrator Associate certification, specifically regarding the deployment and management of compute resources, Azure Resource Manager (ARM) templates serve as the native Infrastructure as Code (IaC) solution for the platform. They allow administrators to define the infrastructure and configuration of their Azure solution using a declarative JSON (JavaScript Object Notation) syntax.

Unlike imperative scripts (such as PowerShell or Azure CLI) that require a step-by-step sequence of commands to execute tasks, ARM templates allow you to define 'what' you want to deploy—such as a specific SKU of a Virtual Machine, a Virtual Network, and a Load Balancer—without detailing 'how' to create them. The Resource Manager orchestration engine parses the template, resolves dependencies (e.g., ensuring a network interface is created before the VM that uses it), and provisions resources in parallel where possible.

Structurally, a template includes schema, parameters (for dynamic inputs), variables (for internal reuse), resources (the actual Azure components), and outputs. For an Azure Administrator, using ARM templates provides distinct operational benefits:

1. Idempotency: This is crucial for administration. You can deploy the same template repeatedly to the same resource group. If resources already exist and match the defined state, no changes are made; if configurations differ, they are updated. This ensures environmental consistency across Development, Test, and Production.
2. Validation: The platform validates the template before the actual deployment begins, preventing costly provisioning errors.
3. Automation: Templates are text files, allowing them to be version-controlled in repositories (like Azure DevOps or GitHub). This facilitates automated pipelines where infrastructure changes are code-reviewed and deployed programmatically.
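As an illustration of the structure described above, here is a minimal template skeleton. All names are placeholders, and the `apiVersion` is illustrative and may need adjusting to a version available in your subscription:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": { "type": "string", "defaultValue": "[resourceGroup().location]" }
  },
  "variables": { "pipName": "demo-pip" },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2023-04-01",
      "name": "[variables('pipName')]",
      "location": "[parameters('location')]",
      "sku": { "name": "Standard" }
    }
  ],
  "outputs": {
    "pipId": {
      "type": "string",
      "value": "[resourceId('Microsoft.Network/publicIPAddresses', variables('pipName'))]"
    }
  }
}
```

Deploying this file twice against the same resource group demonstrates the idempotency described above: the second run detects no differences and makes no changes.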

Ultimately, mastering ARM templates transforms manual VM creation into a streamlined, repeatable code-based process, essential for efficiently managing scale in cloud computing environments.

Bicep files

Bicep is a Domain-Specific Language (DSL) developed by Microsoft to simplify the deployment of Azure resources through Infrastructure as Code (IaC). In the context of the Azure Administrator Associate (AZ-104) certification, specifically regarding the deployment and management of compute resources, Bicep serves as a modern evolution of Azure Resource Manager (ARM) templates. Unlike the verbose and complex JSON syntax required by traditional ARM templates, Bicep offers a much cleaner, concise, and readable syntax while maintaining full feature parity with the underlying platform.

When an Azure Administrator deploys compute resources like Virtual Machines, Virtual Machine Scale Sets, or App Services, using Bicep allows for consistent and repeatable deployments. The code defines the desired state of the infrastructure (e.g., VM size, image, networking config), and Azure handles the creation or update process. Bicep files (.bicep) are transparently transpiled into standard ARM template JSON files during deployment, meaning they run on the proven ARM engine.
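For comparison with raw ARM JSON, a minimal sketch of a `.bicep` file follows. Names and the API version are illustrative placeholders:

```bicep
// Placeholder names; this file is transpiled to ARM JSON at deployment time.
param location string = resourceGroup().location

resource pip 'Microsoft.Network/publicIPAddresses@2023-04-01' = {
  name: 'demo-pip'
  location: location
  sku: {
    name: 'Standard'
  }
}

output pipId string = pip.id
```

The same declaration in JSON would require schema, contentVersion, and bracketed expression syntax, which is exactly the verbosity Bicep removes.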

Key advantages for administrators include improved modularity, which allows you to break complex templates into smaller, reusable files, and a superior authoring experience via the Bicep extension for Visual Studio Code, which provides IntelliSense and validation. Bicep also simplifies the definition of dependencies between resources, such as ensuring a Network Interface Card (NIC) is created before the Virtual Machine it attaches to. For AZ-104, understanding Bicep is crucial for automating administrative tasks, preventing configuration drift, and integrating deployments into CI/CD pipelines, ensuring that compute environments are deployed identically across development, testing, and production without manual intervention.

Create and configure virtual machines

Creating and configuring Azure Virtual Machines (VMs) is a fundamental skill in the "Deploy and manage Azure compute resources" domain of the AZ-104 certification. It requires moving beyond simple deployment to mastering Infrastructure-as-a-Service (IaaS) architecture. Administrators must choose the optimal deployment method, utilizing the Azure Portal for ad-hoc resources or leveraging automation tools like Azure PowerShell, CLI, and ARM templates/Bicep for reproducible infrastructure.
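As a sketch of the CLI route, the following creates a Linux VM using placeholder names (`rg-demo`, `vm-web01`); the image alias and size are illustrative:

```bash
# Assumes resource group rg-demo already exists; all names are placeholders.
az vm create \
  --resource-group rg-demo \
  --name vm-web01 \
  --image Ubuntu2204 \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --generate-ssh-keys
```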

Configuration begins with High Availability strategies. Administrators must configure Availability Zones to protect against datacenter-level failures or Availability Sets to isolate VMs across Fault Domains (power/network hardware) and Update Domains within a specific datacenter. Selecting the correct VM image (Windows/Linux) and Size (SKU) is critical to balance performance and cost, covering General Purpose, Compute Optimized, or Memory Optimized workloads.

Storage and networking are pivotal configuration aspects. You must configure managed disks, selecting the appropriate tier (Standard HDD to Ultra Disk) based on IOPS and throughput requirements. Networking involves associating the VM with a Virtual Network (VNet) and Subnet, configuring Network Interface Cards (NICs), and managing Public IP addresses. Crucially, Network Security Groups (NSGs) and Application Security Groups (ASGs) must be defined to strictly control inbound and outbound traffic flow.

Finally, configuration management extends to post-deployment actions. Administrators utilize VM Extensions to inject scripts (Custom Script Extension) or monitoring agents, and leverage Cloud-init (Linux) or User Data for boot-time configuration. Secure management is enforced via Azure Bastion to enable RDP/SSH access without exposing public IPs, or by integrating with Microsoft Entra ID (formerly Azure AD) for identity-based login. Mastering these elements ensures scalable, secure, and resilient compute resources.

Configure Azure Disk Encryption

Azure Disk Encryption (ADE) is a mechanism provided by Microsoft to encrypt the Operating System (OS) and data disks of Azure Virtual Machines (VMs). It helps organizations meet security and compliance commitments by protecting data at rest. ADE utilizes industry-standard features of the guest operating system: BitLocker for Windows and DM-Crypt for Linux.

The configuration process heavily relies on Azure Key Vault (AKV). To configure ADE, you must first have a Key Vault instance created to control and manage the disk encryption keys and secrets. A specific access policy setting, 'Enabled for Disk Encryption,' must be checked on the Key Vault to allow the platform to retrieve secrets on behalf of the VM.

When configuring ADE, you have the option to use a Key Encryption Key (KEK). A KEK adds a layer of security by wrapping the encryption secrets before writing them to the Key Vault. If you do not use a KEK, the encryption secret is stored directly in the Key Vault.

Implementation is typically performed via Azure PowerShell (`Set-AzVMDiskEncryptionExtension`), Azure CLI (`az vm encryption enable`), or the Azure Portal. When the command runs, an extension is installed on the VM that initiates the encryption inside the guest OS. While encrypting data disks can often be done while the VM is running, encrypting the OS disk usually requires a reboot. Once enabled, the VM disks are encrypted, preventing unauthorized access even if the VHD files are downloaded or copied. It is highly recommended to take a backup or snapshot of the VM before enabling encryption to ensure a recovery point exists.
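A minimal CLI sketch of that flow, assuming placeholder names and an existing VM (the Key Vault must be in the same region as the VM):

```bash
# Create a Key Vault with the 'Enabled for disk encryption' setting,
# then enable ADE on both OS and data disks (names are placeholders).
az keyvault create --resource-group rg-demo --name kv-ade-demo \
  --enabled-for-disk-encryption true
az vm encryption enable \
  --resource-group rg-demo --name vm-web01 \
  --disk-encryption-keyvault kv-ade-demo \
  --volume-type ALL
```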

Manage VM sizes and disks

Managing VM sizes and disks in Azure is critical for optimizing performance and cost within the 'Deploy and manage Azure compute resources' domain.

**VM Sizes** determine the compute power (vCPU) and memory (RAM) available to a virtual machine. Azure categorizes sizes into families tailored for specific workloads: **General Purpose** (e.g., D-series) for balanced tasks, **Compute Optimized** (e.g., F-series) for high CPU processing, **Memory Optimized** (e.g., E-series) for databases, and specialized series for GPU or storage-heavy needs. As an administrator, you must select the right size during deployment but can resize (vertical scaling) later. Resizing restarts the VM; if the new size is not available on the current underlying hardware cluster, the VM must first be deallocated.
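For example, a hedged CLI sketch of a resize using placeholder names; checking availability first shows whether deallocation will be needed:

```bash
# Sizes not listed here require deallocating the VM before resizing.
az vm list-vm-resize-options --resource-group rg-demo --name vm-web01 --output table
az vm resize --resource-group rg-demo --name vm-web01 --size Standard_D4s_v3
```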

**VM Disks** are the storage backbone, primarily utilizing **Azure Managed Disks**, which abstract the underlying storage accounts for high availability. VMs utilize an **OS Disk** (boot drive) and optional **Data Disks** for application storage. Administrators must choose the appropriate disk tier based on IOPS and throughput needs:
1. **Ultra Disk:** Sub-millisecond latency for mission-critical I/O.
2. **Premium SSD:** High performance for production workloads.
3. **Standard SSD:** Reliable performance for web servers or dev/test environments.
4. **Standard HDD:** Cost-effective magnetic storage for backups.

Management tasks include adding data disks, expanding existing disk capacities (often achievable without downtime depending on the OS), and configuring security. Data protection is enforced via Server-Side Encryption (SSE) using Platform-Managed Keys (PMK) by default, or Customer-Managed Keys (CMK), and Azure Disk Encryption (ADE) for OS-level encryption using BitLocker or DM-Crypt.
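An illustrative sketch of those tasks with placeholder names; whether the expansion requires stopping the VM depends on the disk type and OS:

```bash
# Attach a new 128 GiB Premium SSD data disk, then grow it to 256 GiB later.
az vm disk attach --resource-group rg-demo --vm-name vm-web01 \
  --name datadisk01 --new --size-gb 128 --sku Premium_LRS
az disk update --resource-group rg-demo --name datadisk01 --size-gb 256
```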

Availability zones and availability sets

In the context of Azure compute resources, ensuring High Availability (HA) is crucial to protect Virtual Machines (VMs) from downtime. Two primary configurations achieve this: **Availability Sets** and **Availability Zones**.

**Availability Sets** provide logical grouping of VMs within a **single datacenter**. They protect against localized hardware failures and maintenance events using two mechanisms: **Fault Domains** (groups of hardware sharing a common power source and network switch, effectively a server rack) and **Update Domains** (groups of VMs rebooted together during patching). By distributing VMs across these domains, Azure ensures that a single rack failure or maintenance update doesn't take down all your instances simultaneously. This configuration typically offers a 99.95% SLA.

**Availability Zones** provide a higher level of resilience by protecting against **entire datacenter failures**. A Zone is a physically separate datacenter within an Azure Region, possessing independent power, cooling, and networking. If a disaster (such as fire or flood) impacts one specific datacenter, VMs running in a different Zone within the same region remain operational. Because of this physical separation, distributing VMs across two or more Availability Zones typically offers a 99.99% SLA.
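A brief CLI sketch of both options, using placeholder names (the zone number simply pins the VM to one physical datacenter in the region):

```bash
# Availability set: spreads VMs across fault and update domains in one datacenter.
az vm availability-set create --resource-group rg-demo --name avset-web \
  --platform-fault-domain-count 2 --platform-update-domain-count 5
# Zonal VM: pinned to Availability Zone 1 of the region.
az vm create --resource-group rg-demo --name vm-zonal01 \
  --image Ubuntu2204 --zone 1 --admin-username azureuser --generate-ssh-keys
```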

In summary, select Availability Sets for redundancy within a specific building (protecting against rack failures), and select Availability Zones for redundancy across different buildings (protecting against site-level failures).

Azure Virtual Machine Scale Sets

In the context of the Azure Administrator Associate (AZ-104) certification, Azure Virtual Machine Scale Sets (VMSS) are a critical tool for deploying and managing compute resources effectively. VMSS allows administrators to create and manage a group of load-balanced Virtual Machines (VMs). The core value proposition of VMSS is automation and elasticity; the number of VM instances can automatically increase (scale-out) or decrease (scale-in) in response to demand or a defined schedule.

For an Azure Administrator, VMSS simplifies the management of large-scale applications. Instead of manually provisioning and patching individual VMs, you define a single configuration profile (including the OS image, VM size, and networking). Azure applies this profile to all instances, ensuring consistency.

Key concepts to master include:

1. **Autoscaling:** You configure scaling profiles based on metrics (e.g., if CPU usage exceeds 75%, add two instances). This ensures performance during spikes and cost optimization during lulls.
2. **High Availability:** VMSS integrates tightly with Azure Load Balancer or Application Gateway. To ensure resiliency, you can distribute instances across multiple Availability Zones (different datacenters within a region) or Fault Domains.
3. **Orchestration Modes:** Administrators must understand 'Uniform' mode (optimized for large-scale, identical, stateless workloads) versus 'Flexible' mode (which offers high availability at scale with potentially different VM types, functioning more like standard VMs).
4. **Lifecycle Management:** VMSS provides automated upgrade policies (Manual, Automatic, or Rolling) to handle OS updates without causing application downtime.

Ultimately, VMSS is the standard solution for workloads that require high availability and automatic scaling without the manual overhead of managing individual servers.
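A hedged sketch of the autoscale rule described in point 1 above, applied to an existing scale set (`vmss-web` and other names are placeholders):

```bash
# Autoscale profile: keep between 2 and 10 instances, starting at 2.
az monitor autoscale create --resource-group rg-demo \
  --resource vmss-web --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name autoscale-web --min-count 2 --max-count 10 --count 2
# Rule: if average CPU exceeds 75% over 5 minutes, add two instances.
az monitor autoscale rule create --resource-group rg-demo \
  --autoscale-name autoscale-web \
  --condition "Percentage CPU > 75 avg 5m" --scale out 2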

Azure Container Registry

Azure Container Registry (ACR) is a managed, private registry service based on the open-source Docker Registry 2.0. Within the context of the Azure Administrator Associate (AZ-104) certification, ACR is critical for the 'Deploy and manage Azure compute resources' domain, acting as the centralized repository for storing and managing private Docker container images and Helm charts before they are deployed to compute targets.

Unlike public hubs, ACR provides a secure environment integrated directly with Azure services like Azure Kubernetes Service (AKS), Azure Container Instances (ACI), and Azure App Service. For administrators, the primary management tasks involve authenticating and securing these registries using Microsoft Entra ID (formerly Azure AD). Administrators leverage Role-Based Access Control (RBAC) to grant specific permissions, such as assigning 'AcrPush' to build pipelines and 'AcrPull' to compute resources via Managed Identities, eliminating the need to manage raw credentials.
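For instance, a minimal sketch of granting 'AcrPull' to a VM's managed identity (registry and VM names are placeholders; assumes the VM has a system-assigned identity enabled):

```bash
# Look up the registry's resource ID and the VM identity's principal ID,
# then grant pull-only access; no credentials are stored on the VM.
ACR_ID=$(az acr show --name acrdemo --query id --output tsv)
PRINCIPAL_ID=$(az vm show --resource-group rg-demo --name vm-web01 \
  --query identity.principalId --output tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role AcrPull --scope "$ACR_ID"
```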

ACR is available in three tiers—Basic, Standard, and Premium. Administrators must choose the appropriate tier based on storage and throughput requirements. The Premium tier is notable for enabling Geo-replication, which allows a single registry to serve images across multiple global regions, reducing network latency for distributed deployments.

Furthermore, ACR includes 'ACR Tasks,' a suite of features that automates image building and patching in the cloud. This allows administrators to define workflows where images are automatically rebuilt and updated whenever source code commits are pushed or base images are patched. By managing ACR, administrators ensure a secure, efficient, and automated supply chain for containerized compute resources.

Azure Container Instances (ACI)

Azure Container Instances (ACI) is a fully managed, serverless compute service designed to run Docker containers within the Microsoft Azure environment without the need to provision or manage underlying virtual machines or clusters. In the context of the Azure Administrator Associate (AZ-104) certification and the domain of deploying compute resources, ACI represents the fastest and simplest way to execute isolated containers.

Unlike Azure Kubernetes Service (AKS), which is an orchestrator intended for complex, multi-container architectures, ACI is optimized for singular instances or small groups. It is ideal for simple applications, ephemeral jobs, automation scripts, and burst scenarios. The service operates on a consumption-based pricing model, billing only for the vCPU and memory resources used per second.

Key technical features administrators must master include:
1. **Container Groups:** The top-level resource in ACI. A group can host multiple containers that share the same host hardware, lifecycle, and network stack (conceptually similar to a Kubernetes Pod). This is useful for sidecar patterns, such as a logging agent running alongside a main application.
2. **Networking:** ACI allows deep networking configurations, including assigning public IP addresses with FQDNs or integrating directly into an Azure Virtual Network (VNet) to communicate securely with other backend resources.
3. **Storage Persistence:** State can be preserved by mounting Azure Files shares directly as volumes within the container.

For operational management, administrators control behavior using restart policies (Always, OnFailure, Never), which determine how the instance reacts to process termination. This is critical for differentiating between long-running web services and one-off batch tasks. ACI supports deployment via Azure Portal, CLI, PowerShell, and ARM templates, offering flexibility for rapid development and 'lift and shift' scenarios where the overhead of cluster management is unjustified.
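A hedged example of a single public container using Microsoft's quickstart image (the DNS label must be unique within the region; other names are placeholders):

```bash
# OnFailure suits batch-style work; a long-running web service would use Always.
az container create --resource-group rg-demo --name aci-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 --memory 1.5 \
  --dns-name-label aci-demo-12345 --ports 80 \
  --restart-policy OnFailure
```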

Azure Container Apps (ACA)

Azure Container Apps (ACA) is a managed serverless container service designed to run microservices and containerized applications without the complexity of managing infrastructure. In the context of the Azure Administrator Associate (AZ-104) exam, ACA represents a strategic middle ground between the simplicity of Azure Container Instances (ACI) and the full orchestration control of Azure Kubernetes Service (AKS).

Built on Kubernetes, ACA abstracts away cluster management while integrating powerful open-source technologies like KEDA (Kubernetes Event-driven Autoscaling), Dapr (Distributed Application Runtime), and Envoy. This allows administrators to focus on application deployment rather than networking overlays or node configuration. Critical concepts for managing ACA include:

1. **Environments:** These serve as security boundaries for groups of apps, allowing them to share the same Virtual Network and logging destination.
2. **Autoscaling:** Utilizing KEDA, apps can scale dynamically based on HTTP traffic, TCP connections, CPU/Memory usage, or external event triggers (e.g., messages in an Azure Storage Queue). Crucially, ACA supports scaling to zero, eliminating costs when the app is idle.
3. **Revisions:** ACA creates immutable snapshots of an application configuration. This enables advanced traffic management, allowing administrators to split traffic between different revisions for Blue-Green deployments or A/B testing.
4. **Ingress:** The service provides built-in external or internal HTTP/HTTPS ingress, handling TLS termination automatically without requiring manual load balancer configuration.

For an Azure Administrator, ACA is the primary choice for deploying scalable, event-driven, or microservices-based workloads where the operational overhead of a full Kubernetes cluster is unnecessary, yet the capabilities of a single container instance are insufficient.
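As a sketch of such a deployment (assumes the `containerapp` CLI extension is installed and an environment named `env-demo` already exists; all names are placeholders):

```bash
# min-replicas 0 enables scale-to-zero, so an idle app incurs no compute cost.
az containerapp create --resource-group rg-demo --name aca-demo \
  --environment env-demo \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external --target-port 80 \
  --min-replicas 0 --max-replicas 5
```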

Provision and configure App Service plans

In Azure, an App Service plan serves as the underlying infrastructure container for your web applications. It defines the region, operating system (Windows or Linux), and the set of compute resources (CPU, RAM, and storage) available to the apps running within it. Essentially, it represents the server farm backing your App Service.

**Provisioning**
When provisioning a plan, the most critical decision is the **Pricing Tier (SKU)**. This selection dictates:
1. **Hardware:** The power of the underlying Virtual Machines (e.g., Core counts, Memory).
2. **Features:** Access to specific capabilities like Custom Domains, SSL certificates, Deployment Slots, and VNet Integration.
3. **Cost:** Tiers range from **Free/Shared** (for development on shared infrastructure) to **Basic/Standard/Premium** (dedicated production hardware) and **Isolated** (dedicated environments for high security).

**Configuration**
Post-deployment configuration focuses primarily on capacity management:
* **Scaling Up (Vertical):** You alter the plan's pricing tier to add more power or unlock features (e.g., moving from Standard to Premium).
* **Scaling Out (Horizontal):** You increase the number of VM instances running your app. In Standard tiers and above, you can configure **Autoscale** settings, allowing Azure to automatically add or remove instances based on metric thresholds (like CPU usage) or schedules.

A single App Service plan can host multiple web apps, allowing them to share resources to save costs, provided the apps do not exceed the plan's aggregate resource limits.
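A minimal sketch of both scaling directions on an existing plan (placeholder names; the SKU string must match a tier available in your region):

```bash
# Scale up: move the plan to the Premium v3 tier.
az appservice plan update --resource-group rg-demo --name plan-demo --sku P1V3
# Scale out: run three instances behind the plan.
az appservice plan update --resource-group rg-demo --name plan-demo --number-of-workers 3
```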

Configure App Service certificates and TLS

Securing web applications is a critical component of the Azure Administrator Associate curriculum, specifically within the realm of deploying and managing compute resources. Configuring App Service certificates and Transport Layer Security (TLS) ensures that data transmitted between clients and your application remains private and encrypted via HTTPS.

Azure provides flexibility in how certificates are sourced and managed. The simplest method is using **App Service Managed Certificates**, which are free, fully managed by Microsoft, and automatically renewed. While excellent for securing apex domains and subdomains, they do not support wildcard domains. For comprehensive coverage, admins can purchase **App Service Certificates** directly via the Azure Portal, which supports wildcards and integrates with Azure Key Vault for centralized lifecycle management.

Alternatively, administrators can bring their own certificates by uploading PFX files or importing them from Azure Key Vault. Once a certificate is available in the App Service, it must be bound to a custom domain. This **TLS/SSL binding** creates the trust relationship. You typically choose **SNI SSL** (Server Name Indication), which is standard for most modern browsers and allows multiple domains to share an IP, or **IP SSL**, which requires a dedicated IP address.

Finally, configuration management involves enforcing security policies. In the App Service settings, administrators should enable the **HTTPS Only** feature to automatically redirect unencrypted HTTP traffic to HTTPS. Additionally, setting the **Minimum TLS Version** to 1.2 or higher is mandatory for meeting modern security compliance standards, effectively blocking outdated and vulnerable communication protocols.
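A short sketch of enforcing both policies on an existing app (names are placeholders):

```bash
# Redirect HTTP to HTTPS, then block TLS versions older than 1.2.
az webapp update --resource-group rg-demo --name app-demo --https-only true
az webapp config set --resource-group rg-demo --name app-demo --min-tls-version 1.2
```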

App Service custom DNS and networking

In the context of the Azure Administrator Associate exam, managing App Service involves configuring how users reach your application (DNS) and how the application communicates with other networks.

**Custom DNS:**
By default, an App Service is assigned a generic `*.azurewebsites.net` URL. To utilize a branded domain (e.g., `contoso.com`), administrators must map DNS records. A **CNAME record** is used for subdomains (like `www`) pointing to the app's default hostname, while an **A record** is used for root domains pointing to the app's public IP address. Azure requires a specific TXT record to verify domain ownership. Once configured, SSL/TLS certificates are bound to the custom domain to ensure encrypted HTTPS communication.

**Networking:**
Networking controls isolate traffic and enable connectivity to protected resources:
1. **Inbound Security:** **Access Restrictions** allow administrators to define allow/deny lists based on IP addresses. For enhanced security, **Private Endpoints** provide the app with a private IP address from an Azure Virtual Network (VNet), allowing access only from within that network and eliminating public internet exposure.
2. **Outbound Connectivity:** **VNet Integration** allows the web app to access resources located inside an Azure VNet (such as an SQL database or VM) while maintaining public availability. Alternatively, **Hybrid Connections** enable access to on-premises resources via a relay agent without requiring inbound firewall ports to be opened.
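An illustrative sketch of the DNS binding and an inbound restriction (assumes the CNAME and TXT records already exist at the registrar; `203.0.113.0/24` is a documentation address range, and other names are placeholders):

```bash
# Bind the verified custom hostname to the app.
az webapp config hostname add --resource-group rg-demo \
  --webapp-name app-demo --hostname www.contoso.com
# Inbound allow-list: permit traffic only from one office IP range.
az webapp config access-restriction add --resource-group rg-demo \
  --name app-demo --rule-name office --action Allow \
  --ip-address 203.0.113.0/24 --priority 100
```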

App Service deployment slots

In the context of the Azure Administrator Associate certification and the management of Azure compute resources, App Service deployment slots are a critical feature mainly used to achieve zero-downtime deployments and efficient lifecycle management. A deployment slot is essentially a live web application with its own unique hostname, running within the same App Service Plan (VM instances) as the production app. This feature is available in the Standard, Premium, and Isolated service tiers.

The primary function of deployment slots is to act as staging environments. Administrators can deploy a new version of an application to a non-production slot (e.g., 'Staging'), validate the functionality, and warm up the instances without impacting the live production traffic. Once validated, the administrator initiates a 'swap' operation.

During a swap, Azure ensures that the instances in the staging slot are fully ready before redirecting production traffic to them. This redirection happens at the load balancer level, ensuring no downtime. Conversely, the old production version is moved to the staging slot. This mechanism provides an immediate fallback strategy: if the new version encounters errors in production, you can immediately swap back to the previous version, reverting to the last known good state.

Administrators must also manage configuration settings during swaps. Settings can be 'sticky' (slot-specific) or swappable. For instance, a connection string to a development database should be sticky to the staging slot, preventing the production app from connecting to a test database after a swap.

Finally, slots support 'Testing in Production' by allowing traffic routing rules. An administrator can route a specific percentage of user traffic (e.g., 5%) to a beta slot to test performance or features before committing to a 100% rollout.
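A hedged end-to-end sketch with placeholder names; `--slot-settings` marks the value as sticky so it stays with the staging slot after a swap:

```bash
# Create the staging slot, pin a dev-only connection string to it, then swap.
az webapp deployment slot create --resource-group rg-demo \
  --name app-demo --slot staging
az webapp config appsettings set --resource-group rg-demo \
  --name app-demo --slot staging --slot-settings DB_CONN="dev-db-connection"
az webapp deployment slot swap --resource-group rg-demo \
  --name app-demo --slot staging --target-slot production
```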
