Learn Implement and manage storage (AZ-104) with Interactive Flashcards

Master key concepts in Implement and manage storage through our interactive flashcard system.

Configure Azure Storage firewalls and virtual networks

Configuring Azure Storage firewalls and virtual networks is a critical security task in the Azure Administrator Associate curriculum, designed to protect data at the network layer. By default, Azure Storage accounts accept connections from clients on any network. To secure the resource, administrators switch the default network access setting from 'All networks' to 'Selected networks,' establishing a deny-by-default posture.

Once restricted, access is granted through two primary mechanisms:

1. **IP Address Rules:** These allow traffic from specific public IPv4 addresses or CIDR ranges. This is typically used to grant access to on-premises corporate networks or specific external clients connecting over the internet.

2. **Virtual Network (VNet) Rules:** This method uses Service Endpoints. By enabling the 'Microsoft.Storage' service endpoint on a specific subnet within an Azure VNet, administrators can allow that subnet through the storage firewall. Traffic between Azure resources (such as Virtual Machines) in that subnet and the storage account then travels exclusively over the Azure backbone network rather than the public internet, enhancing security by keeping the traffic on a trusted network path.

Furthermore, administrators must configure **Exceptions**. The 'Allow trusted Microsoft services' option is essential; when enabled, it grants specific Azure platform services—such as Azure Backup, Azure Monitor, and Site Recovery—access to the storage account even when strict firewall rules are active.
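As a rough sketch of these steps in the Azure CLI (the resource group, account, network, and address values below are placeholders):

```bash
# Sketch: restrict a storage account to selected networks (placeholder names/addresses).
az storage account update \
  --resource-group rg-demo \
  --name stdemo001 \
  --default-action Deny \
  --bypass AzureServices        # keeps the 'trusted Microsoft services' exception

# IP address rule for an on-premises public range.
az storage account network-rule add \
  --resource-group rg-demo \
  --account-name stdemo001 \
  --ip-address 203.0.113.0/24

# VNet rule for a subnet that has the Microsoft.Storage service endpoint enabled.
az storage account network-rule add \
  --resource-group rg-demo \
  --account-name stdemo001 \
  --vnet-name vnet-prod \
  --subnet snet-app
```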

Proper configuration ensures that any request originating from outside the allowed IP ranges or subnets is rejected with a '403 Forbidden' error, effectively minimizing the attack surface and preventing unauthorized data exfiltration.

Shared Access Signature (SAS) tokens

In the context of Azure storage management, a Shared Access Signature (SAS) is a signed URI that grants restricted access rights to Azure Storage resources without exposing the master storage account keys. It is a critical mechanism for Azure Administrators to implement secure, delegated access adhering to the principle of least privilege.

A SAS token allows granular control over three key parameters: the resources exposed (e.g., a specific blob versus an entire container), the permissions granted (e.g., Read, Write, List, Delete), and the duration of access (start and expiry times).

There are three primary types of SAS tokens:
1. **User Delegation SAS:** Secured via Microsoft Entra ID credentials. This is the most secure option for Blob storage as it does not require managing storage keys.
2. **Service SAS:** Delegates access to specific resources in the Blob, Queue, Table, or File services.
3. **Account SAS:** Grants access to operations at the service level (e.g., creating file systems) that a Service SAS cannot handle.

A crucial concept for the AZ-104 exam is the **Stored Access Policy**. If a standalone SAS is compromised, it can only be revoked by rotating the account keys, which may affect other applications. However, if a Service SAS is associated with a Stored Access Policy defined on the container, the administrator can revoke or modify access immediately by editing the policy, invalidating the tokens linked to it. Best practices include always using HTTPS, setting short expiration times, and prioritizing User Delegation SAS to minimize key exposure.
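As a minimal illustration of those best practices, the Azure CLI sketch below issues a short-lived, read-only, HTTPS-only user delegation SAS for a single blob; the account, container, and blob names are placeholders, and the signed-in identity is assumed to hold a role that permits requesting a user delegation key:

```bash
# Sketch: short-lived, read-only user delegation SAS for one blob (placeholder names).
expiry=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')   # GNU date syntax

az storage blob generate-sas \
  --account-name stdemo001 \
  --container-name reports \
  --name summary.pdf \
  --permissions r \
  --expiry "$expiry" \
  --https-only \
  --auth-mode login \
  --as-user \
  --output tsv
```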

Manage access keys and stored access policies

In the context of Azure Storage management, securing data relies heavily on properly handling authentication credentials. Two fundamental components of this security model are **Access Keys** and **Stored Access Policies**.

**Access Keys**
When you create a storage account, Azure generates two 512-bit access keys (Key1 and Key2). These keys act as 'root passwords,' granting full administrative rights and unlimited access to all data within the account. Because they provide absolute control, they must be protected rigorously. Azure provides two keys specifically to facilitate **key rotation** without downtime: administrators point applications at the secondary key, regenerate the primary key, switch applications back to the new primary, and then regenerate the secondary. Best practices dictate that keys should never be hard-coded in applications; instead, they should be managed via Azure Key Vault or ideally replaced by Microsoft Entra ID authentication where possible.

**Stored Access Policies**
While Shared Access Signatures (SAS) allow you to grant granular, time-limited access without sharing the master keys, managing individual SAS tokens is risky. If a standard SAS is compromised, it remains valid until it expires unless you regenerate the master Access Keys, which disrupts all users. **Stored Access Policies** provide a solution to this management challenge. Defined at the container level (blob container, file share, queue, or table), these policies group shared constraints like permissions, start times, and expiration dates.

By associating a SAS token with a Stored Access Policy, you gain the ability to modify or revoke access centrally. If you need to cancel access, you simply update or delete the policy, and all linked SAS tokens become invalid immediately. This mechanism offers a robust way to manage external access lifecycles without compromising the primary storage account keys.
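A hedged Azure CLI sketch of both ideas follows: rotating one account key, then creating a stored access policy, issuing a SAS bound to it, and revoking access by deleting the policy. All names, dates, and permissions are placeholders.

```bash
# Sketch: key rotation plus a stored access policy (placeholder names and dates).
az storage account keys renew \
  --resource-group rg-demo \
  --account-name stdemo001 \
  --key primary

# Setting container ACLs (stored access policies) is a key-authorized operation.
key=$(az storage account keys list \
  --resource-group rg-demo \
  --account-name stdemo001 \
  --query "[0].value" --output tsv)

az storage container policy create \
  --account-name stdemo001 --account-key "$key" \
  --container-name reports \
  --name read-until-eom \
  --permissions rl \
  --expiry 2025-01-31T00:00Z

# Issue a SAS bound to the policy...
az storage container generate-sas \
  --account-name stdemo001 --account-key "$key" \
  --name reports \
  --policy-name read-until-eom \
  --output tsv

# ...and revoke every SAS linked to it by deleting the policy.
az storage container policy delete \
  --account-name stdemo001 --account-key "$key" \
  --container-name reports \
  --name read-until-eom
```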

Configure identity-based access for Azure Files

Configuring identity-based access for Azure Files allows you to replace Shared Access Keys with specific identity permissions for Server Message Block (SMB) access. This integration supports three identity sources: On-premises Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory Kerberos for hybrid identities.

The configuration process relies on a two-layer permission model. First, you must enable the chosen identity source on the storage account and join the storage account to the domain. Once enabled, you configure Share-Level Permissions using Azure Role-Based Access Control (RBAC). You must assign specific RBAC roles—such as 'Storage File Data SMB Share Reader' (read-only), 'Storage File Data SMB Share Contributor' (read/write), or 'Storage File Data SMB Share Elevated Contributor' (can also modify permissions)—to the users or groups that require access. Users cannot access the share without one of these roles.

Second, you rely on Directory/File-Level Permissions. While RBAC controls entry to the share, standard Windows Access Control Lists (ACLs) control granular access to files and folders within it. You configure these ACLs using Windows File Explorer or icacls by initially mounting the share with the storage account key (acting as a superuser). When a user accesses the file share, Azure Files enforces both the share-level RBAC role and the file-level ACLs. If there is a conflict, the most restrictive permission applies. This setup ensures secure, audited access management suitable for enterprise environments.
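A brief sketch of the share-level step with the Azure CLI is shown below; the subscription ID, resource names, share name, and user are placeholders, and the file/directory-level step is indicated only as a comment because it runs with Windows tooling on a domain-joined client.

```bash
# Sketch: share-level RBAC assignment for an Azure file share (placeholder values).
share_scope="/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Storage/storageAccounts/stdemo001/fileServices/default/fileshares/finance"

az role assignment create \
  --role "Storage File Data SMB Share Contributor" \
  --assignee "user@contoso.com" \
  --scope "$share_scope"

# Directory/file-level ACLs are then set from a domain-joined Windows client that has
# mounted the share (initially with the storage account key), for example:
#   icacls Z:\finance /grant "CONTOSO\finance-team:(OI)(CI)M"
```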

Configure Azure Storage redundancy

Azure Storage redundancy protects your data against hardware failures, network outages, or natural disasters by ensuring multiple copies of your data are maintained. As an Azure Administrator, configuring redundancy requires balancing durability, availability, and cost.

Within the Primary Region, you have two options:
1. Locally-redundant storage (LRS): Replicates data three times within a single physical datacenter. It is the most cost-effective but lowest durability option (11 nines). It does not protect against datacenter-level disasters.
2. Zone-redundant storage (ZRS): Replicates data synchronously across three Azure availability zones. This protects against zonal failures, ensuring continued access even if one datacenter goes dark.

For Disaster Recovery (Secondary Region), you can extend protection:
1. Geo-redundant storage (GRS): Copies data using LRS in the primary region, then asynchronously replicates it to a secondary paired region. This offers protection against regional outages.
2. Geo-zone-redundant storage (GZRS): Combines ZRS in the primary region with replication to a secondary region, offering maximum durability (16 nines) and availability.

By default, data in the secondary region is unavailable until a failover occurs. To read data from the secondary region at any time, you must configure Read-Access (RA-GRS or RA-GZRS).

You configure these settings during storage account creation or modify them in the 'Configuration' blade later. However, switching certain types (like LRS to ZRS) may require a manual migration or support request depending on the scenario.
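As a sketch, assuming placeholder resource group and account names, the SKU parameter carries the redundancy choice in the Azure CLI; note again that some changes (such as LRS to ZRS) go through a conversion rather than a simple update:

```bash
# Sketch: choose redundancy at creation and change it later (placeholder names).
az storage account create \
  --resource-group rg-demo \
  --name stdemo001 \
  --location westeurope \
  --sku Standard_ZRS

# Move to geo-zone-redundant storage with read access to the secondary region.
az storage account update \
  --resource-group rg-demo \
  --name stdemo001 \
  --sku Standard_RAGZRS
```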

Configure object replication

Object replication in Azure Blob Storage is a critical feature for Azure Administrators to understand when managing storage, specifically for scenarios requiring disaster recovery, data distribution, or lower-latency access across regions. It creates an asynchronous copy of block blobs between a source storage account and a destination storage account.

To configure object replication effectively, you must first satisfy specific prerequisites. Both the source and destination storage accounts must be General Purpose v2 or Premium Block Blob accounts. Crucially, you must enable **Blob Versioning** on both accounts and enable the **Blob Change Feed** on the source account. Without these features, the replication engine cannot track changes or manage state.

The core of the configuration involves creating a replication policy. This policy defines the relationship between the two accounts and contains one or more rules. Each rule maps a specific source container to a destination container. Administrators can apply filters within these rules, such as blob prefix matches, to replicate only a subset of data (e.g., specific log files) rather than the entire container. This granular control helps optimize storage costs and bandwidth usage.
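The Azure CLI sketch below illustrates the prerequisites and a one-rule policy, assuming placeholder source and destination accounts in the same resource group; in practice the policy created on the destination account must also be applied, with its generated ID, on the source account.

```bash
# Sketch: prerequisites plus a single object replication rule (placeholder names).
for acct in stsource001 stdest001; do
  az storage account blob-service-properties update \
    --resource-group rg-demo \
    --account-name "$acct" \
    --enable-versioning true \
    --enable-change-feed true     # change feed only needs enabling on the source
done

# Create the replication policy on the destination account.
az storage account or-policy create \
  --resource-group rg-demo \
  --account-name stdest001 \
  --source-account stsource001 \
  --destination-account stdest001 \
  --source-container logs \
  --destination-container logs-replica \
  --prefix-match "app/"
```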

In the context of the AZ-104 exam, remember that because this process is asynchronous, data is not instantly available at the destination. The time it takes to replicate depends on the file size and network latency. Administrators must monitor the 'Replication Status' property on the blobs (which will show 'Complete' or 'Failed') to ensure data consistency. Unlike Geo-Redundant Storage (GRS), which replicates an entire account to a paired region, object replication offers precise, policy-based control over what data is copied and where it lands.

Configure storage account encryption

In Azure, all data written to a storage account is automatically encrypted using Server-Side Encryption (SSE) with 256-bit AES encryption, compliant with FIPS 140-2. This feature is enabled by default and cannot be disabled. As an Azure Administrator, your configuration duties focus on key management and encryption scopes.

**Key Management Options:**
1. **Microsoft-Managed Keys (MMK):** The default setting. Microsoft manages key rotation and storage. No administrative overhead is required.
2. **Customer-Managed Keys (CMK):** Required for specific compliance needs. You manage keys using Azure Key Vault (AKV). To configure this, you must enable 'Soft Delete' and 'Purge Protection' on the AKV. You then assign a Managed Identity (System or User-assigned) to the Storage Account and grant it permissions to get, wrap, and unwrap keys in the vault, as sketched after this list.
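A hedged sketch of that CMK flow with the Azure CLI follows; the key vault, key, and account names are placeholders, and it assumes the vault already has soft delete and purge protection enabled and uses access policies (rather than RBAC) for authorization.

```bash
# Sketch: point a storage account at a customer-managed key (placeholder names).
az storage account update \
  --resource-group rg-demo \
  --name stdemo001 \
  --assign-identity                      # system-assigned managed identity

principal_id=$(az storage account show \
  --resource-group rg-demo \
  --name stdemo001 \
  --query identity.principalId --output tsv)

# Let the account's identity get, wrap, and unwrap keys in the vault.
az keyvault set-policy \
  --name kv-demo \
  --object-id "$principal_id" \
  --key-permissions get wrapKey unwrapKey

az storage account update \
  --resource-group rg-demo \
  --name stdemo001 \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault "https://kv-demo.vault.azure.net" \
  --encryption-key-name cmk1
```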

**Advanced Configurations:**
* **Infrastructure Encryption:** You can enable this at the time of account creation to add a second layer of encryption at the infrastructure level (double encryption) for highly sensitive data.
* **Encryption Scopes:** These allow you to manage encryption at the blob or container level rather than the account level. This is critical for multi-tenant scenarios where different customers require distinct keys.

Changing key types (e.g., switching to CMK) takes effect for the account immediately. Because Azure Storage uses envelope encryption, the account's data encryption key is simply re-wrapped with the new key; the existing data itself does not need to be bulk re-encrypted.

Azure Storage Explorer and AzCopy

In the context of the Azure Administrator Associate certification, effectively managing storage requires mastering two primary tools: **Azure Storage Explorer** and **AzCopy**.

**Azure Storage Explorer** is a standalone, cross-platform graphical user interface (GUI) compatible with Windows, macOS, and Linux. It allows administrators to visually browse and interact with Azure Storage resources, including Blobs, Files, Queues, and Tables, as well as Azure Data Lake Storage and Cosmos DB. It is the ideal tool for ad-hoc tasks, such as verifying data integrity, managing access policies, generating Shared Access Signatures (SAS), and performing small-scale uploads or downloads. Its visual nature makes it perfect for day-to-day maintenance and troubleshooting without the need for code.

**AzCopy**, conversely, is a high-performance command-line utility designed for optimal data transfer speeds and automation. It is the standard tool for bulk data movement, such as migrating on-premises data to the cloud, synchronizing file systems, or copying data between Azure Storage accounts (server-side copy). AzCopy features check-pointing (to resume failed transfers), configurable concurrency, and the ability to copy directly from AWS S3 or Google Cloud Storage. It is meant to be scripted via PowerShell or CLI for repeatable, scheduled tasks.
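For illustration, the snippet below shows typical AzCopy invocations; the account names, container names, and SAS tokens are placeholders.

```bash
# Sketch: common AzCopy operations (URLs and SAS tokens are placeholders).
# Upload a local folder to a container; interrupted jobs can be resumed.
azcopy copy "./logs" "https://stdemo001.blob.core.windows.net/logs?<sas>" --recursive

# Keep a container in sync with a local folder (only changed files are transferred).
azcopy sync "./logs" "https://stdemo001.blob.core.windows.net/logs?<sas>"

# Server-side copy between two storage accounts.
azcopy copy \
  "https://stsource001.blob.core.windows.net/data?<sas>" \
  "https://stdest001.blob.core.windows.net/data?<sas>" \
  --recursive
```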

**Summary:** Azure Storage Explorer uses AzCopy behind the scenes to execute transfers, but the distinction lies in usage: administrators use Storage Explorer for visual, interactive management, and AzCopy for scripting, high-throughput migrations, and automated data synchronization.

Configure Azure File shares

Configuring Azure File shares within the Azure Administrator curriculum involves deploying fully managed, serverless file shares accessible via industry-standard SMB and NFS protocols. The configuration process begins within an Azure Storage Account, where administrators must select the appropriate performance tier—Standard (HDD-based) for general workloads or Premium (SSD-based) for high-IOPS applications—and define the replication strategy (LRS, ZRS, or GRS) for data durability.

A critical aspect of configuration is managing access tiers. Administrators switch between Transaction Optimized, Hot, and Cool tiers to align storage costs with data access patterns. Security configuration is paramount; this includes setting up identity-based authentication using Azure Active Directory Domain Services (Azure AD DS) or on-premises Active Directory. This integration allows the enforcement of granular Windows NTFS Access Control Lists (ACLs) for permission management, moving beyond simple Storage Account Key access.
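As a small sketch of the share-level choices described above (the resource group, account, share name, and quota are placeholders):

```bash
# Sketch: create a standard file share and adjust its access tier later (placeholders).
az storage share-rm create \
  --resource-group rg-demo \
  --storage-account stdemo001 \
  --name finance \
  --quota 1024 \
  --access-tier TransactionOptimized

# Move the share to the Cool tier when access patterns change.
az storage share-rm update \
  --resource-group rg-demo \
  --storage-account stdemo001 \
  --name finance \
  --access-tier Cool
```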

For hybrid infrastructure, the scope extends to configuring Azure File Sync. Administrators deploy the Storage Sync Agent on on-premises Windows Servers and define Sync Groups in the Azure Portal. This configuration enables 'cloud tiering,' where frequently accessed data remains cached locally for low-latency performance, while older data is moved to the cloud to free up local space.

Finally, operational configuration includes enabling Soft Delete to recover from accidental deletions, scheduling Snapshots for backup resilience, and implementing Private Endpoints to ensure traffic between clients and the file share remains on the secure Microsoft backbone network rather than the public internet.

Configure Blob Storage containers

Configuring Blob Storage containers is a fundamental aspect of the Azure Administrator Associate role. A container serves as a logical grouping for blobs, functioning similarly to a folder in a file system. When implementing and managing storage, an administrator must first address the Public Access Level, which determines visibility. Options include 'Private' (no anonymous access), 'Blob' (anonymous read access for specific blobs), and 'Container' (anonymous read and list access). For enhanced security, 'Private' is the default and recommended setting.
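A minimal sketch of creating a private container with the Azure CLI follows; the account, container name, and metadata are placeholders, and the signed-in identity is assumed to hold a blob data role.

```bash
# Sketch: create a private container with organizing metadata (placeholder names).
az storage container create \
  --account-name stdemo001 \
  --name invoices \
  --public-access off \
  --metadata department=finance env=prod \
  --auth-mode login
```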

Beyond basic access, administrators configure Stored Access Policies to manage Shared Access Signatures (SAS) efficiently. These policies allow permissions and expiry times to be changed, or access revoked outright, after SAS tokens have been issued, without rotating the account keys. To ensure data integrity and compliance, Immutability Policies are configured. These include Time-Based Retention Policies and Legal Holds, effectively making the container Write-Once, Read-Many (WORM) compliant to prevent data modification or deletion for a specified period.

Data protection and cost management are also handled here. Administrators can enable Soft Delete to recover accidental deletions within a retention window. Furthermore, Lifecycle Management policies are implemented to automate the movement of blobs to cooler storage tiers (Cool, Cold, or Archive) or delete them based on age and access patterns. Finally, metadata can be added as key-value pairs to organize resources, and encryption scopes can be defined to control encryption keys at the container level, ensuring specific tenants or applications meet distinct security requirements.

Configure storage tiers (Hot, Cool, Archive)

In Azure Blob Storage, configuring storage tiers is a critical cost-management strategy that aligns storage costs with data access frequency. There are three primary tiers: Hot, Cool, and Archive.

The **Hot Tier** is optimized for data that is accessed or modified frequently by active workloads. It incurs the highest storage costs but offers the lowest access and transaction fees. This tier is the default setting and is ideal for data in active use, such as user content for websites, active database backups, or real-time telemetry.

The **Cool Tier** is designed for data that is infrequently accessed (typically less than once a month) but requires immediate availability when requested. It offers lower storage costs than the Hot tier but higher access and transaction fees. Data stored here must remain for a minimum of 30 days; early deletion results in a pro-rated fee. Common use cases include short-term backup preservation and disaster recovery datasets.

The **Archive Tier** provides the lowest storage costs but the highest data retrieval costs and latency. Data in this tier is offline and not immediately accessible; reading it requires a process called 'rehydration' (changing the tier to Hot or Cool), which can take up to 15 hours. The minimum retention period is 180 days. This tier is specifically for long-term retention, such as compliance records, medical history logs, or tape replacements.

Administrators can change tiers manually at the blob level or automate the process using **Lifecycle Management** policies. These policies automatically transition data between tiers based on rules (e.g., move to Cool after 30 days of inactivity), ensuring optimal cost-efficiency throughout the data lifecycle.
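The sketch below shows a manual tier change and an Archive rehydration with the Azure CLI; account, container, and blob names are placeholders.

```bash
# Sketch: manual tier changes at the blob level (placeholder names).
az storage blob set-tier \
  --account-name stdemo001 \
  --container-name logs \
  --name app-2023.log \
  --tier Cool \
  --auth-mode login

# Rehydrate an archived blob back to Hot (Standard priority can take hours).
az storage blob set-tier \
  --account-name stdemo001 \
  --container-name logs \
  --name audit-2019.log \
  --tier Hot \
  --rehydrate-priority Standard \
  --auth-mode login
```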

Configure soft delete and snapshots

In the context of the Azure Administrator Associate (AZ-104) certification, configuring soft delete and snapshots is essential for implementing robust data protection and business continuity strategies within Azure Storage.

**Soft Delete** acts as a safeguard against accidental deletion or modification. When enabled for Blobs, Containers, or Azure File Shares, deleted data is not permanently erased immediately. Instead, it is retained in a soft-deleted state for a configurable retention period (between 1 and 365 days). During this window, administrators can restore the data to its original state. To configure this, you navigate to the 'Data protection' blade of the Storage Account in the Azure Portal. This feature is critical for recovering from human error or malicious deletion scripts without relying on full-scale backups.

**Snapshots** differ in that they create a read-only version of a blob at a specific point in time. They are used primarily for version control and backing up data states before modifications occur. A snapshot shares the same data pages as the base blob until changes are made, making them cost-effective; you only pay for the unique data blocks stored in the snapshot. Snapshots allow you to read, copy, or delete specific versions of data. However, managing them requires care: you cannot delete a base blob without also deleting its snapshots, unless specific delete parameters are used.
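The sketch below pairs the two features in the Azure CLI, using placeholder names and a 14-day retention window.

```bash
# Sketch: enable blob soft delete, then snapshot a blob (placeholder names).
az storage account blob-service-properties update \
  --resource-group rg-demo \
  --account-name stdemo001 \
  --enable-delete-retention true \
  --delete-retention-days 14

# Create a read-only, point-in-time snapshot of a single blob.
az storage blob snapshot \
  --account-name stdemo001 \
  --container-name reports \
  --name summary.pdf \
  --auth-mode login
```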

By combining these tools, an Azure Administrator ensures data resilience: Soft Delete handles 'oops' moments regarding deletion, while Snapshots provide granular point-in-time recovery options for application consistency and version history.

Configure blob lifecycle management

Azure Blob Lifecycle Management is a feature designed to optimize storage costs and automate data retention strategies, a core competency for the Azure Administrator Associate. It functions as a rule-based policy engine that automatically transitions block blobs to appropriate access tiers or deletes them when they are no longer needed.

Data usage often changes over time: files are accessed frequently when created (Hot), touched only occasionally after a month (Cool/Cold), and eventually retained solely for long-term compliance (Archive). Storing rarely accessed data in the Hot tier is financially inefficient. Lifecycle management addresses this by applying automated rules.

A policy comprises up to 100 rules. Each rule defines:
1. Filter Sets: These limit the rule to specific containers or blobs using prefix strings or index tags.
2. Actions: The specific operation to perform, such as TierToCool, TierToArchive, or Delete.
3. Run Conditions: Time-based triggers, specifically daysAfterModificationGreaterThan, daysAfterCreationGreaterThan, or daysAfterLastAccessTimeGreaterThan (the last requires last access time tracking to be enabled).

For example, you can configure a policy to move log files to the Cool tier 30 days after creation, to the Archive tier after 180 days, and delete them after 365 days. This is configured via the Azure Portal, PowerShell, CLI, or REST APIs on General Purpose v2 or Blob Storage accounts.
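A hedged sketch of a similar policy follows, expressed as the JSON that the CLI accepts; it keys the rule on last-modified age (a broadly supported condition), and the container prefix, rule name, and account details are placeholders.

```bash
# Sketch: a lifecycle policy that ages out blobs under the 'logs/' prefix (placeholders).
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": ["blockBlob"], "prefixMatch": ["logs/"] },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 },
            "delete":        { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
EOF

az storage account management-policy create \
  --resource-group rg-demo \
  --account-name stdemo001 \
  --policy @policy.json
```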

In the context of the exam, understanding how to maximize cost savings using the daysAfterLastAccessTimeGreaterThan condition is vital. Administrators must also note that while the feature itself is free, the transaction costs for moving data between tiers still apply. This automation ensures compliance and efficiency without manual administrative oversight.
