Develop Azure compute solutions (AZ-204)

Create and manage container images for solutions

Container images are fundamental building blocks for deploying applications in Azure. They encapsulate your application code, runtime, libraries, and dependencies into a portable, consistent package that can run anywhere containers are supported.

**Azure Container Registry (ACR)** is Microsoft's managed Docker registry service for storing and managing container images. Key features include geo-replication, automated image builds, and integration with Azure DevOps pipelines.

**Creating Container Images:**

1. **Dockerfile Creation**: Define your image using a Dockerfile that specifies the base image, copies application files, installs dependencies, and sets entry points (see the sketch after this list).

2. **Building Images**: Use Docker CLI commands like 'docker build -t myimage:v1 .' to create images locally, or leverage ACR Tasks for cloud-based builds.

3. **ACR Tasks**: Enable automated builds triggered by source code commits, base image updates, or scheduled timers. This supports continuous integration workflows.
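
As a concrete illustration of steps 1 and 2, here is a minimal multi-stage Dockerfile sketch for a .NET web app; the base-image tags and 'MyApp.dll' are illustrative assumptions, not requirements:

```dockerfile
# Build stage: compile and publish the app (the SDK image is used only here)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: copy only the published output into a slim runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
# MyApp.dll is a placeholder assembly name
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Keeping the SDK image out of the final stage is the multi-stage size optimization recommended under Best Practices below.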

**Managing Container Images:**

1. **Pushing to Registry**: Use 'docker push' (after authenticating with 'az acr login') to upload locally built images to ACR, or let 'az acr build' build and push in the cloud.

2. **Image Tagging**: Apply meaningful tags (version numbers, environment names) to organize and identify images. Consider semantic versioning for production deployments.

3. **Security Scanning**: ACR integrates with Microsoft Defender to scan images for vulnerabilities and compliance issues.

4. **Retention Policies**: Configure automatic purging of old or untagged images to manage storage costs and maintain registry hygiene.

5. **Replication**: Enable geo-replication to distribute images across multiple Azure regions for faster pulls and redundancy.

**Best Practices:**
- Use multi-stage builds to minimize image size
- Implement image signing for authenticity verification
- Store secrets using Azure Key Vault rather than embedding in images
- Regularly update base images for security patches
- Use specific tags rather than 'latest' for production deployments

These capabilities enable reliable, secure container workflows for Azure solutions.

Publish an image to Azure Container Registry

Azure Container Registry (ACR) is a managed Docker registry service that allows you to store and manage container images for Azure deployments. Publishing an image to ACR involves several key steps that every Azure developer should understand.

First, you need to create an Azure Container Registry instance through the Azure Portal, Azure CLI, or ARM templates. When creating the registry, you select a SKU (Basic, Standard, or Premium) based on your storage and throughput requirements.

Before pushing an image, you must authenticate to the registry. The most common method uses the Azure CLI command 'az acr login --name <registry-name>', which retrieves credentials and logs you into the registry. Service principals or managed identities can also be used for automated scenarios.

Next, you need to tag your local Docker image with the fully qualified registry path. The format follows: <registry-name>.azurecr.io/<image-name>:<tag>. For example, if your registry is named 'myregistry' and your image is 'myapp', you would use: 'docker tag myapp myregistry.azurecr.io/myapp:v1'.

Once tagged, push the image using the standard Docker push command: 'docker push myregistry.azurecr.io/myapp:v1'. The image layers are uploaded to ACR and stored securely.

Alternatively, ACR Tasks provides a feature called Quick Tasks that builds and pushes images in the cloud using 'az acr build --registry <registry-name> --image <image-name>:<tag> .' This approach eliminates the need for a local Docker installation.
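
Putting these steps together, a hedged end-to-end sketch (registry, resource group, and image names are placeholders):

```bash
# Create the registry, authenticate, then tag and push a local image
az acr create --resource-group myRG --name myregistry --sku Basic
az acr login --name myregistry
docker tag myapp myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1

# Or build and push in the cloud with a Quick Task (no local Docker needed)
az acr build --registry myregistry --image myapp:v1 .
```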

After publishing, you can verify the image exists using 'az acr repository list' or 'az acr repository show-tags' commands. The image is now ready for deployment to Azure services like Azure Kubernetes Service, Azure Container Instances, or Azure App Service.

Best practices include using image scanning for vulnerabilities, implementing geo-replication for high availability, and applying retention policies to manage storage costs.

Run containers by using Azure Container Instances

Azure Container Instances (ACI) provides the fastest and simplest way to run containers in Azure without managing virtual machines or adopting higher-level orchestration services. ACI is ideal for scenarios requiring isolated containers, simple applications, task automation, and build jobs.

**Key Features:**

1. **Fast Startup**: Containers can start in seconds, making ACI perfect for burst workloads and quick scaling scenarios.

2. **Per-Second Billing**: You pay only for the resources consumed while your container runs, calculated per second based on vCPU and memory allocation.

3. **Custom Sizing**: Specify exact CPU cores and memory requirements, ensuring you only pay for what you need.

4. **Persistent Storage**: Mount Azure Files shares to containers for persistent data storage across container restarts.

5. **Linux and Windows Support**: Run both Linux and Windows containers using the same API.

**Container Groups:**
Multiple containers can be deployed together in a container group, sharing the same host machine, local network, storage, and lifecycle. This is similar to a pod in Kubernetes.

**Deployment Methods:**
- Azure CLI using 'az container create'
- Azure Portal
- ARM templates or Bicep for multi-container deployments
- YAML files for container group definitions

**Common CLI Example:**

az container create --resource-group myRG --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --ports 80 --dns-name-label myapp

**Restart Policies:**
- Always (default): Container always restarts
- Never: Container runs once only
- OnFailure: Restarts only on failure

**Environment Variables and Secure Values:**
Pass configuration through environment variables, with support for secure values that remain hidden in container properties.
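
A hedged sketch of passing plain and secure values at creation time (names and values are placeholders); secure values are not returned when you query the container group's properties:

```bash
az container create \
  --resource-group myRG \
  --name mycontainer \
  --image myregistry.azurecr.io/myapp:v1 \
  --environment-variables LOG_LEVEL=info \
  --secure-environment-variables DB_PASSWORD="$DB_PASSWORD" \
  --restart-policy OnFailure
```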

**Networking:**
Containers receive public IP addresses and fully qualified domain names (FQDN), or can be deployed into virtual networks for private connectivity.

ACI excels for batch processing, CI/CD build agents, and microservices testing scenarios.

Create solutions by using Azure Container Apps

Azure Container Apps is a fully managed serverless container service for running microservices and containerized applications. It abstracts away infrastructure management while providing powerful features for modern application development.

**Key Concepts:**

1. **Container Apps Environment**: This serves as a secure boundary around groups of container apps. Apps within the same environment share the same virtual network, logging configuration, and Dapr components.

2. **Revisions**: Container Apps uses a revision-based deployment model. Each time you update your container app configuration or container image, a new revision is created. You can run multiple revisions simultaneously for A/B testing or gradual rollouts.

3. **Scaling**: Container Apps supports automatic horizontal scaling based on HTTP traffic, CPU/memory usage, Azure Queue length, or custom KEDA scalers. Apps can scale to zero when not in use, reducing costs.

4. **Ingress**: You can expose your container apps via HTTP or TCP ingress. The platform handles TLS termination and provides built-in authentication options.

**Creating Container Apps:**

You can deploy Container Apps using Azure CLI, ARM templates, Bicep, or the Azure Portal. A typical deployment requires specifying the container image, resource allocation (CPU/memory), environment variables, and scaling rules.
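
A hedged CLI sketch covering environment creation, ingress, and scale limits (the containerapp CLI extension is assumed to be installed; all names are placeholders):

```bash
# Create the environment, then a container app that can scale to zero
az containerapp env create --name my-env --resource-group myRG --location eastus
az containerapp create \
  --name my-api \
  --resource-group myRG \
  --environment my-env \
  --image myregistry.azurecr.io/myapp:v1 \
  --target-port 80 \
  --ingress external \
  --min-replicas 0 \
  --max-replicas 5 \
  --cpu 0.5 --memory 1.0Gi
```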

**Key Features:**
- **Dapr Integration**: Built-in support for Dapr enables microservices patterns like service invocation, state management, and pub/sub messaging.
- **Secrets Management**: Securely store and reference sensitive configuration values.
- **Volume Mounts**: Attach Azure Files storage for persistent data.

**Use Cases:**
- API endpoints and microservices
- Background processing jobs
- Event-driven applications
- Web applications

Container Apps is ideal when you need Kubernetes-style orchestration capabilities but prefer a simpler, managed experience that handles infrastructure complexity automatically.

Create an Azure App Service Web App

Azure App Service Web App is a fully managed platform for building, deploying, and scaling web applications. It supports multiple programming languages including .NET, Java, Node.js, Python, and PHP.

To create an Azure App Service Web App, you have several options:

**Azure Portal Method:**
1. Navigate to the Azure Portal and select 'Create a resource'
2. Search for 'Web App' and click Create
3. Configure the basics: subscription, resource group, web app name, runtime stack, operating system, and region
4. Select an App Service Plan which determines the compute resources and pricing tier
5. Review and create the resource

**Azure CLI Method:**
Use the command: az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name myWebApp --runtime "DOTNETCORE:6.0"
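
This assumes the resource group and App Service plan already exist; a minimal sketch to create them first (names, location, and SKU are placeholders):

```bash
az group create --name myResourceGroup --location eastus
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup \
  --sku B1 --is-linux
```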

**Key Configuration Elements:**
- **App Service Plan:** Defines the compute resources (CPU, memory) and features available. Tiers range from Free up to Premium and Isolated.
- **Runtime Stack:** The programming language and version your application uses
- **Operating System:** Choose between Windows or Linux based on your requirements
- **Region:** Geographic location affecting latency and compliance

**Important Features:**
- Built-in auto-scaling and load balancing
- Continuous deployment integration with GitHub, Azure DevOps, and other repositories
- Custom domain and SSL certificate support
- Deployment slots for staging environments
- Application Insights integration for monitoring

**Deployment Options:**
- FTP/FTPS upload
- Git-based deployment
- ZIP deploy
- Docker container deployment
- Azure DevOps pipelines

After creation, you can access your web app via the default URL format: https://[webappname].azurewebsites.net. Configuration settings, connection strings, and application settings can be managed through the portal or programmatically through the Azure SDK.
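
For example, a hedged sketch of managing settings from the CLI (keys and values are illustrative):

```bash
# App settings become environment variables inside the app
az webapp config appsettings set --resource-group myResourceGroup --name myWebApp \
  --settings ASPNETCORE_ENVIRONMENT=Production

# Connection strings are stored separately and typed (SQLAzure, MySql, Custom, ...)
az webapp config connection-string set --resource-group myResourceGroup --name myWebApp \
  --connection-string-type SQLAzure --settings MyDb="<connection-string>"
```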

Configure and implement diagnostics and logging

Configuring and implementing diagnostics and logging in Azure compute solutions is essential for monitoring application health, troubleshooting issues, and maintaining operational visibility. Azure provides comprehensive tools to capture, store, and analyze diagnostic data from your applications and infrastructure.

Azure Monitor serves as the central platform for collecting metrics and logs from various Azure resources. It aggregates data from Application Insights, Log Analytics workspaces, and Azure diagnostics extensions. You can enable diagnostic settings on resources to route logs to storage accounts, Event Hubs, or Log Analytics workspaces.

For Azure App Service, you can enable application logging to capture trace information from your code. Configure logs through the Azure portal by navigating to Monitoring > App Service logs. You can choose between filesystem storage for temporary logging or blob storage for persistent retention. Enable Web Server Logging to capture HTTP request details and Failed Request Tracing for debugging specific errors.
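
The same logging options can be enabled and streamed from the CLI; a hedged sketch (resource names are placeholders):

```bash
# Enable application, web-server, detailed-error, and failed-request logging
az webapp log config --resource-group myRG --name myWebApp \
  --application-logging filesystem --web-server-logging filesystem \
  --detailed-error-messages true --failed-request-tracing true

# Stream the live log output
az webapp log tail --resource-group myRG --name myWebApp
```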

Application Insights provides deep application performance monitoring. Integrate it by installing the SDK in your application or enabling auto-instrumentation. It captures request rates, response times, failure rates, dependency calls, exceptions, and custom telemetry. Use the Kusto Query Language (KQL) to query logs and create custom dashboards.
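
For instance, a short KQL sketch against the Application Insights requests table, counting recent failures by operation:

```kusto
requests
| where timestamp > ago(1h) and success == false
| summarize failures = count() by operation_Name
| order by failures desc
```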

For Azure Functions, built-in logging integrates with Application Insights. Configure the host.json file to set logging levels and sampling rates. Use the ILogger interface in your function code to emit custom log messages with appropriate severity levels.
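
A hedged host.json sketch setting a default log level, a per-function override ('Function.MyFunction' is a placeholder category), and Application Insights sampling:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Function.MyFunction": "Debug"
    },
    "applicationInsights": {
      "samplingSettings": { "isEnabled": true }
    }
  }
}
```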

Azure Virtual Machines require the Azure Diagnostics extension to collect performance counters, event logs, and crash dumps. Configure through ARM templates or Azure CLI to specify which data sources to collect and where to store them.

Best practices include implementing structured logging with correlation IDs for distributed tracing, setting appropriate log retention policies to manage costs, creating alerts based on log queries for proactive monitoring, and using Log Analytics workspaces to centralize logs across multiple resources for unified analysis and querying capabilities.

Deploy code and containerized solutions to App Service

Azure App Service is a fully managed platform for building, deploying, and scaling web applications. Deploying code and containerized solutions to App Service involves several approaches and methods.

**Code Deployment Options:**

1. **ZIP Deploy**: Upload your application as a ZIP file through Azure CLI, REST API, or Kudu. This method packages your code and deploys it efficiently (see the CLI sketch after this list).

2. **Git Deployment**: Connect your local Git repository or external repositories like GitHub, Azure DevOps, or Bitbucket for continuous deployment. App Service pulls code automatically when changes are pushed.

3. **FTP/FTPS**: Traditional file transfer protocol for uploading application files to the wwwroot directory.

4. **Azure DevOps Pipelines**: Create CI/CD pipelines that build, test, and deploy your applications automatically.
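
A hedged ZIP-deploy sketch for option 1 above (package path and names are placeholders):

```bash
az webapp deploy --resource-group myRG --name myWebApp \
  --src-path ./app.zip --type zip
```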

**Container Deployment:**

1. **Web App for Containers**: Deploy Docker containers from Azure Container Registry, Docker Hub, or private registries. Configure the container image, port, and startup commands (see the sketch after this list).

2. **Multi-container Apps**: Use Docker Compose files to deploy multiple containers that work together as a single application.
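
A hedged sketch for option 1 above, creating a web app directly from a registry image (names are placeholders; the exact image flag name varies across CLI versions):

```bash
az webapp create --resource-group myRG --plan myLinuxPlan --name myContainerWebApp \
  --deployment-container-image-name myregistry.azurecr.io/myapp:v1
```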

**Key Configuration Elements:**

- **Deployment Slots**: Create staging environments to test deployments before swapping to production, enabling zero-downtime deployments.

- **Application Settings**: Configure environment variables, connection strings, and runtime settings through the Azure portal or CLI.

- **Deployment Center**: A centralized hub in Azure portal for configuring and managing deployment sources and CI/CD integration.

**Best Practices:**

- Use deployment slots for testing before production releases
- Implement health checks to verify deployment success
- Configure auto-scaling based on demand
- Enable diagnostic logging for troubleshooting
- Use managed identities for secure authentication to Azure resources

Whether deploying traditional code or containerized applications, App Service provides flexible, scalable hosting with built-in load balancing, SSL certificates, and integration with Azure services.

Configure App Service settings including TLS, API, and connections

Azure App Service settings configuration involves managing TLS/SSL, API settings, and connection strings to ensure secure and efficient application deployment.

**TLS/SSL:** In the Azure Portal, select your App Service and open the TLS/SSL settings blade. Enable the HTTPS Only option to redirect all HTTP requests to HTTPS, and set the minimum TLS version (1.0, 1.1, or 1.2) to enhance security, with TLS 1.2 being the recommended standard. Custom SSL certificates can be uploaded or purchased through Azure and bound to custom domains.

**API settings and CORS:** API settings control how your application exposes and consumes APIs. Through API Management integration, you can import your App Service as an API, apply policies, and manage versioning. CORS (Cross-Origin Resource Sharing) settings specify which origins are permitted to access your API endpoints; you define allowed origins, methods, and headers through the CORS blade in your App Service configuration.

**Connection strings and app settings:** Connection strings enable your application to connect to databases and other Azure services. In the Configuration blade, you can add connection strings with specific types such as SQLServer, MySQL, PostgreSQL, or Custom; these override values in your application configuration files when deployed. App settings work similarly, storing key-value pairs that your application can access as environment variables. Both support slot-specific configurations, meaning you can have different values for production and staging environments. For sensitive information, Azure Key Vault references can be used instead of storing secrets in plain text, providing an additional security layer for your application configurations.
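
These portal operations have CLI equivalents; a hedged sketch (names and values are placeholders):

```bash
# Enforce HTTPS and a minimum TLS version
az webapp update --resource-group myRG --name myWebApp --https-only true
az webapp config set --resource-group myRG --name myWebApp --min-tls-version 1.2

# Allow a specific origin through CORS
az webapp cors add --resource-group myRG --name myWebApp \
  --allowed-origins https://www.contoso.com

# Add a typed connection string
az webapp config connection-string set --resource-group myRG --name myWebApp \
  --connection-string-type SQLAzure --settings MyDb="<connection-string>"
```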

Implement autoscaling for App Service

Autoscaling in Azure App Service allows your application to automatically adjust resources based on demand, ensuring optimal performance and cost efficiency. This feature dynamically scales your app horizontally by adding or removing instances based on predefined rules or metrics.

There are two main autoscaling approaches in App Service:

**1. Automatic Scaling (Preview)**
This newer feature enables the platform to make scaling decisions based on HTTP traffic patterns. You configure minimum and maximum instance counts, and Azure handles the rest. It's ideal for applications with variable workloads.

**2. Custom Autoscale (Rules-Based)**
This approach uses Azure Monitor metrics to trigger scaling actions. You define scale-out and scale-in rules based on metrics like CPU percentage, memory usage, HTTP queue length, or custom metrics.

**Key Configuration Elements:**

- **Scale Conditions**: Define when scaling should occur, including default conditions and scheduled profiles for predictable load patterns.

- **Rules**: Specify metric thresholds, time aggregation, duration, and the scaling action (increase or decrease instance count).

- **Instance Limits**: Set minimum, maximum, and default instance counts to control costs and ensure availability.

**Implementation Steps:**

1. Navigate to your App Service in Azure Portal
2. Select 'Scale out (App Service plan)' under Settings
3. Choose between automatic scaling or custom autoscale
4. Configure rules specifying metric source, threshold values, and scaling actions
5. Set appropriate cooldown periods to prevent rapid scaling fluctuations
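
The same rules can be scripted with Azure Monitor autoscale commands; a hedged sketch pairing scale-out and scale-in rules (names and thresholds are placeholders):

```bash
# Attach an autoscale setting to the App Service plan
az monitor autoscale create --resource-group myRG --name myAutoscale \
  --resource myAppServicePlan --resource-type Microsoft.Web/serverfarms \
  --min-count 2 --max-count 10 --count 2

# Scale out on high CPU, and back in on low CPU
az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
  --condition "CpuPercentage > 70 avg 10m" --scale out 1
az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
  --condition "CpuPercentage < 30 avg 10m" --scale in 1
```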

**Best Practices:**

- Always configure both scale-out and scale-in rules to optimize costs
- Use appropriate cooldown periods (typically 5-10 minutes)
- Monitor scaling events through Activity Log
- Consider using scheduled scaling for predictable traffic patterns
- Test your scaling configuration under load conditions

Autoscaling requires Standard tier or higher App Service plans. The feature integrates with Azure Monitor for comprehensive observability of scaling operations.

Configure deployment slots for App Service

Azure App Service deployment slots are a powerful feature that enables you to deploy and test applications in separate environments before swapping them into production. Deployment slots are live apps with their own hostnames, available in Standard, Premium, and Isolated App Service plans.

To configure deployment slots, navigate to your App Service in the Azure Portal and select 'Deployment slots' under the Deployment section. Click 'Add Slot' to create a new slot, providing a name and optionally cloning settings from an existing slot.

Each deployment slot maintains its own configuration settings. You can designate settings as 'slot-specific' (sticky) or 'swappable'. Sticky settings remain with the slot during swaps, while swappable settings move with the application code. Common sticky settings include connection strings for staging databases, debugging configurations, and environment-specific app settings.

The swap operation exchanges the roles of source and target slots. Before performing a swap, Azure warms up the source slot by applying target slot settings, ensuring minimal downtime. You can perform manual swaps or configure auto-swap for continuous deployment scenarios.

Key configuration steps include:

1. Creating slots with meaningful names like 'staging' or 'testing'
2. Configuring slot-specific application settings using the 'Deployment slot setting' checkbox
3. Setting up traffic routing to gradually shift user traffic between slots using percentage-based routing
4. Implementing swap with preview (multi-phase swap) for validation before completing the swap

Best practices involve using staging slots to validate changes, implementing health checks to verify application readiness, and utilizing slot-specific connection strings to prevent staging environments from accessing production data.

Deployment slots support various deployment methods including Git, Azure DevOps, GitHub Actions, and ZIP deploy. You can also use Azure CLI or PowerShell commands like 'az webapp deployment slot swap' for automation in CI/CD pipelines, making deployment slots essential for achieving zero-downtime deployments and robust release management.
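
A hedged automation sketch covering slot creation, a sticky setting, and the swap (names are placeholders):

```bash
# Create a staging slot
az webapp deployment slot create --resource-group myRG --name myWebApp --slot staging

# --slot-settings marks the value as sticky to the slot
az webapp config appsettings set --resource-group myRG --name myWebApp --slot staging \
  --slot-settings ENVIRONMENT=Staging

# Swap staging into production
az webapp deployment slot swap --resource-group myRG --name myWebApp \
  --slot staging --target-slot production
```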

Create and configure an Azure Functions app

Azure Functions is a serverless compute service that enables you to run event-driven code without managing infrastructure. Creating and configuring an Azure Functions app involves several key steps and considerations.

**Creating a Function App:**
You can create a Function App through the Azure Portal, Azure CLI, PowerShell, or ARM templates. Essential configuration includes selecting a runtime stack (such as .NET, Node.js, Python, Java, or PowerShell), choosing a hosting plan, and specifying a storage account for function metadata.
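
A hedged CLI sketch for a Consumption-plan function app (names, region, and runtime are placeholders):

```bash
# The storage account holds function metadata and trigger state
az storage account create --name mystorageacct123 --resource-group myRG \
  --location eastus --sku Standard_LRS

az functionapp create --resource-group myRG --name myfuncapp \
  --storage-account mystorageacct123 --consumption-plan-location eastus \
  --runtime dotnet-isolated --functions-version 4
```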

**Hosting Plans:**
- **Consumption Plan**: Pay-per-execution model with automatic scaling, ideal for intermittent workloads
- **Premium Plan**: Offers pre-warmed instances, VNet connectivity, and unlimited execution duration
- **Dedicated (App Service) Plan**: Runs on dedicated VMs, suitable for long-running functions

**Configuration Settings:**
Application settings store connection strings, API keys, and environment variables. These are accessed through environment variables in your code. The host.json file configures global behaviors like logging, extensions, and function timeouts.

**Triggers and Bindings:**
Triggers define how functions are invoked (HTTP, Timer, Blob, Queue, Event Hub, Service Bus, Cosmos DB). Bindings provide declarative connections to input and output data sources, reducing boilerplate code.

**Security Configuration:**
Authentication can be configured using Azure Active Directory, OAuth providers, or function-level authorization keys. For HTTP triggers, you can set authorization levels to anonymous, function, or admin.

**Deployment Options:**
Functions can be deployed through Visual Studio, VS Code, Azure DevOps, GitHub Actions, ZIP deployment, or continuous deployment from source control.

**Monitoring:**
Application Insights integration provides telemetry, logging, and performance monitoring. Configure sampling rates and log levels through host.json.

**Scaling:**
The platform automatically scales based on trigger events. You can configure maximum instance counts and manage scale-out behavior through the portal or configuration files.

Implement Azure Functions input and output bindings

Azure Functions bindings provide a declarative way to connect your functions to other Azure services and external resources. Input bindings allow your function to receive data from external sources, while output bindings enable your function to send data to external destinations.

**Input Bindings:**
Input bindings read data from a source and pass it to your function as a parameter. Common input binding types include:
- **Blob Storage**: Read files from Azure Blob containers
- **Cosmos DB**: Query documents from a database
- **Queue Storage**: Receive messages from queues
- **Table Storage**: Retrieve table entities

To configure an input binding, you define it in the function.json file or use attributes in C#. For example, in the .NET isolated worker model a Blob input binding might look like: `[BlobInput("samples-workitems/{queueTrigger}")] string myBlob`

**Output Bindings:**
Output bindings write data to external services when your function completes. Common output binding types include:
- **Blob Storage**: Write files to blob containers
- **Queue Storage**: Add messages to queues
- **Cosmos DB**: Insert or update documents
- **SendGrid**: Send emails
- **Event Hubs**: Publish events

Output bindings are configured similarly to input bindings. In the isolated worker model you typically bind the function's return value, e.g. `[BlobOutput("output-container/{name}")]` applied to the method; the in-process C# model instead uses `out` parameters.
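
Combining the two, a hedged C# sketch in the .NET isolated worker model; queue and container names are illustrative assumptions:

```csharp
using Microsoft.Azure.Functions.Worker;

public class CopyBlob
{
    // The queue message text resolves {queueTrigger} in both blob paths
    [Function("CopyBlob")]
    [BlobOutput("output-container/{queueTrigger}")]
    public string Run(
        [QueueTrigger("workitems")] string message,
        [BlobInput("samples-workitems/{queueTrigger}")] string inputBlob)
    {
        // The return value is written to the output blob by the binding
        return inputBlob;
    }
}
```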

**Configuration Methods:**
1. **function.json**: Define bindings declaratively in JSON format
2. **Attributes/Decorators**: Use language-specific annotations in C#, Python, or Java
3. **Binding expressions**: Use dynamic values like `{queueTrigger}` to create flexible paths

**Key Benefits:**
- Reduces boilerplate code for connecting to services
- Handles connection management automatically
- Supports multiple bindings per function
- Enables clean separation of concerns

Bindings simplify development by abstracting away the complexity of service integration, letting developers focus on business logic rather than infrastructure connectivity code.

Implement function triggers using data operations, timers, and webhooks

Azure Functions triggers are mechanisms that cause a function to execute. Understanding how to implement triggers using data operations, timers, and webhooks is essential for Azure developers.

**Data Operation Triggers:**
These triggers respond to changes in data storage services. The most common include:

1. **Blob Storage Trigger**: Activates when a blob is added or modified in Azure Blob Storage. You configure the connection string and container path in function.json or through bindings.

2. **Cosmos DB Trigger**: Monitors a Cosmos DB container for inserts and updates using the change feed. It processes documents in batches and maintains lease information for checkpointing.

3. **Queue Storage Trigger**: Fires when messages arrive in an Azure Storage Queue, enabling asynchronous message processing.

**Timer Triggers:**
Timer triggers execute functions on a schedule using CRON expressions. The schedule is defined in the function.json file or through the TimerTrigger attribute. For example, "0 */5 * * * *" runs every five minutes. Timer triggers are ideal for scheduled tasks like cleanup jobs, report generation, or periodic data synchronization. The TimerInfo parameter provides information about the current execution, including whether the function is running late.
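
A hedged timer-trigger sketch in the .NET isolated worker model (the schedule and names are illustrative):

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ScheduledCleanup
{
    // Runs every five minutes; TimerInfo reports whether the run is late
    [Function("ScheduledCleanup")]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, FunctionContext context)
    {
        var logger = context.GetLogger("ScheduledCleanup");
        logger.LogInformation("Ran at {Now}; past due: {Late}", DateTime.UtcNow, timer.IsPastDue);
    }
}
```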

**Webhook Triggers:**
Webhook triggers are HTTP triggers configured to respond to webhook payloads from external services. They support:

1. **Generic Webhooks**: Accept POST requests with JSON or form data payloads.

2. **GitHub Webhooks**: Process repository events like pushes, pull requests, or issues.

3. **Slack Webhooks**: Handle slash commands and interactive components.

Webhooks require proper authentication configuration, including API keys or Azure AD integration for security.
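
A hedged HTTP-trigger sketch for receiving a webhook POST (isolated worker model; validation logic is omitted and names are illustrative):

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class GitHubWebhook
{
    [Function("GitHubWebhook")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        // Raw webhook payload; validate the signature/secret before trusting it
        string payload = await req.ReadAsStringAsync();
        return req.CreateResponse(HttpStatusCode.OK);
    }
}
```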

**Implementation Considerations:**
- Configure bindings in function.json or use attributes in C#
- Handle connection strings through application settings
- Implement proper error handling and retry policies
- Consider scaling implications for high-volume triggers

These trigger types enable event-driven architectures, allowing functions to respond dynamically to data changes, scheduled events, and external service notifications.
