Design infrastructure solutions
Design compute, application architecture, migrations, and network solutions.
Design infrastructure solutions is a critical domain of the Azure Solutions Architect Expert (AZ-305) certification that focuses on creating robust, scalable, and secure cloud architectures on Microsoft Azure. This competency encompasses several key areas that architects must master: designing compute solutions, application architectures, migrations, and network solutions.
Concepts covered:

Compute solutions
- Specify components of a compute solution based on workload requirements
- Recommend a virtual machine-based solution
- Recommend a container-based solution
- Recommend a serverless-based solution
- Recommend a compute solution for batch processing

Application architecture
- Recommend a messaging architecture
- Recommend an event-driven architecture
- Recommend a solution for API integration
- Recommend a caching solution for applications
- Recommend an application configuration management solution
- Recommend an automated deployment solution for applications

Migrations
- Evaluate a migration solution that leverages the Microsoft Cloud Adoption Framework for Azure
- Evaluate on-premises servers, data, and applications for migration
- Recommend a solution for migrating workloads to IaaS and PaaS
- Recommend a solution for migrating databases
- Recommend a solution for migrating unstructured data

Network solutions
- Recommend a connectivity solution that connects Azure resources to the internet
- Recommend a connectivity solution that connects Azure resources to on-premises networks
- Recommend a solution to optimize network performance
- Recommend a solution to optimize network security
- Recommend a load-balancing and routing solution
AZ-305 - Design infrastructure solutions Example Questions
Test your knowledge of the Design infrastructure solutions domain
Question 1
An insurance claims processing company is deploying a new fraud investigation platform on Azure that analyzes suspicious claims using containerized microservices. The platform consists of seven services: claims intake API, document scanning service, data enrichment service, ML-based fraud detection engine, case management service, evidence archival service, and reporting dashboard.

- The claims intake API receives 2,000-8,000 requests daily with unpredictable submission patterns.
- The document scanning service must process PDF and image files stored in Azure Blob Storage, with each scan taking 5-15 minutes and requiring 2 vCPUs with 8 GB RAM.
- The data enrichment service calls external third-party APIs to validate claimant information and needs consistent outbound IP addresses for API provider whitelisting.
- The fraud detection engine runs TensorFlow models requiring GPU acceleration (NVIDIA T4 GPUs) and must scale from 2 to 12 instances based on the depth of the Azure Storage Queue containing pending analysis jobs.
- The case management service maintains long-running WebSocket connections for investigator dashboards and requires session persistence.
- The evidence archival service runs scheduled jobs every 6 hours to compress and transfer completed case files to cold storage.

The company requires granular network segmentation between services using subnet-level isolation and custom NSG rules to meet SOC 2 Type II compliance. The operations team needs the ability to deploy specific services to dedicated node pools with different VM SKUs (CPU-optimized for APIs, GPU-enabled for ML, memory-optimized for case management). The platform must support Azure Policy integration for enforcing resource tags and deployment guardrails. The development team uses CI/CD pipelines with Helm charts and requires support for init containers to perform database schema migrations before service startup. The company wants managed control plane operations, automated Kubernetes version upgrades, and integration with Azure Monitor Container Insights for performance monitoring.

Which Azure container platform should the solutions architect recommend for this fraud investigation system?
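The requirements here (dedicated node pools with mixed VM SKUs, subnet-level isolation with custom NSGs, Azure Policy integration, Helm, init containers, and automated Kubernetes upgrades) point toward Azure Kubernetes Service (AKS). To make the fraud detection engine's scaling rule concrete, the minimal sketch below computes the 2-12 instance target from the Azure Storage Queue backlog. On AKS this rule would normally be expressed declaratively with a KEDA azure-queue scaler rather than hand-rolled; the queue name and jobs-per-replica target here are illustrative assumptions, not values from the scenario.

```python
# Sketch: derive a desired replica count (2..12) for the fraud detection
# engine from the approximate depth of the Azure Storage Queue that holds
# pending analysis jobs. On AKS this logic is usually delegated to KEDA;
# this standalone version only illustrates the arithmetic.
from azure.storage.queue import QueueClient

MIN_REPLICAS = 2
MAX_REPLICAS = 12
JOBS_PER_REPLICA = 5  # assumed target backlog per GPU instance (illustrative)

def desired_replicas(conn_str: str, queue_name: str = "fraud-jobs") -> int:
    queue = QueueClient.from_connection_string(conn_str, queue_name)
    depth = queue.get_queue_properties().approximate_message_count
    # Scale proportionally to the backlog, clamped to the 2-12 instance range.
    wanted = -(-depth // JOBS_PER_REPLICA)  # ceiling division
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))
```

In a real deployment, a KEDA ScaledObject with minReplicaCount 2 and maxReplicaCount 12 would capture the same behavior without any polling code in the services themselves.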
Question 2
A digital publishing company operates a content delivery platform on Azure that serves 12 million readers across 35 countries with personalized article recommendations. The application uses Azure SQL Database to store author profiles, article metadata, content categories, and editorial guidelines totaling 450GB. The recommendation engine runs on Azure Container Instances and generates personalized content feeds by querying reader preference data combined with editorial policy rules.

Each recommendation request retrieves content filtering policies based on reader geographic location, subscription tier, and content maturity ratings by executing queries across 14 database tables that take 1,100-1,350ms to complete. The platform processes 480,000 recommendation requests hourly during morning reading peaks (6 AM - 10 AM across global time zones), with analytics revealing that 94% of requests involve identical filtering for 410 standard reader profile combinations (country-tier-rating permutations).

Editorial policy rules are revised semi-annually during June and December editorial board meetings when content standards are reviewed, with mid-cycle policy adjustments occurring 7-9 times annually when regulatory requirements change in specific jurisdictions. Database performance metrics show that content filtering queries account for 76% of total database resource consumption during peak reading hours, creating resource contention with article publishing workflows that require transactional consistency for content versioning and audit trails.

The editorial team accepts policy data reflecting changes within 60 days since content moderators verify current policies through the editorial management system before approving sensitive content. Reader experience standards mandate that personalized feeds must render within 320ms to prevent reader abandonment and maintain engagement metrics. The infrastructure team must achieve a 72% reduction in database resource utilization while supporting the globally distributed reader access patterns and ensuring cache consistency across container instance scale-out events.

Which caching strategy should the Azure Solutions Architect implement for this content delivery platform?
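The 94% repeat rate across only 410 profile combinations, combined with a 60-day staleness tolerance, is a textbook fit for a cache-aside pattern in front of Azure SQL Database, typically using Azure Cache for Redis. The minimal sketch below shows the lookup path; the Redis endpoint, the one-day TTL, the key layout, and the load_policies_from_sql helper are all illustrative assumptions standing in for the real 14-table query.

```python
# Sketch: cache-aside lookup for content filtering policies keyed by the
# country-tier-rating profile combination from the scenario. Endpoint,
# TTL, and the SQL helper are illustrative placeholders.
import json
import redis

cache = redis.Redis(host="mycache.redis.cache.windows.net",
                    port=6380, ssl=True, password="<access-key>")

POLICY_TTL_SECONDS = 24 * 60 * 60  # 1 day, well inside the 60-day staleness budget

def load_policies_from_sql(country: str, tier: str, rating: str) -> dict:
    # Placeholder for the 1,100-1,350 ms query across 14 tables.
    raise NotImplementedError

def get_filtering_policies(country: str, tier: str, rating: str) -> dict:
    key = f"policy:{country}:{tier}:{rating}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: ~1 ms instead of ~1,100+ ms
    policies = load_policies_from_sql(country, tier, rating)
    cache.setex(key, POLICY_TTL_SECONDS, json.dumps(policies))
    return policies
```

With roughly 410 hot keys the working set is tiny, so even a modest Redis tier can absorb the 480,000 hourly requests, keep feed rendering inside the 320ms budget, and remain consistent across container instance scale-out events because all instances share the same external cache.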
Question 3
A multinational pharmaceutical company operates a clinical trial data management platform that processes patient data from 150 hospitals across 8 countries. The architecture includes Azure Container Apps for data processing microservices (12 services), Azure API Management for hospital integrations, Azure Database for PostgreSQL Flexible Server for clinical data, and Azure Front Door for global traffic distribution. The engineering team uses GitLab with 7 repositories: microservices code, API policies, database migration scripts, infrastructure templates, monitoring configurations, security policies, and compliance documentation.

Microservices deploy 6-8 times weekly, API policies update bi-weekly, database migrations execute monthly, and infrastructure changes occur quarterly. The regulatory team requires 21 CFR Part 11 and GDPR compliance with complete deployment lineage tracking retained for 25 years. Database schema migrations must execute in a maintenance window (Sunday 1 AM - 3 AM UTC) and complete before microservice deployments.

The clinical operations team mandates that deployments affecting patient data processing must include validation gates checking data processing accuracy against 1,000 test cases (requiring 90 minutes) and referential integrity checks across 45 database tables. If validation fails, the deployment must pause and require approval from both the data quality manager and the clinical informatics lead before proceeding or rolling back.

Each country operates isolated PostgreSQL instances, Container Apps environments, and API Management instances due to data sovereignty regulations. The platform must support deploying emergency security patches to individual countries during business hours while scheduled feature releases deploy globally during off-peak hours. The security team requires workload identity federation with Azure AD, Key Vault integration with customer-managed keys per country, and deployment audit logs forwarded to country-specific Log Analytics workspaces with 25-year retention. The solution must handle repository-specific schedules, since infrastructure updates should not trigger microservice recompilation, and database migrations must occur before application deployments.

Which automated deployment solution architecture should you design?
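Whatever concrete toolchain is chosen (the scenario suggests GitLab CI/CD with per-repository pipelines), the hard part is the ordering and gating logic: migrations run first in the maintenance window, then the 90-minute validation gate, then a dual-approval pause on failure before anything proceeds or rolls back. The Python sketch below models that control flow only; every function body is an illustrative placeholder for what would be pipeline jobs and manual approval steps in the real system.

```python
# Sketch: stage ordering and the dual-approval validation gate from the
# scenario, modeled as plain Python. All stage functions are placeholders
# for GitLab pipeline jobs; none of this is a real deployment API.
from dataclasses import dataclass

@dataclass
class ValidationResult:
    test_cases_passed: int  # out of the 1,000 accuracy test cases
    integrity_ok: bool      # referential integrity across 45 tables

def run_database_migrations(country: str) -> None:
    pass  # placeholder: runs in the Sunday 1-3 AM UTC window

def run_validation_gate(country: str) -> ValidationResult:
    # Placeholder: would execute the ~90-minute accuracy and integrity checks.
    return ValidationResult(test_cases_passed=1000, integrity_ok=True)

def await_approvals(required_roles: list[str]) -> str:
    # Placeholder: blocks until BOTH approvers respond ("proceed" or "rollback").
    return "proceed"

def rollback(country: str) -> None:
    pass  # placeholder: reverts migrations and application versions

def deploy_microservices(country: str) -> None:
    pass  # placeholder: Container Apps revision rollout for this country

def run_country_deployment(country: str) -> None:
    run_database_migrations(country)       # must complete before app deploys
    result = run_validation_gate(country)  # 1,000 test cases + 45-table checks
    if result.test_cases_passed < 1000 or not result.integrity_ok:
        # Deployment pauses; both roles must approve before continuing.
        decision = await_approvals(["data quality manager",
                                    "clinical informatics lead"])
        if decision != "proceed":
            rollback(country)
            return
    deploy_microservices(country)
```

Running this flow once per country, rather than once globally, matches the data sovereignty constraint: emergency patches can target a single country's isolated environment while scheduled releases fan the same flow out across all eight.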