124 Practice Questions & Answers
A company is designing a multi-region disaster recovery solution for a mission-critical application. They need RPO of 1 hour and RTO of 15 minutes. Which combination of services best meets these requirements?
-
A
AWS Backup with cross-region copies and manual EC2 recovery procedures
-
B
S3 cross-region replication with point-in-time recovery enabled
-
C
Amazon Aurora Global Database with automated failover and read replicas in secondary regions
✓ Correct
-
D
DynamoDB with on-demand billing and eventual consistency across regions
Explanation
Aurora Global Database provides sub-second replication for RPO and automatic failover achieving the 15-minute RTO requirement, making it ideal for mission-critical applications with strict recovery requirements.
An organization needs to implement a tagging strategy for cost allocation across 50+ AWS accounts in an AWS Organization. What is the most scalable approach?
-
A
Deploy CloudFormation StackSets with tagging parameters across all accounts
-
B
Configure AWS Config rules to enforce tagging and remediate non-compliant resources
-
C
Use AWS Systems Manager to apply tags via automation documents to all resources
-
D
Implement tag policies in AWS Organizations and use Cost Allocation Tags for billing reports
✓ Correct
Explanation
Tag policies in AWS Organizations provide centralized enforcement of tagging standards across all accounts, while Cost Allocation Tags enable billing and cost center tracking at scale without manual intervention.
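To make the correct answer concrete, here is a minimal sketch of an Organizations tag policy, expressed as a Python dict; the tag key, allowed values, and enforced resource types are illustrative assumptions.

```python
import json

# Sketch of an AWS Organizations tag policy enforcing a "CostCenter" tag.
# The "@@assign" operator and the "tags" wrapper follow tag-policy syntax;
# the specific key, values, and resource types are assumptions.
tag_policy = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["Finance", "Engineering", "Marketing"]},
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}

print(json.dumps(tag_policy, indent=2))
```

Once attached to the organization root or an OU, the policy standardizes tag capitalization and values; the same `CostCenter` key is then activated as a Cost Allocation Tag in the billing console.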
A financial services company must maintain strict data residency requirements for customer PII. Which architectural approach best ensures compliance?
-
A
Deploy NAT gateways in each region to control outbound traffic to compliant endpoints
-
B
Configure Route 53 geolocation routing to automatically route requests to compliant regions
-
C
Implement data classification, use region-specific S3 buckets with bucket policies denying cross-region replication, and apply service control policies (SCPs) to restrict service usage by region
✓ Correct
-
D
Use VPC endpoints to route all traffic through a single region with encryption in transit
Explanation
This multi-layered approach combines data classification, S3 bucket policies for data residency control, and Service Control Policies (SCPs) to enforce regional restrictions at the AWS account level, providing the strongest compliance assurance.
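A sketch of the SCP half of this answer, as a Python dict: a deny-by-default region restriction using the `aws:RequestedRegion` condition key. The approved regions and the `NotAction` list of exempted global services are illustrative assumptions.

```python
import json

# Sketch of a Service Control Policy denying all actions outside two
# approved regions. The NotAction list exempts global services (this
# list is illustrative and incomplete).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```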
A company is experiencing unpredictable traffic spikes to their web application hosted on EC2 instances behind an Application Load Balancer. Which strategy minimizes costs while maintaining performance?
-
A
Migrate to AWS Lambda with API Gateway for automatic scaling without infrastructure management
-
B
Implement AWS Savings Plans and manually increase instance count during expected traffic periods
-
C
Purchase 3-year Reserved Instances for baseline load and add On-Demand instances for spikes
-
D
Use a mix of On-Demand and Spot instances with Auto Scaling groups and target tracking scaling policies
✓ Correct
Explanation
Combining Spot instances (for cost savings) with On-Demand instances and Auto Scaling with target tracking policies provides flexibility, cost optimization, and automatic scaling to handle unpredictable spikes.
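As a sketch of what option D looks like in practice, assuming illustrative names and numbers: a `MixedInstancesPolicy` fragment setting the Spot/On-Demand split for the Auto Scaling group, and the payload for a target-tracking scaling policy.

```python
# Sketch of two API payloads behind this answer. Group name, floor,
# percentages, and the 60% CPU target are illustrative assumptions.
mixed_instances = {
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 2,                  # guaranteed On-Demand floor
        "OnDemandPercentageAboveBaseCapacity": 30,  # 70% Spot above the floor
        "SpotAllocationStrategy": "capacity-optimized",
    }
}

# Arguments one might pass to the Auto Scaling put_scaling_policy call.
scaling_policy = {
    "AutoScalingGroupName": "web-app-asg",
    "PolicyName": "cpu-target-60",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
}
```

Target tracking then adds or removes instances automatically to hold average CPU near the target, drawing from Spot capacity first above the On-Demand floor.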
You are designing a hybrid cloud solution where on-premises systems must access AWS resources with minimal latency. Which AWS service provides the lowest latency connection?
-
A
AWS Direct Connect with a dedicated network connection from your data center
✓ Correct
-
B
AWS Global Accelerator to route traffic through the AWS backbone network
-
C
VPC endpoints for private access to AWS services over the internet
-
D
AWS Site-to-Site VPN with BGP dynamic routing
Explanation
AWS Direct Connect provides a dedicated physical connection between your on-premises infrastructure and AWS, eliminating internet variability and offering the lowest and most consistent latency.
A startup is building a serverless microservices architecture with Lambda functions that must process 100,000 events per second during peak load. What is the primary architectural concern?
-
A
DynamoDB write throttling and the requirement for on-demand billing mode
-
B
API Gateway request rate limits and the need for caching strategies
-
C
CloudWatch Logs ingestion rates and the cost of storing logs for all invocations
-
D
Lambda concurrent execution limits and the need for quota increase requests with AWS Support
✓ Correct
Explanation
AWS Lambda has default concurrent execution limits (1,000 per account/region), and processing 100,000 events/second would require a significantly higher limit via AWS Support quota increase, making this the primary architectural concern.
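A quick back-of-envelope check shows why concurrency dominates this scenario; the 200 ms average function duration is an assumption. Required concurrency is roughly the invocation rate times the average duration.

```python
# Concurrency estimate: required concurrency ≈ rate × average duration.
# The 200 ms duration is an illustrative assumption.
rate_per_second = 100_000
avg_duration_s = 0.2
required_concurrency = int(rate_per_second * avg_duration_s)

default_limit = 1_000  # default concurrent executions per account per Region
print(required_concurrency)                 # 20,000 concurrent executions
print(required_concurrency > default_limit) # far beyond the default quota
```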
Your organization uses AWS Organizations with multiple member accounts. You need to implement a centralized logging solution that captures CloudTrail, VPC Flow Logs, and application logs. What is the most efficient approach?
-
A
Create an Amazon CloudWatch Logs group in each account and use cross-account log subscription filters to forward to a central account
-
B
Deploy CloudWatch agent to all EC2 instances configured to ship logs to a central account via IAM roles
-
C
Configure CloudTrail to log to a central S3 bucket in the master account with bucket policies allowing cross-account access
✓ Correct
-
D
Set up AWS Systems Manager OpsCenter to aggregate logs from all accounts and regions automatically
Explanation
CloudTrail's native cross-account, cross-region S3 bucket logging capability is the most efficient for centralized logging at organization scale, requiring minimal configuration compared to other solutions.
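For reference, a sketch of the bucket policy the central CloudTrail bucket needs: CloudTrail first checks the bucket ACL, then writes log objects with `bucket-owner-full-control`. The bucket name is a placeholder.

```python
import json

# Sketch of the S3 bucket policy for a central CloudTrail logging bucket.
# The bucket name is a placeholder; the two statements follow the standard
# CloudTrail ACL-check/write pattern.
bucket = "org-cloudtrail-logs"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```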
A company has a legacy monolithic application that processes large XML files and generates reports. The application runs on EC2 and has unpredictable compute requirements. Which refactoring approach provides the best cost optimization?
-
A
Break the XML parsing into Lambda functions triggered by S3 events and use Step Functions for orchestration
✓ Correct
-
B
Migrate to AWS Batch for scheduled processing with Spot instances for compute
-
C
Refactor to microservices on ECS with Application Load Balancer and reserved capacity
-
D
Containerize the application using ECS Fargate and use Auto Scaling based on SQS queue depth
Explanation
Breaking the monolith into Lambda functions triggered by S3 events (serverless, event-driven) with Step Functions for orchestration provides automatic scaling, pay-per-invocation pricing, and eliminates idle compute costs for unpredictable workloads.
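A sketch of what the Step Functions orchestration might look like in Amazon States Language: a Map state fans the XML chunks out to a parsing Lambda, then a report step runs. The function ARNs and state names are placeholders.

```python
import json

# Sketch of an Amazon States Language definition for the refactored
# pipeline. Lambda ARNs (REGION/ACCOUNT) are placeholders.
state_machine = {
    "StartAt": "ParseXml",
    "States": {
        "ParseXml": {
            "Type": "Map",                # fan out over the XML chunks
            "ItemsPath": "$.chunks",
            "Iterator": {
                "StartAt": "ParseChunk",
                "States": {
                    "ParseChunk": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:parse-chunk",
                        "End": True,
                    }
                },
            },
            "Next": "GenerateReport",
        },
        "GenerateReport": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:generate-report",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

An S3 event rule would start an execution per uploaded file, so compute is consumed only while a file is actually being processed.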
You are designing a solution for a healthcare organization that must encrypt patient data both at rest and in transit, and maintain audit trails. Which combination of services is most appropriate?
-
A
Glacier with vault lock policies and S3 bucket versioning enabled for audit trails
-
B
S3 with SSE-S3 encryption, TLS for transport, and CloudTrail for auditing
-
C
EBS volumes with default encryption, VPC with security groups, and CloudWatch monitoring
-
D
RDS with AWS KMS encryption, SSL/TLS connections, CloudTrail logging, and optionally database activity monitoring
✓ Correct
Explanation
RDS with KMS encryption provides encryption at rest with customer-managed keys, SSL/TLS ensures encrypted transport, CloudTrail tracks API calls, and enhanced monitoring/database activity monitoring provides detailed audit trails for compliance.
A distributed application processes IoT sensor data from millions of devices worldwide. Latency must be minimized for each regional deployment. What architectural pattern is most suitable?
-
A
Implement DynamoDB global tables with eventual consistency and on-demand billing
-
B
Use Amazon Kinesis Data Streams in each region with cross-region replication to a central analytics cluster
-
C
Lambda@Edge with CloudFront to process data at edge locations closest to users
-
D
Deploy regional microservices using ECS in each AWS region with Route 53 latency-based routing
✓ Correct
Explanation
Deploying regional microservices (ECS) in multiple AWS regions with Route 53 latency-based routing ensures requests are served from the geographically nearest region, minimizing latency while keeping data processing local.
An enterprise must implement a compliance solution where all data modifications to critical databases are tracked and immutable. Which AWS service should be the primary component?
-
A
RDS with automated backups and AWS Backup with vault lock
-
B
DynamoDB Streams with Lambda for event capture and S3 for immutable storage
-
C
EventBridge with SQS for capturing database change events and archiving to Glacier
-
D
Amazon QLDB (Quantum Ledger Database) with cryptographic verification
✓ Correct
Explanation
Amazon QLDB provides a cryptographically verifiable, immutable ledger with complete history of all transactions, making it ideal for compliance requirements where data modification tracking and immutability are mandatory.
A company wants to implement automated remediation for non-compliant resources detected by AWS Config. What is the most efficient approach?
-
A
Deploy AWS Systems Manager Automation documents that run on a schedule to fix compliance violations
-
B
Write custom Lambda functions triggered by Config rules and manually invoke them when violations occur
-
C
Use AWS Config remediation actions with SSM Documents to automatically remediate non-compliant resources
✓ Correct
-
D
Create CloudWatch Events rules that trigger SNS notifications for manual remediation by the operations team
Explanation
AWS Config's native remediation actions feature allows automatic, immediate remediation using Systems Manager (SSM) documents when violations are detected, eliminating manual intervention and reducing compliance risk.
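A sketch of the remediation configuration that wires a Config rule to an SSM automation document (the shape passed to `put_remediation_configurations`); the rule name, document, and role ARN are illustrative.

```python
# Sketch of a Config remediation configuration. Rule name, SSM document,
# and role ARN are illustrative placeholders.
remediation = {
    "ConfigRuleName": "s3-bucket-public-read-prohibited",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-DisableS3BucketPublicReadWrite",
    "Automatic": True,                 # remediate without human approval
    "MaximumAutomaticAttempts": 3,
    "RetryAttemptSeconds": 60,
    "Parameters": {
        "AutomationAssumeRole": {
            "StaticValue": {
                "Values": ["arn:aws:iam::ACCOUNT:role/config-remediation-role"]
            }
        }
    },
}
```

With `Automatic` set to `True`, Config invokes the SSM document as soon as the rule flags a resource non-compliant, closing the loop without operator involvement.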
You are designing a database solution for a SaaS application serving multiple tenants with data isolation requirements. Which approach best balances isolation, performance, and operational overhead?
-
A
Separate RDS instances for each tenant with consolidated backups and patching via AWS Backup
-
B
Single DynamoDB table with tenant ID as partition key and resource-based policies for access control
-
C
Aurora serverless clusters per tenant with auto-scaling and on-demand pricing
-
D
Single Aurora cluster with separate schemas per tenant and row-level security policies
✓ Correct
Explanation
A single Aurora cluster with separate schemas per tenant and row-level security (RLS) provides strong data isolation, better resource utilization, simplified operations, and cost efficiency compared to dedicated instances per tenant.
A company's application experiences variable traffic with peak loads requiring 10x normal capacity. Which instance purchasing strategy minimizes overall costs?
-
A
Purchase Reserved Instances for 100% baseline capacity and use On-Demand for peak spikes
-
B
Use Spot instances for 80% of peak capacity and On-Demand for guaranteed baseline
-
C
Purchase Reserved Instances for 30% baseline, use Spot for variable load, and On-Demand for remaining requirements
✓ Correct
-
D
Use Compute Savings Plans for all capacity with maximum upfront payment
Explanation
This tiered approach optimizes cost by using cheaper Reserved Instances for predictable baseline (30%), Spot instances for the majority of variable load (cost-effective), and On-Demand for the remaining guaranteed capacity, minimizing overall expenditure.
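The arithmetic behind the explanation can be sketched with assumed (not real) hourly rates and a 30/50/20 split:

```python
# Illustrative cost comparison. All rates and the fleet size are
# assumptions, not AWS prices.
on_demand_rate = 0.10   # $/instance-hour
reserved_rate = 0.06    # ~40% discount vs On-Demand
spot_rate = 0.03        # ~70% discount vs On-Demand
hours = 730             # one month
fleet = 100             # average instances running

all_on_demand = fleet * hours * on_demand_rate
tiered = (0.30 * fleet * hours * reserved_rate    # Reserved baseline
          + 0.50 * fleet * hours * spot_rate      # Spot for variable load
          + 0.20 * fleet * hours * on_demand_rate)  # On-Demand remainder

print(f"all on-demand: ${all_on_demand:,.0f}, tiered: ${tiered:,.0f}")
```

Under these assumptions the tiered mix costs roughly half of an all-On-Demand fleet, which is the intuition the answer relies on.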
You need to implement a solution where on-premises applications can securely access secrets stored in AWS Secrets Manager. What is the most appropriate method?
-
A
Use AWS Systems Manager Session Manager to access Secrets Manager through EC2 instances
-
B
Access AWS Secrets Manager over AWS Direct Connect and use IAM roles with resource-based policies
✓ Correct
-
C
Rotate access keys frequently and store them in a configuration file on-premises with encryption
-
D
Create IAM users for each on-premises application and distribute long-term access keys
Explanation
Using AWS Direct Connect provides a dedicated, secure network connection to Secrets Manager, and IAM roles with resource-based policies enforce the principle of least privilege without requiring long-term credentials on-premises.
A global company must ensure that customer data never leaves specific geographic regions due to regulatory requirements. Which service combination enforces this at the architecture level?
-
A
VPC endpoints with resource policies and CloudFront with geographic restrictions
-
B
Route 53 geolocation routing and Application Load Balancer security groups filtering by IP CIDR blocks
-
C
IAM policies with conditions restricting ec2:RunInstances to specific regions and S3 encryption with region-specific keys
-
D
S3 bucket policies with explicit deny for PutObject actions from other regions and AWS SCPs restricting service usage by region
✓ Correct
Explanation
Combining S3 bucket policies that deny cross-region operations with AWS Service Control Policies (SCPs) that restrict regional service usage provides enforcement at both the resource and account levels for regulatory compliance.
An application requires real-time analytics on streaming data with complex windowing and aggregations. Which service is most appropriate?
-
A
DynamoDB Streams with Lambda for event processing and ElastiCache for caching aggregations
-
B
AWS Glue for batch ETL processing with Athena for analytics
-
C
EventBridge with SQS and SNS for fan-out and manual aggregation processing
-
D
Amazon Kinesis Data Analytics with SQL queries for real-time processing
✓ Correct
Explanation
Amazon Kinesis Data Analytics is purpose-built for real-time streaming analytics with native support for windowing, aggregations, and continuous SQL queries without infrastructure management.
You are designing a disaster recovery solution for a critical application with a 4-hour RTO and 1-hour RPO. The primary region has an RDS database with 500 GB of data. What is the most cost-effective approach?
-
A
DMS (Database Migration Service) for continuous replication to secondary RDS with application-level failover
-
B
RDS with automated backups to S3 and periodic snapshots for restore in secondary region
✓ Correct
-
C
RDS Multi-AZ in primary region with manual read replica creation in secondary region for failover
-
D
Aurora Global Database with continuous replication and automated failover
Explanation
Given the relatively generous 4-hour RTO and 1-hour RPO, RDS automated backups with periodic cross-region snapshot copies provide cost-effective DR that meets the requirements without the premium of Global Database or DMS.
A company processes sensitive documents using an ML pipeline that requires GPU compute. How should they architect this for security and compliance?
-
A
Use EC2 instances in private subnets with VPC endpoints for S3 and services, encrypted EBS volumes, instance store encryption, and VPC Flow Logs for monitoring
✓ Correct
-
B
Launch GPU EC2 instances in public subnets with security groups restricting inbound traffic and use EBS encryption
-
C
Deploy SageMaker notebook instances with attached IAM role and S3 bucket policies for data access control
-
D
Use AWS Batch with GPU compute environments in private subnets and CloudWatch monitoring for all activities
Explanation
Private subnets with VPC endpoints eliminate internet exposure, EBS encryption protects data at rest, instance store encryption secures temporary processing, and VPC Flow Logs provide audit trails for compliance requirements.
An organization wants to implement fine-grained access control for AWS resources across 100+ accounts. What is the most scalable solution?
-
A
Create cross-account IAM roles in each account with trust relationships to a central account
-
B
Use AWS Identity Center (SSO) with permission sets and AWS Organizations for centralized access management
✓ Correct
-
C
Implement federation with an external identity provider and maintain separate IAM policies per account
-
D
Use Cognito user pools for authentication and IAM roles for authorization across accounts
Explanation
AWS Identity Center (successor to AWS SSO) with permission sets provides centralized identity and access management across multiple accounts at scale without maintaining separate IAM configurations per account.
A financial services company needs to implement multi-factor authentication (MFA) for all AWS API calls while maintaining programmatic access for applications. Which approach is most secure?
-
A
Enable MFA for all IAM users and require physical MFA devices for console access and programmatic API calls
-
B
Implement MFA at the VPC edge using a bastion host and SSH key pairs for application authentication
-
C
Require MFA when obtaining temporary security credentials from AWS STS, and have applications use those MFA-validated temporary credentials
✓ Correct
-
D
Use AWS Secrets Manager to store API credentials with automatic rotation and require MFA for secret retrieval
Explanation
Using AWS STS (Security Token Service) to generate temporary credentials with MFA validation allows both secure programmatic access (without requiring physical devices) and strong authentication for human users.
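The standard building block for this pattern is a policy statement that denies everything when MFA is absent; a sketch follows (the `NotAction` carve-outs, an illustrative list, let a user bootstrap their own MFA device and request a session token).

```python
import json

# Sketch of an IAM statement denying all actions without MFA, using the
# BoolIfExists form so unauthenticated-context requests are also denied.
# The NotAction carve-out list is an illustrative assumption.
deny_without_mfa = {
    "Sid": "DenyAllWithoutMFA",
    "Effect": "Deny",
    "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "sts:GetSessionToken",
    ],
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}

print(json.dumps(deny_without_mfa, indent=2))
```

A caller then passes their MFA device serial and token code to `sts:GetSessionToken`, and the resulting temporary credentials carry `aws:MultiFactorAuthPresent = true`, satisfying the condition for all subsequent API calls.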
You are designing an architecture for a machine learning pipeline that must process 10 terabytes of training data monthly with auto-scaling. What service orchestrates this most efficiently?
-
A
AWS Step Functions with SageMaker training jobs and parallelized data preprocessing using Glue
✓ Correct
-
B
AWS Batch with containerized training jobs running on spot instances managed by a job queue
-
C
Amazon EMR clusters with Spark for data processing and SageMaker for model training
-
D
EC2 Auto Scaling groups with custom training scripts and manual job submission to queue
Explanation
Step Functions orchestrates the entire ML pipeline, integrating Glue for distributed data preprocessing and SageMaker for training with automatic resource scaling, providing the most managed and efficient approach.
A company must implement zero-trust security for accessing on-premises applications from AWS. Which architectural approach best implements this principle?
-
A
EC2 instances with public IPs and restrictive security groups allowing only specific source IPs
-
B
VPN connection to on-premises network with IP whitelisting rules in security groups
-
C
AWS AppConfig for managing access policies and CloudWatch for monitoring all connections
-
D
AWS Systems Manager Session Manager for SSH/RDP sessions with identity verification and activity logging
✓ Correct
Explanation
Systems Manager Session Manager enforces zero-trust by verifying identity through IAM, encrypting sessions, disabling direct SSH/RDP access, and logging all activity for audit compliance without network-level trust.
You need to design a solution where developers can deploy applications to production without direct AWS console access. What is the most secure approach?
-
A
Create temporary IAM credentials for developers with 4-hour expiration and require approval via SNS notifications
-
B
Grant developers IAM power-user policies and track usage through CloudTrail for compliance auditing
-
C
Deploy applications through AWS Service Catalog with pre-approved CloudFormation templates and resource policies
-
D
Use AWS CodePipeline with CodeCommit, CodeBuild, and CodeDeploy with IAM roles limiting permissions to required resources only
✓ Correct
Explanation
Using AWS CodePipeline with CI/CD services removes the need for direct console access, applies the principle of least privilege through IAM roles, maintains audit trails, and enables approval gates for production deployments.
A startup wants to migrate a legacy on-premises application to AWS with minimal code changes. The application uses traditional file systems and requires POSIX compliance. Which storage solution is most appropriate?
-
A
S3 with a FUSE-based mount point and CloudFront for caching
-
B
Amazon EFS (Elastic File System) mounted to EC2 instances in a VPC
✓ Correct
-
C
EBS volumes configured as a network-attached storage cluster
-
D
Amazon FSx for Lustre for high-performance, distributed file access
Explanation
Amazon EFS provides a fully managed, POSIX-compliant shared file system that mounts to EC2 instances via NFS, supporting traditional file-based applications with minimal code changes.
An organization processes confidential customer data and must ensure that data is deleted securely when requested (right to be forgotten). Which service combination best implements this?
-
A
S3 with object locking disabled, versioning suspended, and lifecycle policies for immediate deletion or secure erasure
-
B
DynamoDB with point-in-time recovery disabled, AWS KMS key deletion, and lifecycle policies for secure deletion
-
C
RDS with encrypted storage, automated backups with retention set to 0 days, and encrypted snapshots deletion on request
✓ Correct
-
D
EBS volumes with encryption, snapshots management, and deletion workflows triggered by Lambda functions
Explanation
RDS with encryption provides data protection, setting automated backup retention to 0 days ensures backups are immediately deleted, and encrypted snapshots can be deleted on request, fully supporting GDPR right-to-be-forgotten requirements.
A company has a monolithic application running on EC2 with weekly deployments causing 30 minutes of downtime. How should they architect for zero-downtime deployments?
-
A
Deploy on AWS Lambda with API Gateway and use alias-based routing for traffic shifting between versions
-
B
Containerize the application with ECS and implement canary deployments with CloudFormation change sets
-
C
Use Application Load Balancer with Auto Scaling groups and blue-green deployment using CodeDeploy with traffic shifting
✓ Correct
-
D
Use Route 53 weighted routing with health checks and manual instance replacement during deployments
Explanation
ALB with Auto Scaling groups and CodeDeploy's blue-green deployment strategy enables zero-downtime deployments by gradually shifting traffic between old and new instances while monitoring health.
A company runs a multi-tier application across three AWS regions. They need to ensure that if one region fails completely, traffic automatically routes to another region within 60 seconds. Which combination of services would BEST support this requirement?
-
A
Route 53 with health checks and multi-region Auto Scaling groups
✓ Correct
-
B
Application Load Balancer with cross-zone load balancing
-
C
VPC peering with BGP dynamic routing
-
D
CloudFront with origin failover and Lambda@Edge
Explanation
Route 53 health checks can detect region failures and redirect traffic within seconds, while Auto Scaling groups ensure capacity in standby regions. CloudFront is for content distribution, not application failover, and ALB/VPC peering don't provide automated regional failover.
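A sketch of the Route 53 failover record pair behind option A; the domain, addresses, and health-check ID are placeholders, and the low TTL is what keeps client-side failover inside the 60-second window.

```python
# Sketch of a Route 53 failover record pair (shape of ResourceRecordSet
# entries in a change batch). Domain, IPs, and health-check ID are
# placeholders.
records = [
    {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "primary-us-east-1",
        "Failover": "PRIMARY",
        "TTL": 30,                      # low TTL speeds up client failover
        "HealthCheckId": "hc-primary",  # region health check drives failover
        "ResourceRecords": [{"Value": "203.0.113.10"}],
    },
    {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "secondary-eu-west-1",
        "Failover": "SECONDARY",
        "TTL": 30,
        "ResourceRecords": [{"Value": "203.0.113.20"}],
    },
]
```

When the primary's health check fails, Route 53 starts answering with the secondary record; with a 30-second TTL, resolvers pick up the change well inside the 60-second requirement.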
Your organization needs to migrate a legacy on-premises database with strict compliance requirements to AWS. The database must remain encrypted at rest, support cross-region replication, and integrate with AWS Secrets Manager for credential rotation. Which RDS configuration meets all requirements?
-
A
RDS with AWS Database Migration Service and point-in-time restore enabled
-
B
RDS Aurora with storage auto-scaling, Secrets Manager integration, and read-only replicas
-
C
Single-AZ RDS instance with KMS encryption and read replicas in another region
-
D
Multi-AZ RDS deployment with automated backups, KMS encryption, and cross-region read replicas
✓ Correct
Explanation
Multi-AZ provides high availability, KMS encryption satisfies the encryption requirement, and cross-region read replicas enable disaster recovery. Single-AZ lacks HA, Aurora is limited to MySQL- and PostgreSQL-compatible engines that a legacy database may not match, and DMS is a migration tool rather than an operational configuration.
A financial services company processes real-time streaming data that must be analyzed with sub-second latency. Data comes from multiple sources and must be deduplicated before analysis. Which architecture would be MOST suitable?
-
A
S3 with Athena for batch processing
-
B
Kinesis Data Streams with Kinesis Data Analytics and Lambda for deduplication
✓ Correct
-
C
SQS FIFO queues with SNS for multi-destination publishing
-
D
EventBridge with Step Functions for orchestration
Explanation
Kinesis Data Streams provides sub-second latency for streaming data, Kinesis Data Analytics enables real-time SQL processing, and Lambda can implement deduplication logic. SQS/SNS introduces latency, EventBridge is event-driven not streaming-optimized, and S3/Athena is for batch analysis.
An organization implements a hub-and-spoke VPC architecture with centralized security appliances in the hub. They need to ensure all traffic between spoke VPCs routes through the hub without manual route updates. What is the BEST approach?
-
A
Deploy a bastion host in the hub and configure SSH tunneling between spokes
-
B
Configure static routes in each spoke VPC pointing to the hub's NAT Gateway
-
C
Use VPC peering with custom route tables in each VPC
-
D
Implement AWS Transit Gateway with route propagation and centralized appliance VPC attachment
✓ Correct
Explanation
AWS Transit Gateway automatically propagates routes and handles traffic between multiple VPCs without manual route management, while routing through the appliance VPC attachment ensures traffic flows through the security controls. VPC peering requires manual route updates, static routes don't scale, and SSH tunneling is not a production network solution.
A company wants to implement a serverless solution for processing uploaded documents. Documents must be scanned for sensitive data before storage, and results must be queryable. Which combination of services provides this capability?
-
A
S3, EventBridge, Textract, and Elasticsearch
-
B
S3, Step Functions, Rekognition, and DynamoDB
-
C
CloudFront, Lambda@Edge, S3, and Redshift
-
D
S3, Lambda, Macie, and Athena
✓ Correct
Explanation
Macie automatically detects sensitive data in S3, Lambda processes the uploads, and Athena enables SQL queries on scan results. Rekognition handles image/video analysis, Textract is for document extraction, and Redshift is for data warehouse queries rather than serverless compliance scanning.
Your organization has strict requirements that all data must be encrypted with customer-managed keys and audit logging must be immutable. Which services combination would satisfy these compliance requirements?
-
A
Certificate Manager with private CA and Systems Manager Parameter Store
-
B
S3 default encryption with bucket versioning and CloudWatch Logs
-
C
AWS KMS with custom key policies and CloudTrail with S3 Object Lock
✓ Correct
-
D
Secrets Manager with automatic rotation and Config rules
Explanation
KMS allows customer-managed keys with audit trails, and S3 Object Lock with CloudTrail ensures immutable audit logging for compliance. S3 default encryption doesn't use customer keys, CloudWatch Logs can be deleted, and other services don't provide immutable audit trails.
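A sketch of the Object Lock half of the answer: the arguments one would pass to S3's `put_object_lock_configuration` for the audit-log bucket. The bucket name and the 7-year COMPLIANCE-mode retention are illustrative assumptions.

```python
# Sketch of put_object_lock_configuration arguments for an audit bucket.
# COMPLIANCE mode means not even the root user can shorten retention;
# bucket name and retention period are illustrative assumptions.
object_lock = {
    "Bucket": "audit-trail-logs",
    "ObjectLockConfiguration": {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
}
```

With CloudTrail delivering into this bucket, every audit log object becomes undeletable for the retention period, which is what makes the trail immutable.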
A startup experiences unpredictable traffic spikes and wants to minimize operational overhead while maintaining cost efficiency. Which auto-scaling strategy is BEST?
-
A
Target tracking scaling with EC2 Spot Instances and mixed instance types
✓ Correct
-
B
Step scaling with CloudWatch alarms and on-demand instances
-
C
Manual scaling with reserved instances for baseline capacity
-
D
Scheduled scaling based on historical traffic patterns
Explanation
Target tracking automatically adjusts capacity based on real-time metrics, and Spot Instances provide cost savings. Scheduled scaling doesn't handle unpredictable spikes, manual scaling lacks automation, and step scaling is less efficient than target tracking for variable workloads.
A company needs to provide secure cross-account access to specific S3 buckets for a partner organization. The partner has their own AWS account and should have time-limited access. Which approach is MOST secure and scalable?
-
A
Use cross-account IAM roles with external ID, trust relationship, and STS AssumeRole
✓ Correct
-
B
Create IAM users in the partner account and grant S3 bucket access directly
-
C
Implement S3 pre-signed URLs generated by a Lambda function
-
D
Share S3 bucket credentials via email and establish an S3 bucket policy
Explanation
Cross-account roles with external ID and trust relationships provide secure, auditable access without sharing credentials. Creating IAM users in the partner account violates account separation, sharing credentials is insecure, and pre-signed URLs are for temporary access rather than account-level integration.
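A sketch of the trust policy on the cross-account role, including the external-ID condition that guards against the confused-deputy problem; the partner account ID and external ID are placeholders.

```python
import json

# Sketch of the trust policy attached to the role the partner assumes.
# Partner account ID and external ID are placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::PARTNER_ACCOUNT_ID:root"},
            "Action": "sts:AssumeRole",
            # Partner must present this exact ExternalId when assuming
            "Condition": {"StringEquals": {"sts:ExternalId": "partner-unique-id"}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The partner calls `sts:AssumeRole` with the external ID and receives time-limited credentials scoped by the role's permission policy to the specific S3 buckets.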
A global retail company wants to minimize latency for their API endpoints served across multiple regions. They need to route users to the nearest region and handle failover automatically. What is the RECOMMENDED solution?
-
A
CloudFront with regional origin failover and geo-routing policies
-
B
Global Accelerator with automatic health checks and traffic dials
✓ Correct
-
C
API Gateway with Lambda functions in each region and VPC endpoints
-
D
Route 53 with geolocation routing and health check-based failover
Explanation
Global Accelerator optimizes routing to nearby regions, provides automatic failover via health checks, and uses anycast for consistent routing. Route 53 geolocation is DNS-based with slower updates, CloudFront is primarily for content delivery, and Lambda-based solutions require manual failover logic.
An organization must implement a disaster recovery strategy for a critical application with an RTO of 15 minutes and RPO of 1 minute. The application uses a database and file storage. Which approach BEST meets these requirements?
-
A
Automated backups to another region with manual restoration procedures
-
B
DynamoDB global tables with S3 cross-region replication and standby compute
-
C
AWS Backup service with daily snapshots and on-demand recovery
-
D
Multi-region RDS with continuous replication and EC2 instances in warm standby
✓ Correct
Explanation
RDS continuous replication achieves sub-minute RPO, warm standby compute enables activation within the 15-minute RTO, and S3 cross-region replication covers the file storage requirement. Backups alone have a longer RPO, DynamoDB global tables apply only to NoSQL workloads rather than the relational database in use, and daily snapshots cannot achieve a 1-minute RPO.
A company processes sensitive customer data and must ensure that encryption keys never leave a hardware security module. Which AWS service provides this capability?
-
A
AWS CloudHSM with customer-owned HSM cluster
✓ Correct
-
B
AWS Secrets Manager with automatic rotation
-
C
AWS KMS with Standard key store
-
D
AWS Certificate Manager with import option
Explanation
CloudHSM gives customers exclusive control over HSM hardware and keys that never leave the device. KMS Standard key store uses AWS-managed infrastructure, Secrets Manager manages credentials, not cryptographic keys, and Certificate Manager manages certificates, not HSM operations.
Your application requires a distributed cache with sub-millisecond latency and automatic failover. Which ElastiCache configuration is MOST appropriate?
-
A
Memcached with multi-node and automatic discovery
-
B
Redis cluster mode disabled with automatic failover and read replicas
-
C
Redis cluster mode enabled with sharding and automatic failover
✓ Correct
-
D
Redis with persistence enabled and manual failover
Explanation
Redis cluster mode provides sharding for scalability, automatic failover for HA, and sub-millisecond latency. With cluster mode disabled, Redis is limited to a single shard; Memcached lacks automatic failover; and manual failover introduces unacceptable downtime.
A healthcare provider must implement HIPAA-compliant infrastructure with end-to-end encryption and detailed audit trails. Which architectural decision supports these requirements?
-
A
Use AWS CloudHSM, KMS, CloudTrail, and HIPAA-eligible services throughout
✓ Correct
-
B
Implement application-level encryption and rely on S3 default encryption
-
C
Use AWS services that are HIPAA-eligible and enable all encryption and logging options
-
D
Build custom encryption on top of AWS services and store audit logs locally
Explanation
CloudHSM and KMS provide cryptographic controls, CloudTrail enables audit trails, and HIPAA-eligible services ensure compliance. Custom encryption introduces gaps, local logging violates cloud requirements, and S3 default encryption alone is insufficient for HIPAA.
A company wants to migrate a monolithic application to microservices and needs service discovery that automatically registers/deregisters instances. Which solution is BEST?
-
A
ECS with AWS Cloud Map service discovery and health checks
✓ Correct
-
B
Route 53 with custom health checks and manual DNS updates
-
C
Application Load Balancer with target group auto-registration
-
D
Consul running on EC2 instances with custom agents
Explanation
Cloud Map automatically registers/deregisters services based on health, integrates with ECS, and provides DNS and SRV record discovery. Route 53 requires manual updates, ALB doesn't provide service discovery, and Consul requires operational overhead.
A financial institution needs to implement a data governance solution that enforces column-level access control across multiple data lakes in different AWS accounts. Which approach is MOST suitable?
-
A
Implement Lake Formation with cross-account access and fine-grained permissions
✓ Correct
-
B
Deploy row-level security in Athena with separate databases per access level
-
C
Use IAM policies with resource-based conditions and S3 encryption
-
D
Use S3 bucket policies with principal conditions and tags
Explanation
Lake Formation provides fine-grained column and row-level access control, supports cross-account access, and centralizes governance. S3 bucket policies don't support column-level control, Athena row-level security requires application logic, and IAM alone can't enforce column-level permissions.
An e-commerce platform experiences variable load and wants to optimize costs while maintaining performance. They already use EC2 instances with predictable baseline usage. Which strategy provides the BEST cost optimization?
-
A
Use Dedicated Hosts for long-term cost commitment
-
B
Implement Savings Plans with on-demand instances
-
C
Replace all instances with Spot Instances for maximum savings
-
D
Use Reserved Instances for baseline and Spot Instances for variable capacity
✓ Correct
Explanation
Combining Reserved Instances for baseline (predictable) with Spot for variable load optimizes cost and maintains performance. Only Spot doesn't cover baseline needs, Savings Plans are less cost-effective than RI+Spot combo, and Dedicated Hosts are for licensing, not cost optimization.
A company must ensure all S3 buckets remain private and cannot be modified to become public, even by accident. Which solution provides this enforcement?
-
A
S3 bucket policies with explicit deny statements and SCPs
✓ Correct
-
B
S3 Object Lock with governance mode and IAM policies
-
C
S3 Block Public Access with CloudWatch monitoring
-
D
AWS Config rules for S3 bucket public access compliance
Explanation
Service Control Policies (SCPs) at the organization level prevent any account from changing bucket public access settings. Block Public Access prevents access but allows policy changes, Object Lock is for data retention, and Config Rules only report non-compliance.
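As an illustration of the preventive control described above, an SCP along these lines could be attached at the organization root. This is a minimal sketch, not a production policy: the statement blanket-denies the public-access and policy-change actions, whereas a real SCP would usually carve out exceptions for an administrative role.

```python
import json

# Illustrative SCP: deny any change to S3 public access settings, ACLs,
# or bucket policies in member accounts. Scoping conditions (e.g. an
# exempt admin role) are omitted for brevity.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicAccessChanges",
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketPublicAccessBlock",
                "s3:PutAccountPublicAccessBlock",
                "s3:PutBucketAcl",
                "s3:PutBucketPolicy",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs apply an outer permission boundary, even an account's root user cannot perform the denied actions, which is what makes this a preventive rather than detective control.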
An organization wants to implement least-privilege access for temporary credentials used by applications running on EC2. Which approach is MOST secure?
-
A
Deploy Secrets Manager to store IAM user credentials with automatic rotation
-
B
Create long-term IAM users and embed access keys in application configuration
-
C
Configure EC2 instance security groups to restrict API calls to IAM
-
D
Use IAM instance profiles so the application obtains temporary STS credentials via AssumeRole
✓ Correct
Explanation
Instance profiles automatically provide temporary STS credentials with defined expiration, removing the need for permanent credentials. Long-term keys are security risks, embedding credentials violates best practices, and security groups don't provide credential management.
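The mechanism behind an instance profile is a role whose trust policy names the EC2 service as principal; EC2 then delivers auto-rotating STS credentials to the instance through the metadata service. A sketch of that trust policy (role and profile names would be chosen by you):

```python
import json

# Illustrative trust policy for an EC2 instance role: EC2 is allowed to
# assume the role on the instance's behalf, so applications on the
# instance receive short-lived STS credentials from the metadata service
# instead of embedded long-term access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```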
A data engineering team needs to build a pipeline that processes petabyte-scale data with complex transformations and conditional logic. Which service provides the BEST balance of flexibility and managed infrastructure?
-
A
AWS Glue with PySpark jobs and Glue Workflows
✓ Correct
-
B
Lambda with Step Functions for orchestration
-
C
EMR with Spark for custom processing logic
-
D
Data Pipeline with EC2 instances running custom scripts
Explanation
Glue provides managed infrastructure for petabyte-scale processing, PySpark for complex transformations, and Workflows for orchestration without managing clusters. EMR requires cluster management, Data Pipeline is legacy, and Lambda has memory/timeout limitations for big data.
A company operates a multi-region application and wants to ensure consistent application behavior across regions using infrastructure as code. Which tool set provides the BEST governance?
-
A
CloudFormation StackSets with AWS Config rules and Systems Manager
✓ Correct
-
B
Terraform modules with AWS CLI for manual validation
-
C
SAM templates with CodePipeline and manual approvals
-
D
CDK with custom Python scripts for validation
Explanation
CloudFormation StackSets deploy consistent templates across regions, Config rules enforce compliance, and Systems Manager enables governance at scale. Terraform lacks native AWS governance, SAM is Lambda-focused, and CDK requires custom validation logic.
A machine learning team needs to train models on sensitive healthcare data without exposing raw data to data scientists. Which approach provides secure access?
-
A
Deploy on-premises clusters and transfer encrypted data via VPN
-
B
Use QuickSight for data visualization instead of direct access
-
C
Use SageMaker with VPC isolation, Secrets Manager, and data encryption
✓ Correct
-
D
Store data in S3 with public access temporarily for training
Explanation
SageMaker with VPC isolation prevents internet exposure, Secrets Manager manages credentials, and encryption protects data at rest. Public access violates healthcare compliance, VPN adds operational complexity, and QuickSight doesn't enable model training.
An organization needs to consolidate logs from multiple AWS accounts into a central account for analysis. Which solution provides centralized logging with minimal operational overhead?
-
A
Use CloudWatch Logs with subscription filters and cross-account Kinesis delivery
-
B
Implement CloudTrail with multi-account and multi-region aggregation to central S3
✓ Correct
-
C
Create S3 bucket in each account and manually aggregate to central bucket
-
D
Configure Systems Manager Session Manager for centralized log collection
Explanation
CloudTrail's multi-account aggregation feature automatically consolidates API logs to a central bucket with minimal configuration. CloudWatch subscription filters require manual setup per log group, manual aggregation scales poorly, and Session Manager logs aren't comprehensive for audit trails.
A company runs containerized applications on ECS Fargate and needs persistent storage that supports simultaneous access from multiple tasks. Which storage option is MOST appropriate?
-
A
S3 with FUSE mount for file-like access
-
B
EFS mounted directly to Fargate tasks with standard performance mode
✓ Correct
-
C
FSx for Windows File Server with Fargate integration
-
D
EC2 instance store volumes mounted via EBS
Explanation
EFS provides shared persistent storage directly compatible with Fargate without requiring EC2 instances. Instance store is not persistent, S3 FUSE mounts have performance limitations, and FSx for Windows requires Windows-based tasks.
Your organization requires that all database backups be immutable and retained for 7 years for compliance. Which combination of services meets this requirement?
-
A
AWS Backup with backup vaults, Object Lock, and retention policies
✓ Correct
-
B
DynamoDB point-in-time recovery with S3 replication
-
C
RDS manual snapshots copied to S3 with versioning enabled
-
D
RDS automated backups with S3 Object Lock and lifecycle policies
Explanation
AWS Backup provides centralized backup management, Object Lock enforces immutability, and retention policies ensure 7-year retention across services. Copying RDS snapshots to S3 requires manual management, versioning alone still allows deletion, and DynamoDB point-in-time recovery doesn't support long-term compliance retention.
A company uses multiple AWS accounts for different business units and needs to enforce consistent tagging across all resources. Which approach provides automated enforcement?
-
A
Use AWS Config rules with custom Lambda functions to check tags
-
B
Implement service control policies to require tags on resource creation
✓ Correct
-
C
Deploy Systems Manager to manually audit tags quarterly
-
D
Use CloudFormation to tag resources at stack creation time only
Explanation
Service Control Policies can prevent resource creation without required tags across all accounts. Config rules only report non-compliance, CloudFormation applies only to stack resources, and manual audits lack enforcement. SCPs provide preventive controls at the organization level.
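To make the preventive control concrete, a tag-enforcement SCP typically uses the `aws:RequestTag` condition key with a `Null` test. The sketch below is illustrative: the `CostCenter` tag key is an example, and a real policy would enumerate each required tag and each taggable action to cover.

```python
import json

# Illustrative SCP: refuse EC2 instance launches that omit a CostCenter
# tag. The Null condition is true when the request carries no such tag,
# so the Deny fires only for untagged launch requests.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireCostCenterTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        }
    ],
}

print(json.dumps(scp, indent=2))
```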
An application requires consistent sub-100ms latency globally for API requests. The application cannot tolerate cold starts. Which architecture is MOST appropriate?
-
A
CloudFront with Lambda@Edge and DynamoDB global tables
-
B
Route 53 with geolocation routing to Lambda functions
-
C
Global Accelerator with regional EC2 instances and ElastiCache
✓ Correct
-
D
API Gateway with regional endpoints and Application Load Balancers
Explanation
Global Accelerator optimizes routing, EC2 instances eliminate cold starts, and ElastiCache provides sub-100ms latency. Lambda@Edge has cold start issues, regional endpoints lack global optimization, and Route 53 routing doesn't guarantee latency SLAs.
A company is running a multi-tier application across multiple AWS regions. They need to ensure that database writes are synchronized across regions while maintaining high availability. Which approach provides the lowest RPO and RTO?
-
A
Configure S3 cross-region replication with periodic database snapshots
-
B
Use RDS Multi-AZ with read replicas in another region and application-level failover logic
-
C
Implement DMS (Database Migration Service) with continuous replication and manual failover
-
D
Use Amazon Aurora Global Database with cross-region read replicas and promote on failover
✓ Correct
Explanation
Aurora Global Database provides RPO of typically <1 second and RTO measured in seconds, as it maintains fully synchronized read-only replicas across regions with automatic detection and failover capabilities built-in.
An organization wants to minimize data transfer costs when moving large datasets between an on-premises data center and AWS. Which solution provides the most cost-effective approach for recurring monthly transfers of 500 GB?
-
A
Set up a VPN connection and use S3 Transfer Acceleration for all data movement
-
B
Use AWS DataSync with automatic scheduling and bandwidth optimization
-
C
Implement AWS Snowball Edge devices for initial transfer, then use AWS Direct Connect for ongoing syncs
✓ Correct
-
D
Configure AWS Storage Gateway in cached volume mode with scheduled data transfers
Explanation
For recurring 500 GB monthly transfers, AWS Direct Connect provides dedicated network connectivity with lower per-GB data transfer rates than internet-based options and consistent throughput, while Snowball Edge handles the initial bulk migration efficiently.
A solutions architect is designing a system that requires strong consistency guarantees for financial transactions while also needing to scale to handle millions of requests per second. What is the primary trade-off consideration?
-
A
Strong consistency only affects read performance, not write scalability
-
B
Distributed systems can always achieve both strong consistency and unlimited horizontal scalability without compromise
-
C
Strong consistency guarantees inherently limit horizontal scalability due to synchronization overhead
✓ Correct
-
D
Using NoSQL databases automatically resolves the consistency-scalability trade-off
Explanation
This reflects the CAP theorem principle: achieving strong consistency across distributed systems typically requires coordination that limits horizontal scalability. Financial systems often accept this limitation by using sharding strategies or accepting weaker consistency models for certain operations.
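The trade-off can be made concrete with the classic quorum rule: with N replicas, a write quorum W and read quorum R guarantee that every read overlaps the latest write only when R + W > N. Larger quorums mean more replicas must be contacted synchronously per request, which is exactly the coordination overhead that limits horizontal scaling. A minimal sketch:

```python
# Quorum overlap check behind the consistency/scalability trade-off:
# strong consistency requires read and write quorums to intersect,
# i.e. r + w > n for n replicas.
def is_strongly_consistent(n: int, w: int, r: int) -> bool:
    """True when any read quorum must overlap the latest write quorum."""
    return r + w > n

assert is_strongly_consistent(3, 2, 2)      # majority quorums: consistent
assert not is_strongly_consistent(3, 1, 1)  # tiny quorums: eventual only
```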
A company uses AWS Lambda functions to process images uploaded to S3. They observe that Lambda cold starts are causing unacceptable latency spikes during traffic surges. Which combination of strategies would best address this issue?
-
A
Implement Lambda Provisioned Concurrency and use SQS to decouple uploads from processing
-
B
Convert Lambda functions to EC2 instances and use Auto Scaling groups for traffic management
-
C
Add more Lambda concurrent execution quota and implement CloudWatch alarms for auto-scaling
-
D
Increase Lambda memory allocation and use Provisioned Concurrency with CloudFront caching
✓ Correct
Explanation
Provisioned Concurrency pre-initializes Lambda execution environments to eliminate cold starts, while increasing memory allocation improves execution speed. CloudFront caching reduces origin requests. Raising the concurrent execution quota does not prevent cold starts.
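For reference, Provisioned Concurrency is configured per published version or alias via the `PutProvisionedConcurrencyConfig` API (in boto3, `put_provisioned_concurrency_config`). The sketch below only builds the request parameters; the function name, alias, and count are hypothetical and no AWS call is made.

```python
# Sketch: request parameters for Lambda's PutProvisionedConcurrencyConfig
# API. Provisioned Concurrency targets an alias or version (Qualifier),
# never $LATEST. Values here are illustrative placeholders.
def provisioned_concurrency_request(function_name: str,
                                    alias: str,
                                    warm_instances: int) -> dict:
    return {
        "FunctionName": function_name,
        "Qualifier": alias,
        "ProvisionedConcurrentExecutions": warm_instances,
    }

params = provisioned_concurrency_request("image-processor", "live", 50)
print(params)
```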
An organization is implementing a hub-and-spoke network architecture across AWS regions using AWS Transit Gateway. They want to enforce consistent security policies across all spokes. Which approach provides the most centralized control?
-
A
Use AWS Firewall Manager with AWS WAF and Network Firewall policies applied through Transit Gateway
✓ Correct
-
B
Deploy third-party virtual appliances in each spoke VPC for traffic inspection
-
C
Implement Security Groups on all resources with a centralized tagging strategy
-
D
Configure Network ACLs on each VPC and manually update them for policy changes
Explanation
AWS Firewall Manager provides centralized policy management across organizations and regions, allowing you to define security policies once and automatically apply them to Transit Gateway and Network Firewall resources consistently.
A company migrates a legacy application that uses local file system operations to AWS. The application frequently performs small random read/write operations on large files. Which storage solution minimizes latency and cost?
-
A
Amazon EBS with gp3 volume type configured for high IOPS
✓ Correct
-
B
Amazon S3 with CloudFront distribution for caching
-
C
AWS Storage Gateway in file gateway mode with local caching
-
D
Amazon FSx for Windows File Server with performance-optimized configuration
Explanation
For local file system operations with random I/O patterns, EBS gp3 volumes provide consistent sub-millisecond latency and configurable IOPS/throughput that exceeds S3 latency and is more cost-effective than FSx for this workload pattern.
A financial services company requires audit trails showing who accessed what data and when, with immutable records. They need to query audit logs efficiently across 5 years of historical data. Which AWS service combination best meets these requirements?
-
A
CloudTrail for API logging with S3 Object Lock, queried using CloudTrail Lake
✓ Correct
-
B
AWS Config for change tracking with manual log exports to S3 Glacier
-
C
VPC Flow Logs stored in CloudWatch Logs with retention policies and queried via Logs Insights
-
D
Application-level logging to DynamoDB with point-in-time recovery enabled
Explanation
CloudTrail provides comprehensive API audit trails with S3 Object Lock ensuring immutability, while CloudTrail Lake enables SQL-based querying of audit data at scale, making it purpose-built for compliance audit requirements.
A development team wants to implement infrastructure as code for a complex multi-account AWS environment with dependencies across stacks. They need to manage cross-account resources. Which tool provides the best native support for this scenario?
-
A
AWS SAM for serverless application deployment only
-
B
AWS CloudFormation StackSets with service-managed permissions
✓ Correct
-
C
AWS CDK with cross-stack references and role assumptions
-
D
Terraform with remote state stored in S3
Explanation
CloudFormation StackSets specifically enables deploying and managing stacks across multiple AWS accounts and regions with a single template, including service-managed permissions that automatically handle cross-account access.
An organization experiences performance degradation when running analytical queries on their production RDS MySQL database. They need to isolate read traffic without modifying application code. What is the best architectural approach?
-
A
Use S3 Select to query exported data instead of querying the database directly
-
B
Implement database proxy using Amazon RDS Proxy to route read queries to replicas automatically
✓ Correct
-
C
Create read replicas and use Route 53 weighted routing with custom application connection logic
-
D
Configure MySQL replication to a separate read-only instance and update application connection strings
Explanation
An RDS Proxy read-only endpoint directs read traffic to read replicas while managing connection pooling and reducing failover disruption; applications simply point at the proxy endpoint rather than implementing routing logic, unlike Route 53 weighted routing or manual connection string updates.
A company processes sensitive health data on AWS and must comply with HIPAA regulations. They need to ensure encryption in transit and at rest, with key management fully under their control. Which encryption strategy is most appropriate?
-
A
Enable default S3 encryption with AWS managed keys (SSE-S3) for all buckets
-
B
Use AWS KMS customer-managed keys with CloudHSM integration for key storage and operations
✓ Correct
-
C
Use AWS Secrets Manager to encrypt sensitive data attributes within the application
-
D
Implement application-level encryption before sending data to AWS services
Explanation
For HIPAA compliance requiring full key control, AWS KMS with CloudHSM provides customer-managed encryption keys stored in a hardware security module you control, meeting regulatory requirements for key custody and operational control.
Your organization requires that all data stored in Amazon S3 must be encrypted using customer-managed keys. You need to implement a solution that enforces this requirement across all S3 buckets in multiple AWS accounts. Which approach best meets this requirement?
-
A
Use S3 Bucket Keys with AWS KMS and configure a bucket policy that denies any PutObject requests without x-amz-server-side-encryption header
-
B
Create an SCP that denies s3:PutObject unless the request includes a specific KMS key ARN, combined with AWS Config for monitoring
✓ Correct
-
C
Enable S3 Block Public Access and require all buckets to use default encryption with AWS managed keys
-
D
Use AWS Config rules with a custom Lambda function to evaluate S3 bucket encryption settings and remediate non-compliant buckets automatically
Explanation
Service Control Policies (SCPs) at the organization level can enforce encryption requirements across accounts by denying actions that don't meet the criteria. Combined with AWS Config for monitoring, this provides both preventive and detective controls for multi-account environments.
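A sketch of the SCP statement described in the correct answer, using the `s3:x-amz-server-side-encryption-aws-kms-key-id` condition key to pin uploads to one approved customer-managed key. The key ARN is a placeholder, and a production policy would typically pair this with a second statement denying uploads that omit the encryption header entirely.

```python
import json

# Illustrative SCP: deny s3:PutObject unless the request specifies the
# approved customer-managed KMS key. The ARN below is a placeholder.
APPROVED_KEY = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonApprovedKey",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": APPROVED_KEY
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```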
A company is designing a disaster recovery strategy for a critical application that requires an RTO of 15 minutes and RPO of 5 minutes. The application uses Amazon RDS Multi-AZ with synchronous replication. What additional capability should be implemented to meet the RPO requirement?
-
A
Implement RDS Aurora with read replicas in a secondary region and enable binary logging
✓ Correct
-
B
Enable automatic backups with a retention period of 5 minutes
-
C
Enable RDS automated backups and create snapshots every 5 minutes using Lambda
-
D
Configure RDS backup window to occur every 5 minutes with enhanced monitoring
Explanation
Aurora replicates data synchronously across Availability Zones, and its cross-region read replicas typically lag by less than a second, providing near-continuous data protection with an RPO measured in seconds. The read replicas can be promoted quickly to meet the 15-minute RTO requirement.
You are tasked with optimizing costs for a workload that runs batch processing jobs 8 hours per day, 5 days per week. The compute instances are currently On-Demand. Which combination of purchasing options would provide the most cost savings?
-
A
Purchase Compute Savings Plans covering 40% of the weekly compute hours and use Spot Instances for remaining capacity
✓ Correct
-
B
Use a mix of 1-year Reserved Instances for baseline and Spot Instances for variable capacity above baseline
-
C
Use Spot Instances for the entire workload with On-Demand instances as fallback
-
D
Reserve instances for 40 hours per week and use On-Demand for additional capacity
Explanation
Compute Savings Plans provide flexibility across instance families and regions (approximately 30% discount), while Spot Instances offer up to 90% discount for interruptible workloads. Combining both optimizes cost for predictable baseline load with variable overflow.
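The arithmetic behind this answer can be sketched quickly. The hourly rates and discount percentages below are illustrative assumptions, not real AWS prices; the point is only that a Savings Plan on part of the load plus Spot for the rest undercuts pure On-Demand.

```python
# Back-of-the-envelope cost comparison for a batch job running
# 8 h/day, 5 d/week. All rates are assumed for illustration.
HOURS_PER_WEEK = 8 * 5          # 40 active hours per week
ON_DEMAND_RATE = 0.10           # assumed $/hour
SP_DISCOUNT = 0.30              # assumed Compute Savings Plans discount
SPOT_DISCOUNT = 0.70            # assumed Spot discount vs On-Demand

covered = 0.40 * HOURS_PER_WEEK         # hours covered by the Savings Plan
overflow = HOURS_PER_WEEK - covered     # hours served by Spot capacity

on_demand_cost = HOURS_PER_WEEK * ON_DEMAND_RATE
blended_cost = (covered * ON_DEMAND_RATE * (1 - SP_DISCOUNT)
                + overflow * ON_DEMAND_RATE * (1 - SPOT_DISCOUNT))

print(f"On-Demand: ${on_demand_cost:.2f}/week, blended: ${blended_cost:.2f}/week")
```

Under these assumed rates the blended approach costs $1.84 per week versus $4.00 for pure On-Demand, a cut of more than half even before Spot's deepest discounts.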
An organization needs to implement cross-region disaster recovery for a multi-tier application. The primary region is us-east-1 and the DR region is us-west-2. What is the most appropriate solution for achieving sub-second replication latency for the database layer?
-
A
Implement DynamoDB Global Tables for all data storage with automatic replication
-
B
Configure RDS Multi-AZ spanning both regions with enhanced networking
-
C
Use Amazon Aurora Global Database with storage-based cross-region replication
✓ Correct
-
D
Set up continuous replication using AWS Database Migration Service (DMS) with change data capture
Explanation
Aurora Global Database replicates across regions at the storage layer with typical lag under one second, yielding an RPO near zero and fast failover (typically 1-2 minutes). It is purpose-built for multi-region disaster recovery with minimal replication lag.
A financial services company requires that sensitive data be encrypted both in transit and at rest, with encryption keys never leaving AWS regions. Which encryption approach violates these requirements?
-
A
Using AWS KMS regional keys with key material imported from an external HSM
-
B
Exporting encrypted data with customer-managed keys to an external HSM for key management
✓ Correct
-
C
Using customer-managed keys stored in AWS Secrets Manager encrypted with a KMS key
-
D
Using AWS KMS with CloudHSM backing in each region
Explanation
Exporting encrypted data to external systems means encryption keys leave AWS regions, violating the requirement that keys never leave AWS regions. AWS KMS and CloudHSM both maintain keys within AWS infrastructure.
You are designing a solution for a SaaS application that needs to support multi-tenancy with strict data isolation. Each tenant's data must be isolated at the database level. Which architecture best achieves this requirement?
-
A
Provision a separate RDS instance for each tenant with dedicated credentials and network isolation
✓ Correct
-
B
Use Amazon DynamoDB with a tenant ID partition key and DynamoDB Streams for cross-tenant event processing
-
C
Use a single RDS instance with separate schemas per tenant and row-level security (RLS) policies
-
D
Implement a single Aurora cluster with multiple databases, one per tenant, with separate encryption keys per database
Explanation
Database-level isolation with separate RDS instances per tenant provides the strongest isolation guarantee and prevents accidental cross-tenant data leakage. While more costly, this meets strict data isolation requirements in regulated industries.
A company is migrating a legacy application to AWS that requires consistent IP addresses for firewall rules. The application is deployed across multiple availability zones using Auto Scaling groups. What solution best addresses this requirement?
-
A
Place instances behind an Application Load Balancer and register the ALB's IP addresses with the firewall
-
B
Configure Auto Scaling with Elastic IPs using launch templates and lifecycle hooks
-
C
Assign Elastic IPs to each instance and manage them through a custom Lambda function
-
D
Use Network Load Balancer with fixed Elastic IPs and configure Auto Scaling to maintain instances in the target group
✓ Correct
Explanation
A Network Load Balancer can be assigned Elastic IPs that remain constant, and the instances behind it can be managed by Auto Scaling without needing individual IPs. The firewall rules reference the NLB's fixed IPs, not instance IPs.
An organization is implementing AWS Control Tower for multi-account management. A requirement exists to prevent users from deleting CloudTrail logs. Where should this preventive control be implemented?
-
A
As an SCP attached to the organization root to deny cloudtrail:DeleteTrail and s3:DeleteObject on log buckets
✓ Correct
-
B
As a CloudTrail organization trail with MFA delete enabled on the S3 bucket storing logs
-
C
As an AWS Config rule that detects non-compliant configurations and uses automated remediation
-
D
As a CloudTrail event-based rule in EventBridge to notify administrators
Explanation
Service Control Policies provide preventive controls that deny actions before they occur, making them ideal for compliance requirements. This prevents deletion at the source rather than detecting violations after the fact.
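A sketch of what such a root-level SCP might contain. The log bucket ARN is a hypothetical placeholder; a real policy would also deny related actions such as `cloudtrail:UpdateTrail` and often scope the S3 statement to delegated roles.

```python
import json

# Illustrative SCP for the organization root: block trail deletion and
# log suppression, plus object deletion in the central log bucket.
# The bucket ARN below is a placeholder.
LOG_BUCKET_ARN = "arn:aws:s3:::example-org-cloudtrail-logs/*"

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectTrails",
            "Effect": "Deny",
            "Action": ["cloudtrail:DeleteTrail", "cloudtrail:StopLogging"],
            "Resource": "*",
        },
        {
            "Sid": "ProtectLogObjects",
            "Effect": "Deny",
            "Action": "s3:DeleteObject",
            "Resource": LOG_BUCKET_ARN,
        },
    ],
}

print(json.dumps(scp, indent=2))
```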
A solutions architect is designing a hybrid network architecture connecting on-premises data centers to AWS. The company has two on-premises locations that need to communicate through AWS. Which AWS service provides the most cost-effective and operationally simple solution?
-
A
AWS Direct Connect with virtual interfaces for each location
-
B
AWS Site-to-Site VPN with VPN CloudHub for multi-site connectivity
-
C
AWS Transit Gateway with Site-to-Site VPN attachments for each location
✓ Correct
-
D
AWS App Mesh with virtual nodes representing on-premises services
Explanation
AWS Transit Gateway acts as a central hub that simplifies multi-site connectivity. VPN attachments provide cost-effective connectivity compared to Direct Connect, while the Transit Gateway handles routing between all sites automatically.
You need to implement a solution that provides real-time insights into API usage patterns across a large microservices architecture. The solution must support complex analytics queries on billions of events. Which service combination is most appropriate?
-
A
Amazon OpenSearch with Kinesis Data Firehose for real-time log ingestion and analysis
✓ Correct
-
B
CloudWatch Logs with CloudWatch Insights and Athena for long-term analysis
-
C
AWS X-Ray for tracing with CloudWatch metrics for aggregation
-
D
API Gateway logging to CloudWatch, then use Amazon EMR for batch processing
Explanation
OpenSearch provides powerful full-text and analytics capabilities for billions of events, while Kinesis Data Firehose handles real-time data ingestion and transformation at scale. This combination supports both real-time insights and complex analytics queries.
A company is deploying a containerized application on Amazon EKS that requires persistent storage shared across pods in multiple AZs. The storage must support concurrent read-write access. Which solution is most appropriate?
-
A
Use Amazon FSx for Lustre as the shared filesystem with EKS cluster in the same VPC
-
B
Configure EBS snapshots replicated across AZs and attach to each pod's node
-
C
Deploy Amazon EFS with dynamic provisioning through the EKS storage class and mount it to pods
✓ Correct
-
D
Use Amazon EBS volumes with EBS multi-attach enabled and a cluster autoscaler
Explanation
Amazon EFS provides NFS-based file sharing accessible from multiple pods across different AZs with native EKS integration through storage classes. It supports concurrent read-write access without the complexity of multi-attach or snapshot management.
An organization needs to implement automated remediation for non-compliant EC2 instances that lack required security group rules. The solution must scale to thousands of instances across multiple accounts. What is the most efficient approach?
-
A
Use Systems Manager Session Manager to manually verify and update each instance's security groups
-
B
Implement EventBridge rules that trigger on EC2 state changes and execute SSM automation documents
-
C
Create AWS Config rules with custom Lambda remediation functions and aggregate in AWS Config aggregator
✓ Correct
-
D
Deploy an AWS Lambda function triggered by CloudTrail events to modify security groups in non-compliant instances
Explanation
AWS Config with aggregators allows you to create rules and remediation actions across multiple accounts and regions. Custom Lambda remediation functions can automatically correct security group configurations based on compliance rules.
A financial institution requires compliance with regulations that mandate encryption key rotation every 90 days. The institution uses AWS KMS for key management. How should automatic key rotation be configured?
-
A
Enable KMS automatic key rotation, but note it only rotates the key material, not the key ID itself
✓ Correct
-
B
Use CloudTrail to monitor key usage and trigger manual rotation through AWS Systems Manager
-
C
Enable automatic key rotation in KMS and create a CloudWatch alarm to verify rotation completion
-
D
Disable automatic rotation and implement a Lambda function to manually create new key versions every 90 days
Explanation
KMS automatic key rotation (when enabled) rotates the key material annually by default and can be customized, but the key ID remains constant. This maintains backward compatibility while meeting rotation requirements. Note: KMS annual rotation may need to be supplemented with manual rotation for stricter 90-day requirements.
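For the 90-day requirement specifically, KMS now accepts a custom rotation period: boto3's `kms.enable_key_rotation` takes a `RotationPeriodInDays` parameter valid between 90 and 2560 days. The sketch below only builds and validates the request shape (the key ID is a placeholder; no API call is made).

```python
# Sketch of the request for KMS automatic rotation with a custom period.
# RotationPeriodInDays must be 90-2560 days; the key ID is a placeholder.
def rotation_request(key_id: str, period_days: int = 90) -> dict:
    if not 90 <= period_days <= 2560:
        raise ValueError("RotationPeriodInDays must be between 90 and 2560")
    return {"KeyId": key_id, "RotationPeriodInDays": period_days}

req = rotation_request("1234abcd-12ab-34cd-56ef-1234567890ab", 90)
print(req)
```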
A company is designing a global application that serves customers in multiple regions. The application uses Amazon CloudFront with origin failover. What is the primary benefit of using origin failover in this architecture?
-
A
Automatically selects the fastest origin based on real-time latency measurements
-
B
Reduces bandwidth costs by routing requests to the cheapest origin in each region
-
C
Provides automatic failover to a secondary origin when the primary origin becomes unavailable
✓ Correct
-
D
Enables A/B testing by directing users to different origins based on cookies
Explanation
CloudFront origin failover automatically switches to a secondary origin group when health checks fail on the primary origin, ensuring high availability without manual intervention.
You are architecting a solution for processing streaming data from IoT devices. The data must be persisted, analyzed in real-time, and made available for batch processing. Which service combination best addresses all requirements with minimal operational overhead?
-
A
Amazon Kinesis Data Streams with Kinesis Data Analytics for real-time processing and S3 for batch processing
✓ Correct
-
B
Amazon SQS with consumer Lambda functions for processing and DynamoDB for storage
-
C
AWS Lambda triggered by API Gateway to process events directly and store in DynamoDB
-
D
Amazon MSK (Managed Streaming for Apache Kafka) with Flink for real-time processing and Spark for batch
Explanation
Kinesis Data Streams handles streaming ingestion, Kinesis Data Analytics provides real-time SQL analytics without infrastructure management, and automatic S3 export through Firehose enables batch processing. This minimizes operational complexity compared to managing Kafka or Lambda-based solutions.
A company operates a multi-region active-active application with databases in multiple regions. When a regional failure occurs, the application must automatically switch to the other region without data loss. Which database solution best supports this requirement?
-
A
Aurora MySQL with cross-region read replicas and application-level read/write routing logic
-
B
Amazon DynamoDB Global Tables with on-demand billing and point-in-time recovery
✓ Correct
-
C
RDS with read replicas in each region and automatic failover enabled
-
D
Aurora Global Database configured as active-active with DynamoDB for session storage
Explanation
DynamoDB Global Tables provide fully managed, active-active multi-region replication with automatic conflict resolution. All regions can handle reads and writes, eliminating the need for application-level failover logic and ensuring no data loss.
An organization is implementing AWS Organizations to manage multiple accounts. A specific organizational unit (OU) contains development accounts where developers need broad permissions. What is the recommended approach to grant these permissions safely?
-
A
Attach AdministratorAccess policy directly to an IAM group in each development account
-
B
Create a custom IAM policy that allows broad permissions and attach it via SCP to the development OU
-
C
Create permission boundaries that allow broad permissions while denying specific sensitive actions like IAM or Organizations modifications
✓ Correct
-
D
Use IAM roles with broad permissions and attach them to developers through session duration limits
Explanation
Permission boundaries provide guardrails that prevent privilege escalation by limiting the maximum permissions a developer can grant themselves. This allows broad development permissions while protecting critical AWS infrastructure from accidental or malicious changes.
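A minimal sketch of such a boundary policy, assuming the goal is broad development access minus IAM and Organizations changes (the Sids and action list are illustrative):

```python
import json

# Illustrative permission boundary: broad access, with IAM and Organizations
# actions explicitly denied so developers cannot escalate their own privileges.
boundary = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBroadDevAccess",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
        },
        {
            "Sid": "DenyPrivilegeEscalation",
            "Effect": "Deny",
            "Action": ["iam:*", "organizations:*"],
            "Resource": "*",
        },
    ],
}
print(json.dumps(boundary, indent=2))
```

The boundary caps effective permissions: a developer's identity policies can grant anything inside the Allow statement, but the explicit Deny always wins for the listed services.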
A company needs to implement a solution for automatically scaling an RDS database based on query performance metrics. The database occasionally experiences spikes in CPU utilization above 80%. What is the most appropriate AWS service to address this requirement?
-
A
Enable RDS auto-scaling with read replica management and configure CloudWatch alarms for manual intervention
-
B
Configure RDS performance insights with automatic instance class upgrading based on CPU threshold
-
C
Implement Aurora auto-scaling for compute with Aurora Serverless for variable workloads and manual instance resizing
✓ Correct
-
D
Use Amazon DevOps Guru to analyze RDS metrics and recommend scaling actions through automated SMS notifications
Explanation
Aurora Serverless automatically scales compute capacity based on actual workload without manual intervention. For traditional RDS, this would require application changes or manual scaling, but Aurora Serverless manages this automatically with predictable performance and cost.
An organization is designing a solution for real-time collaboration where users upload files that must be processed by a backend service and made immediately available to other users. Latency must be under 500ms. Which architecture best meets these requirements?
-
A
Amazon AppSync with DynamoDB for file metadata and S3 for file storage with real-time subscriptions
✓ Correct
-
B
Elastic Load Balancer with EC2 instances running WebSocket servers connecting to ElastiCache
-
C
API Gateway with Lambda backend storing files in S3 and using SQS for asynchronous processing
-
D
S3 with CloudFront distribution triggering Lambda through S3 events for processing
Explanation
AppSync provides real-time subscriptions through WebSockets, enabling instant updates when files are uploaded. DynamoDB stores metadata with millisecond latency, S3 stores file content, and GraphQL subscriptions notify other users immediately of changes.
A solutions architect is designing a backup and recovery solution for a critical database workload. The solution must support point-in-time recovery (PITR) for 35 days. Which RDS configuration best meets this requirement?
-
A
Use RDS backup export to S3 with versioning enabled and restore capability for up to 35 days
-
B
Enable automated backups with retention set to maximum 35 days, backed by transaction logs for PITR
✓ Correct
-
C
Enable automated backups with a retention period of 35 days and enable binary logging
-
D
Configure snapshots to run daily and retain them for 35 days using a Lambda lifecycle policy
Explanation
RDS automated backups with retention set to 35 days combined with transaction logs automatically provide point-in-time recovery throughout the entire retention period. This is the native RDS feature designed for this purpose.
A company is migrating a monolithic application to microservices on Amazon ECS. Each microservice needs to communicate securely with other services and requires service discovery. What is the most appropriate AWS-native solution?
-
A
Deploy AWS App Mesh with service discovery to handle service-to-service communication and observability
✓ Correct
-
B
Use AWS Cloud Map for service discovery with security groups for network isolation between tasks
-
C
Use Amazon ECS service discovery with Consul and implement TLS certificates for encryption
-
D
Configure Application Load Balancer with target groups per microservice and custom DNS entries in Route 53
Explanation
AWS App Mesh provides service mesh capabilities including automatic service discovery, traffic management, security policies, and observability across microservices without requiring application code changes or external tools.
An organization requires that all AWS API calls be logged and analyzed for compliance purposes. The logs must be immutable and queryable. Which service combination best addresses this requirement?
-
A
VPC Flow Logs with CloudWatch Logs and custom Lambda functions for parsing and analysis
-
B
Config recorder with S3 backend and AWS Glue for data cataloging and analysis
-
C
CloudTrail with S3 backend, MFA delete enabled, and AWS Athena for querying
✓ Correct
-
D
CloudTrail with CloudWatch Logs destination and CloudWatch Insights for analysis
Explanation
CloudTrail logs all AWS API calls, S3 with MFA delete prevents tampering (immutability requirement), and Athena enables complex SQL queries on the logs. This combination provides compliance-grade audit logging with strong integrity controls.
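A hypothetical Athena query over CloudTrail data might look like the following; the table name `cloudtrail_logs` is an assumption (the CloudTrail console can generate the table definition for you):

```python
# Example compliance query: who deleted S3 objects recently?
# Column names match the CloudTrail-generated Athena table schema.
query = """
SELECT eventtime, useridentity.arn, eventname, sourceipaddress
FROM cloudtrail_logs
WHERE eventsource = 's3.amazonaws.com'
  AND eventname = 'DeleteObject'
ORDER BY eventtime DESC
LIMIT 100;
"""
print(query.strip())
```

Athena bills per data scanned, so partitioning the CloudTrail table by region and date keeps such audits cheap.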
A startup is building a mobile application with occasional spikes in traffic. The backend processes user requests through a REST API. Which serverless architecture minimizes operational overhead and costs?
-
A
AppSync GraphQL API with Lambda resolvers and DynamoDB with DAX for caching
-
B
Lightsail instances with auto-scaling and managed database service for predictable costs
-
C
API Gateway with Lambda backend, DynamoDB for storage, and SQS for asynchronous processing
✓ Correct
-
D
Application Load Balancer with ECS Fargate tasks and RDS Aurora Serverless for database
Explanation
API Gateway and Lambda scale automatically to handle traffic spikes with pay-per-request pricing. DynamoDB on-demand mode eliminates capacity planning, and SQS decouples components. This combination minimizes operational overhead and optimizes costs for variable workloads.
An organization needs to implement cost allocation and chargeback across multiple business units using AWS. Each business unit operates in separate AWS accounts. What is the most effective approach?
-
A
Use AWS Cost Explorer with cost allocation tags and generate reports per business unit
-
B
Create separate AWS billing accounts per business unit and track costs manually through billing reports
-
C
Use AWS Budgets to set spending limits per business unit and track overspending with alerts
-
D
Enable consolidated billing in AWS Organizations and configure cost allocation tags across all accounts
✓ Correct
Explanation
Consolidated billing in AWS Organizations combined with consistent cost allocation tagging enables automated cost tracking and chargeback across accounts. This provides accurate cost attribution and enables detailed analysis through Cost Explorer.
A company is running a mission-critical application on Amazon EC2 instances across three Availability Zones. The application requires sub-millisecond latency for database operations. Which database solution would best meet these requirements while maintaining high availability?
-
A
Amazon ElastiCache for Redis with cluster mode enabled across three AZs
-
B
Amazon RDS Multi-AZ with read replicas in each AZ
-
C
Amazon DynamoDB with global tables and DAX caching
✓ Correct
-
D
Amazon Aurora with synchronous replication across AZs and local caching
Explanation
DynamoDB with DAX provides sub-millisecond latency through in-memory caching while maintaining high availability across AZs. Global tables ensure replication without application complexity.
Your organization needs to migrate a large on-premises data warehouse to AWS while minimizing disruption. The warehouse contains 500 TB of data and receives constant updates. What approach should you recommend?
-
A
AWS Snowball Edge for initial transfer and AWS Glue for ETL transformation
-
B
Provision AWS Storage Gateway and perform nightly snapshots to S3
-
C
Use AWS Snowball to transfer data, then set up AWS DMS for ongoing replication
✓ Correct
-
D
Direct internet transfer using AWS DataSync with scheduled incremental syncs
Explanation
AWS Snowball efficiently handles 500 TB transfers, while AWS DMS provides continuous replication of updates, minimizing downtime and data loss during the migration.
An organization requires encryption at rest for all S3 objects, with the ability to manage keys centrally and audit key usage across the entire AWS account. Which encryption approach best satisfies these requirements?
-
A
AWS KMS with customer-managed keys and CloudTrail logging
✓ Correct
-
B
Client-side encryption with application-managed keys before upload
-
C
S3 default encryption with AWS managed keys (SSE-S3)
-
D
S3 server-side encryption with customer-provided keys (SSE-C)
Explanation
AWS KMS with customer-managed keys provides centralized key management and enables full audit trails through CloudTrail, meeting both control and compliance requirements.
A startup is designing a multi-tenant SaaS application where each customer's data must be completely isolated. The application needs to scale to thousands of customers with minimal operational overhead. Which architecture should be recommended?
-
A
Separate RDS instance per customer with individual VPCs
-
B
Amazon Aurora with separate schemas per customer and application-enforced tenant isolation
-
C
Multiple DynamoDB tables, one per customer, within shared VPC with IAM tenant policies
-
D
Single DynamoDB table with customer ID as partition key and row-level security policies
✓ Correct
Explanation
DynamoDB with partition key-based isolation provides automatic scaling, minimal operational overhead, and cost-effective isolation for thousands of tenants compared to per-customer infrastructure.
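Partition-key isolation is typically enforced with DynamoDB fine-grained access control. A sketch, assuming a Cognito-federated identity supplies the tenant ID (table name, region, and account ID are placeholders):

```python
import json

# Illustrative fine-grained access policy: every request is restricted to
# items whose partition key equals the caller's identity.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/SaaSData",
            "Condition": {
                # dynamodb:LeadingKeys constrains the partition key values
                # a request may touch; the variable is substituted per caller.
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```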
Your company has a Windows Server 2019 application that requires persistent state and runs on EC2 instances in an Auto Scaling group. How should you ensure data persistence across instance replacements?
-
A
Use Amazon FSx for Windows File Server with persistent storage across instances
✓ Correct
-
B
Use EBS volumes with 'DeleteOnTermination' set to false and attach to new instances
-
C
Configure Amazon S3 with VPC endpoint and write state objects periodically
-
D
Store state in Amazon EFS mounted to all instances in the Auto Scaling group
Explanation
Amazon FSx for Windows File Server provides persistent, highly available file storage optimized for Windows applications, automatically handling failover without manual reattachment of volumes.
An organization processes sensitive healthcare data and must comply with HIPAA. The company wants to establish a hybrid cloud architecture connecting on-premises servers to AWS. Which network solution provides the required encryption and compliance controls?
-
A
AWS Site-to-Site VPN with customer gateway and AWS VPN CloudWatch metrics
-
B
AWS PrivateLink endpoints with VPC peering and application-level TLS
-
C
NAT Gateway with security groups and network ACLs for traffic filtering
-
D
AWS Direct Connect with VLANs for traffic isolation and MACsec encryption
✓ Correct
Explanation
AWS Direct Connect with MACsec provides dedicated, encrypted connectivity meeting HIPAA's strict encryption and audit requirements better than internet-based VPN solutions.
A company runs a web application that experiences unpredictable traffic spikes. The application uses RDS MySQL, and database performance degrades significantly during peaks. What solution addresses the root cause?
-
A
Enable RDS read replicas and route read traffic away from the primary instance
-
B
Implement Amazon ElastiCache in front of the database to reduce query load
✓ Correct
-
C
Increase RDS instance size to accommodate peak loads
-
D
Distribute traffic using Application Load Balancer with connection draining
Explanation
ElastiCache caching layer reduces database queries by caching frequently accessed data, providing the most cost-effective solution for handling traffic spikes without overprovisioning the database.
An organization has established AWS Control Tower for multi-account governance. They need to enforce a policy requiring all EC2 instances to have monitoring enabled. What is the most scalable approach?
-
A
Create a custom Config rule that identifies non-compliant instances and send SNS notifications
-
B
Implement SCPs in AWS Organizations to deny EC2 launch without CloudWatch monitoring
-
C
Use AWS Systems Manager to deploy CloudWatch agent across all instances via documents
-
D
Develop a Control Tower control (preventive) that blocks non-compliant instance launches
✓ Correct
Explanation
Control Tower controls provide automated, preventive enforcement across all accounts in the organization, preventing non-compliant resources from being created rather than detecting violations after the fact.
A media company streams video content globally and needs to optimize delivery while reducing bandwidth costs. The content is accessed from multiple geographic regions with varying popularity patterns. Which combination of services is most suitable?
-
A
CloudFront with S3 origin, S3 Intelligent-Tiering, and transfer acceleration
-
B
Application Load Balancer with geographic routing and S3 acceleration
-
C
Global Accelerator with Application Load Balancer endpoints in each region
-
D
CloudFront distribution with S3 origin and origin access identity
✓ Correct
Explanation
CloudFront with S3 origin and origin access identity provides cost-effective global content delivery with caching benefits, while the origin access identity restricts direct S3 access.
A solutions architect is designing a disaster recovery strategy for a critical database. The RTO is 15 minutes and RPO is 1 minute. Current daily backup takes 4 hours. Which approach best meets these requirements?
-
A
Automated hourly RDS snapshots with continuous log backups to S3
-
B
RDS Multi-AZ with synchronous replication and automated failover to standby
✓ Correct
-
C
Amazon Aurora with Multi-AZ deployment and backup to cross-region read replica
-
D
Daily snapshots with restore testing and manual failover procedures
Explanation
RDS Multi-AZ with automated failover provides RTO of 1-2 minutes and RPO near zero through synchronous replication, easily meeting the 15-minute RTO and 1-minute RPO requirements.
A company uses AWS Lambda functions that process files from S3. During peak usage, Lambda functions frequently timeout. The processing logic cannot be modified, but execution time must be improved. What solution should be recommended?
-
A
Implement Lambda@Edge to process files closer to users
-
B
Use AWS Step Functions to orchestrate multiple Lambda invocations
-
C
Increase Lambda memory allocation to improve CPU performance
✓ Correct
-
D
Enable Lambda Provisioned Concurrency to reduce cold start overhead
Explanation
Increasing Lambda memory proportionally increases CPU allocation, which directly improves execution performance without modifying the code, addressing the timeout issue.
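The proportionality can be sketched numerically; the 1,769 MB per vCPU figure is from AWS documentation, and the examples below are illustrative:

```python
# Lambda allocates CPU linearly with configured memory; at roughly
# 1,769 MB a function gets the equivalent of one full vCPU.
MB_PER_VCPU = 1769

def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share for a given Lambda memory setting."""
    return memory_mb / MB_PER_VCPU

# Doubling memory from 512 MB to 1024 MB roughly doubles available CPU,
# so CPU-bound processing time roughly halves without any code change.
for mem in (512, 1024, 1769, 3008):
    print(f"{mem} MB -> ~{approx_vcpus(mem):.2f} vCPU")
```

Because billing is memory-seconds, faster execution at higher memory can leave the cost nearly unchanged while eliminating the timeouts.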
An organization must implement cross-account access for developers to assume roles in sandbox accounts. The solution must support single sign-on and enforce multi-factor authentication. Which implementation best addresses these requirements?
-
A
Create IAM users in each account with shared passwords and CLI access
-
B
Cross-account IAM roles with trust relationships and root account API keys
-
C
AWS IAM Identity Center configured with SAML federation and MFA enforcement
✓ Correct
-
D
Cognito user pools with cross-account role assumption and STS GetSessionToken
Explanation
AWS IAM Identity Center provides centralized single sign-on management with native MFA support and simplified cross-account role assumption for developers.
A solutions architect is designing an application that must achieve RPO of 5 seconds for a critical database. What technical limitation makes this difficult to achieve with traditional RDS?
-
A
Synchronous replication introduces network latency preventing 5-second RPO targets
-
B
RDS does not support sub-minute continuous backup intervals by design
✓ Correct
-
C
Binary logging overhead exceeds 50% of database performance capacity
-
D
RDS backup window must be at least 1 hour long regardless of database size
Explanation
RDS backup frequency is limited by automated backup intervals; achieving 5-second RPO requires continuous backup mechanisms like Aurora with backtrack or external replication solutions.
A company stores confidential documents in S3 and must ensure only specific corporate IP addresses can access the bucket. The company also needs to allow access from a partner company through their VPN. Which S3 bucket policy approach is most appropriate?
-
A
Add IpAddress condition with both corporate and partner IP ranges in single statement
✓ Correct
-
B
Use origin-based access control with CloudFront to filter by IP
-
C
Create separate bucket policies for each IP range with Deny conditions
-
D
Implement S3 Object Lambda to validate source IP on object retrieval
Explanation
An S3 bucket policy with an IpAddress condition using an array of allowed IP CIDR blocks efficiently restricts access to both corporate and partner networks in a single policy statement.
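One common way to make the IP restriction actually enforcing is a single Deny statement with `NotIpAddress` covering everything outside the allowed ranges; the bucket name and CIDR blocks below are placeholders:

```python
import json

# Sketch: deny all S3 actions unless the request comes from the corporate
# or partner CIDR range. CIDRs and bucket name are illustrative.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedIps",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::confidential-docs",
                "arn:aws:s3:::confidential-docs/*",
            ],
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]
                }
            },
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

Adding the partner's VPN egress range is just another entry in the `aws:SourceIp` array, keeping the policy to one statement.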
An application requires a distributed cache with automatic failover and persistence. The application needs strong consistency guarantees and the ability to scale horizontally with multiple read replicas. Which ElastiCache engine is most suitable?
-
A
Redis (cluster mode enabled) with automatic sharding and replication
✓ Correct
-
B
Memcached with cluster mode for distributed caching
-
C
Memcached with replication groups and automated failover
-
D
Redis (cluster mode disabled) with Multi-AZ for high availability
Explanation
Redis cluster mode enabled provides horizontal scaling with automatic sharding, persistence, strong consistency, and built-in replication with failover capabilities that Memcached cannot match.
A company uses AWS Elastic Beanstalk to deploy a Java web application across multiple environments (dev, staging, production). They need to ensure configuration differences between environments are easily manageable. What is the recommended approach?
-
A
Store environment-specific configuration in RDS and query during application startup
-
B
Create separate Beanstalk applications for each environment
-
C
Use Systems Manager Parameter Store for all configuration and reference the values through environment variables
-
D
Use a single application with multiple environments and environment-specific .ebextensions
✓ Correct
Explanation
A single Elastic Beanstalk application with multiple environments allows configuration management through environment-specific .ebextensions files and environment properties, reducing duplication and maintenance overhead.
An organization implements AWS Config to monitor compliance across its AWS account. They want to receive notifications whenever a non-compliant resource is detected and automatically remediate it. Which service combination achieves this?
-
A
AWS Config with SNS notifications and manual approval workflow for changes
-
B
AWS Config with Config rules and AWS Systems Manager remediation documents
✓ Correct
-
C
AWS Config with automatic remediation actions and CloudWatch alarms
-
D
AWS Config with EventBridge rules triggering Lambda for custom remediation
Explanation
AWS Config supports automated remediation through Systems Manager documents, providing built-in compliance monitoring and automatic remediation without custom code.
A company migrates a legacy application to microservices on Amazon ECS. Different microservices require different resource allocations and scaling patterns. What is the most flexible and cost-effective architecture?
-
A
ECS with AWS Fargate using application auto scaling based on custom metrics
✓ Correct
-
B
EC2-based ECS cluster with instance replacement and manual task management
-
C
Multiple ECS clusters, one per microservice environment with dedicated EC2 instances
-
D
Single ECS cluster with service-level auto scaling and task-level CPU/memory reservation
Explanation
ECS with Fargate eliminates infrastructure management, allows fine-grained service auto scaling with custom metrics, and provides cost efficiency through per-task billing.
An organization needs to establish network connectivity between AWS and an on-premises data center for hybrid workloads. The connection must support failover and be highly available. Traffic includes both sensitive data and non-sensitive applications. Which design provides the best balance of redundancy and cost?
-
A
AWS Direct Connect with redundant VPN and equal-cost multi-path routing
-
B
Two AWS Direct Connect connections with automatic failover to VPN backup
✓ Correct
-
C
Dual VPN connections with active-active routing and connection monitoring
-
D
Single AWS Direct Connect connection with dedicated VPN backup connection
Explanation
Two Direct Connect connections provide high-bandwidth redundancy for critical traffic, while VPN serves as a cost-effective failover option, ensuring business continuity with optimal cost management.
A developer accidentally deletes a critical DynamoDB table. The table contains real-time transaction data from the past 7 days. Which recovery option is most likely to succeed?
-
A
Use AWS Glue to recover data from S3 export snapshots
-
B
Restore from an on-demand backup created yesterday
-
C
Query CloudTrail logs to retrieve deleted item details
-
D
Restore from a continuous backup using point-in-time recovery
✓ Correct
Explanation
When a table with point-in-time recovery enabled is deleted, DynamoDB automatically creates a system backup that is retained for 35 days, allowing the table, including the past 7 days of transaction data, to be restored.
A company implements AWS Organizations with multiple member accounts and requires that all member accounts use specific VPC configurations. They want to enforce this requirement programmatically across new and existing accounts. Which solution is most appropriate?
-
A
AWS CloudFormation StackSets with organization-level deployment permissions
✓ Correct
-
B
AWS Control Tower controls to enforce VPC standards and guide account setup
-
C
SCPs to restrict VPC creation and manually deploy approved VPC configurations
-
D
AWS Systems Manager to deploy VPC CloudFormation templates to all accounts
Explanation
CloudFormation StackSets with organization permissions automatically deploy and manage consistent VPC configurations across all accounts, scaling as new accounts are added.
A solutions architect is designing a highly available API that processes real-time stock market data. The API must serve requests with sub-second latency from global locations. Which architecture component is essential?
-
A
CloudFront distribution with regional origin caching
-
B
Global Accelerator for intelligent routing to nearest endpoint
✓ Correct
-
C
Route 53 with geolocation routing and health checks
-
D
Network Load Balancer with cross-zone load balancing enabled
Explanation
AWS Global Accelerator provides optimized routing paths and automatic failover for real-time applications, ensuring sub-second latency from global locations better than standard Route 53 geolocation routing.
An organization runs batch processing jobs on EC2 instances that consume large CSV files from S3. Job execution time varies from 30 minutes to 4 hours. The company wants to minimize costs while maintaining reasonable completion times. Which approach is most cost-effective?
-
A
Use AWS Batch with Spot instances and on-demand backup for critical jobs
✓ Correct
-
B
EC2 Auto Scaling group with Spot instances and dedicated fallback capacity
-
C
AWS Glue for ETL processing with automatic scaling and Spot pricing
-
D
AWS Lambda for parallel processing of CSV chunks with concurrent execution
Explanation
AWS Batch with Spot instances automatically manages instance lifecycle and cost optimization for variable-duration workloads, while on-demand fallback ensures reliability.
A company uses Amazon Redshift for data warehousing and experiences slow query performance during peak business hours. The cluster is sized for average load and queries compete for resources. What architectural change best resolves this issue?
-
A
Increase cluster node count to provide additional compute resources
✓ Correct
-
B
Add a second Redshift cluster and distribute queries using application logic
-
C
Implement materialized views and sort keys for frequently accessed data patterns
-
D
Enable result caching and query optimization within Redshift Spectrum
Explanation
Increasing cluster nodes directly improves query throughput and resource availability, addressing peak load issues, though query optimization with materialized views and sort keys (option C) should also be pursued as a complementary measure.
An organization must prevent developers from launching expensive instance types in development accounts while allowing team leads full control. The solution must be dynamic and adjustable as business needs change. Which approach is most flexible?
-
A
Create custom IAM policies that list allowed instance types for each developer role
-
B
Use Service Control Policies to deny expensive instance types for developer IAM roles
✓ Correct
-
C
Use AWS Budgets to send alerts when spending thresholds are exceeded
-
D
Implement AWS Config rules that terminate non-compliant instances automatically
Explanation
SCPs applied to OUs or accounts prevent expensive instance launches at the API level regardless of IAM permissions, providing flexible enforcement that can be adjusted centrally without modifying multiple policies.
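A sketch of such an SCP, assuming "expensive" means large instance sizes and that team leads are exempted via a role-name pattern (instance-type patterns, the role pattern, and Sid are all illustrative):

```python
import json

# Illustrative SCP: deny large instance launches in developer accounts,
# except for principals whose role ARN matches a team-lead pattern.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyExpensiveInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringLike": {
                    "ec2:InstanceType": ["*.8xlarge", "*.16xlarge", "p4d.*"]
                },
                # Exempt team leads (hypothetical role-naming convention)
                "ArnNotLike": {
                    "aws:PrincipalArn": ["arn:aws:iam::*:role/TeamLead*"]
                },
            },
        }
    ],
}
print(json.dumps(scp, indent=2))
```

Because the SCP lives in the management account, the blocked instance-type list can be changed in one place as business needs evolve.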
A startup is building a serverless application using Lambda, API Gateway, and DynamoDB. The application experiences database throttling errors during traffic spikes despite on-demand DynamoDB billing. What is the likely root cause?
-
A
API Gateway throttling limits preventing requests from reaching DynamoDB
-
B
Insufficient Lambda concurrent execution quota causing queued requests to timeout
-
C
DynamoDB on-demand mode has burst capacity limits that are being exceeded
✓ Correct
-
D
DynamoDB on-demand mode does not support auto-scaling of write capacity
Explanation
DynamoDB on-demand mode scales automatically, but a table can instantly accommodate only up to roughly double its previous peak traffic; spikes that exceed that headroom before the table adapts are throttled.
A company implements AWS Secrets Manager to store database credentials for a production RDS instance. The database team needs read-only access to view secrets but cannot retrieve values. How should this be configured?
-
A
Create an IAM policy denying secretsmanager:GetSecretValue but allowing DescribeSecret
✓ Correct
-
B
Enable a Secrets Manager read-only user role with console-only access
-
C
Configure Secrets Manager audit logging to prevent credential exposure by design
-
D
Use Secrets Manager resource-based policy with GetSecretValue denied in principal policy
Explanation
An IAM policy denying GetSecretValue while allowing DescribeSecret and ListSecrets allows viewing metadata and secret existence without exposing actual credential values.
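A sketch of that metadata-only policy (the Sids are illustrative; `Resource` could be narrowed to specific secret ARNs):

```python
import json

# Illustrative policy: listing and describing secrets is allowed, but
# retrieving secret values is explicitly denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMetadataOnly",
            "Effect": "Allow",
            "Action": ["secretsmanager:DescribeSecret", "secretsmanager:ListSecrets"],
            "Resource": "*",
        },
        {
            "Sid": "DenyValueRetrieval",
            "Effect": "Deny",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

The explicit Deny guarantees the restriction holds even if another attached policy grants `secretsmanager:*`.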
A company is migrating a legacy monolithic application to AWS. The application currently uses a shared file system that stores documents accessed by multiple servers. Which solution provides the best scalability and fault tolerance while minimizing refactoring?
-
A
Deploy the application on a single large EC2 instance with additional EBS storage
-
B
Use Amazon S3 with a custom application layer to manage file locking and consistency
-
C
Migrate to Amazon EFS, which automatically scales and provides high availability across multiple AZs
✓ Correct
-
D
Use Amazon EBS volumes attached to multiple EC2 instances with RAID configuration
Explanation
Amazon EFS is purpose-built for shared file access across multiple EC2 instances with automatic scaling, built-in redundancy across AZs, and minimal application changes required compared to S3 or RAID solutions.
An organization needs to implement cross-account access for a Lambda function in Account A to read objects from an S3 bucket in Account B. Which combination of configurations is required?
-
A
Cross-account IAM role assumption with a trust relationship, plus an S3 bucket policy granting the assumed role access to the bucket
-
B
S3 bucket policy in Account B granting access to the root account of Account A only, without role-based permissions
-
C
Lambda execution role in Account A with S3 permissions, and an S3 bucket policy in Account B that trusts the Lambda role ARN
✓ Correct
-
D
VPC endpoint configuration in Account B and a security group rule allowing Account A traffic
Explanation
The Lambda execution role in Account A needs S3 permissions, and the S3 bucket policy in Account B must explicitly allow the principal (Lambda role ARN) from Account A to access the bucket resources.
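A sketch of both halves of the setup; the account ID, role name, and bucket name are placeholders:

```python
import json

# Half 1: identity policy attached to the Lambda execution role in Account A.
lambda_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::account-b-bucket/*",
    }],
}

# Half 2: bucket policy in Account B trusting the Account A role ARN.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/lambda-exec-role"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::account-b-bucket/*",
    }],
}
print(json.dumps(bucket_policy, indent=2))
```

Cross-account access requires both sides to agree: without the bucket policy the request is denied in Account B, and without the role policy it never leaves Account A.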
A financial services company requires that all data at rest be encrypted with customer-managed keys and that encryption keys be rotated automatically every 90 days. Which AWS service combination best meets these requirements?
-
A
AWS CloudHSM for key generation combined with AWS KMS for centralized key management and automated rotation
-
B
AWS Secrets Manager for key storage with automatic rotation policies
-
C
AWS KMS with customer master keys (CMKs) configured for automatic key rotation every 90 days, plus application-level encryption
✓ Correct
-
D
Amazon S3 server-side encryption with AES-256 and AWS Certificate Manager for key management
Explanation
AWS KMS customer managed keys support automatic rotation (annual by default, with a configurable rotation period that can be set as low as 90 days), making KMS the primary AWS service for managing customer-controlled encryption keys with rotation policies for compliance requirements.
An enterprise application experiences variable traffic patterns with peak loads occurring during business hours. The application uses a Network Load Balancer (NLB) distributing traffic to Auto Scaling groups across three AZs. Network performance degrades during peak hours despite sufficient EC2 capacity. What is the most likely cause?
-
A
The security groups are misconfigured and blocking legitimate traffic during peak hours
-
B
EC2 instances lack sufficient network interface bandwidth, and the NLB is not distributing connections evenly across AZs
-
C
The Auto Scaling group is not scaling fast enough to accommodate peak traffic
-
D
The NLB connection tracking table is exhausted due to connection reuse patterns from clients
✓ Correct
Explanation
NLBs can experience performance degradation when clients reuse connections intensively, causing the connection tracking table to become a bottleneck even with sufficient EC2 capacity, as the NLB connection state must be maintained for each flow.
A healthcare organization must comply with HIPAA regulations and needs to ensure that PHI (Protected Health Information) stored in Amazon RDS is encrypted both in transit and at rest. Which configuration satisfies this requirement?
-
A
Use RDS read replicas in multiple regions with automatic failover and AWS Certificate Manager certificates
-
B
Enable RDS encryption at rest using AWS KMS and configure the DB security group to enforce SSL/TLS connections only
-
C
Enable RDS encryption at rest using AWS KMS, modify the DB parameter group to enforce SSL/TLS, and require SSL certificates for client connections
✓ Correct
-
D
Deploy RDS in a private subnet, use VPC encryption, and enable automated backups with encryption
Explanation
HIPAA compliance requires both encryption at rest (RDS KMS encryption) and encryption in transit (enforcing SSL/TLS via DB parameter groups and client certificate requirements), making this the comprehensive solution.
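For PostgreSQL-based RDS engines, the in-transit half of this requirement is enforced through the rds.force_ssl parameter in a custom DB parameter group. A sketch of the ModifyDBParameterGroup request (the group name is a placeholder; MySQL/MariaDB engines use require_secure_transport instead):

```python
def force_ssl_params(group_name: str) -> dict:
    """Build a ModifyDBParameterGroup request that rejects non-SSL connections."""
    return {
        "DBParameterGroupName": group_name,  # placeholder custom parameter group
        "Parameters": [{
            "ParameterName": "rds.force_ssl",  # PostgreSQL-family parameter
            "ParameterValue": "1",
            "ApplyMethod": "immediate",
        }],
    }

def enforce_ssl(group_name: str) -> None:
    import boto3  # lazy import keeps the builder testable offline
    boto3.client("rds").modify_db_parameter_group(**force_ssl_params(group_name))
```

Encryption at rest is set separately at instance creation (StorageEncrypted plus a KMS key) and cannot be toggled on an existing unencrypted instance.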
A SaaS provider deploys containerized microservices on Amazon ECS using Fargate. They require the ability to store sensitive credentials for database access that can be rotated without redeploying containers. What is the recommended approach?
-
A
Embed credentials directly in the Docker image and use Systems Manager Parameter Store for rotation
-
B
Use environment variables in the CloudFormation template and configure ECS to refresh every hour
-
C
Store credentials in ECS task role environment variables and use Lambda to update them daily
-
D
Store credentials in AWS Secrets Manager, reference them in the ECS task definition, and use IAM roles to grant the ECS task access
✓ Correct
Explanation
AWS Secrets Manager is purpose-built for this use case, integrating with ECS task definitions through IAM roles, enabling automatic rotation without container redeployment, and providing audit logging for compliance.
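A minimal sketch of the task-definition wiring (family name, image URI, and ARNs are placeholders): the container references the secret by ARN under `secrets`, and it is the execution role, not the image, that grants access:

```python
def fargate_task_def(secret_arn: str, exec_role_arn: str) -> dict:
    """Register-task-definition payload; no credential is baked into the image."""
    return {
        "family": "orders-api",                 # placeholder family name
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": "256",
        "memory": "512",
        "executionRoleArn": exec_role_arn,      # must allow secretsmanager:GetSecretValue
        "containerDefinitions": [{
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
            "secrets": [
                # injected as an environment variable at container start
                {"name": "DB_PASSWORD", "valueFrom": secret_arn},
            ],
        }],
    }
```

Because the secret is resolved at task launch, rotating it in Secrets Manager takes effect on the next task replacement without any image rebuild.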
An organization operates a multi-region disaster recovery strategy with a standby RDS database in a secondary region. The primary database experiences a catastrophic failure. Recovery time objective (RTO) is 15 minutes. Which approach provides the fastest recovery?
-
A
Use AWS DMS to replicate data from a tertiary backup location and restore to the primary region
-
B
Promote the read replica in the secondary region to a standalone database and update application connection strings
✓ Correct
-
C
Restore from the latest automated backup to a new RDS instance in the primary region
-
D
Manually create a new RDS instance and restore from S3-exported snapshots
Explanation
Promoting a cross-region read replica is the fastest recovery method, typically completing within minutes, which aligns with the 15-minute RTO, whereas restoring from backups adds significant overhead.
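The promotion itself is a single API call against the replica in the secondary region; a sketch (the replica identifier and region are placeholders):

```python
def promotion_params(replica_id: str, backup_retention_days: int = 7) -> dict:
    """Build a PromoteReadReplica request; the replica becomes standalone and writable."""
    return {
        "DBInstanceIdentifier": replica_id,
        # re-enable automated backups on the promoted, now-primary database
        "BackupRetentionPeriod": backup_retention_days,
    }

def promote(replica_id: str, region: str = "us-west-2") -> None:
    import boto3  # lazy import keeps the builder testable offline
    boto3.client("rds", region_name=region).promote_read_replica(
        **promotion_params(replica_id)
    )
```

Updating application connection strings afterward is usually handled by repointing a Route 53 record or an RDS Proxy endpoint rather than redeploying the application.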
A company uses AWS CloudFormation to manage infrastructure as code. During a stack update, a critical parameter requires changing, but the change would force replacement of a database instance containing production data. How should the architect prevent accidental data loss?
-
A
Set the 'DeletionPolicy' attribute to 'Snapshot' for the database resource and use 'CreationPolicy' to validate the update
-
B
Use the 'AWS::CloudFormation::Stack' update policy with 'DisableApiTermination' set to true
-
C
Configure the stack retention policy and use RDS automated backups as the primary protection mechanism
-
D
Use 'UpdateReplacePolicy' set to 'Snapshot' and implement manual approval in the CloudFormation change set review process
✓ Correct
Explanation
The 'UpdateReplacePolicy' attribute directs CloudFormation to snapshot the old database instance before replacing it, and requiring change set review with manual approval prevents accidental destructive updates from reaching the stack.
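A sketch of the guarded resource, expressed here as the Python dict equivalent of the template fragment (the Properties values are placeholders):

```python
def db_instance_resource() -> dict:
    """CloudFormation RDS resource guarded against data loss on replace or delete."""
    return {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Snapshot",       # snapshot if the resource is deleted
        "UpdateReplacePolicy": "Snapshot",  # snapshot the old instance before replacement
        "Properties": {
            "DBInstanceClass": "db.r6g.large",  # placeholder properties
            "Engine": "postgres",
            "AllocatedStorage": "100",
        },
    }
```

Both policies sit at the resource level, alongside Type and Properties, not inside Properties; placing them incorrectly causes a template validation error.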
A media streaming company processes large video files using a batch processing pipeline. Jobs are submitted to an SQS queue and processed by EC2 instances. During off-peak hours, no jobs run. Which optimization reduces costs without sacrificing service availability?
-
A
Use EC2 Spot Instances with an Auto Scaling group that scales to zero during off-peak hours and maintains a minimum of one on-demand instance
-
B
Replace EC2 instances with AWS Batch, which automatically manages compute resources and scales based on job queue depth
✓ Correct
-
C
Implement AWS Lambda to process jobs directly from SQS, eliminating the need for EC2 instances entirely
-
D
Use reserved EC2 instances with scheduled scaling to reduce costs during predictable off-peak periods
Explanation
AWS Batch is purpose-built for batch processing workloads, automatically provisioning and scaling compute resources based on job demand, eliminating idle capacity during off-peak hours while maintaining high availability.
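The scale-to-zero behavior comes from the compute environment's minvCpus setting; a sketch of the CreateComputeEnvironment request (names, subnets, and role ARN are placeholders):

```python
def batch_compute_env(env_name: str, subnets: list, sg_ids: list, instance_role_arn: str) -> dict:
    """CreateComputeEnvironment request that scales to zero when the queue is empty."""
    return {
        "computeEnvironmentName": env_name,
        "type": "MANAGED",                # AWS Batch provisions the instances
        "computeResources": {
            "type": "EC2",                # or "SPOT" for further savings on interruptible jobs
            "minvCpus": 0,                # no idle capacity during off-peak hours
            "maxvCpus": 256,
            "instanceTypes": ["optimal"],
            "subnets": subnets,
            "securityGroupIds": sg_ids,
            "instanceRole": instance_role_arn,
        },
    }
```

With minvCpus at 0, Batch terminates all instances once the job queue drains, so off-peak cost falls to zero without any scheduled scaling logic.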
An architect is designing a solution for a global application requiring low-latency data access across multiple regions. The data is frequently read but infrequently updated. Current design uses multi-master RDS with high replication lag causing consistency issues. What is the recommended alternative?
-
A
Use Amazon Aurora Global Database with read-only replicas in secondary regions, accepting eventual consistency for read-heavy workloads
✓ Correct
-
B
Deploy ElastiCache with cross-region replication and configure the application to use read-through caching patterns with TTL-based invalidation
-
C
Implement a custom multi-region solution using S3 cross-region replication with CloudFront and Lambda for consistency management
-
D
Implement Amazon DynamoDB Global Tables for automatic multi-region replication with eventual consistency, and use DynamoDB Streams for event-driven updates
Explanation
Aurora Global Database provides the best balance for read-heavy, infrequently-updated data with automatic replication to read-only secondaries across regions and RPO of approximately 1 second, superior to DynamoDB for relational data patterns.
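Attaching an existing Aurora cluster as the primary of a global database is a single request; a sketch (both identifiers are placeholders, and secondary regions are added afterward by creating read-only clusters that reference the same global identifier):

```python
def global_cluster_params(global_id: str, primary_cluster_arn: str) -> dict:
    """CreateGlobalCluster request attaching an existing Aurora cluster as primary."""
    return {
        "GlobalClusterIdentifier": global_id,
        # existing regional cluster that becomes the writable primary
        "SourceDBClusterIdentifier": primary_cluster_arn,
    }

def create_global_cluster(global_id: str, primary_cluster_arn: str) -> None:
    import boto3  # lazy import keeps the builder testable offline
    boto3.client("rds").create_global_cluster(
        **global_cluster_params(global_id, primary_cluster_arn)
    )
```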