61 Practice Questions & Answers
You are designing a multi-region Amazon RDS deployment for a globally distributed application. Which of the following approaches provides the lowest read latency for users across multiple geographic regions?
-
A
Create independent RDS instances in each region and use application-level routing to direct reads to the nearest database
-
B
Deploy Aurora Global Database, which automatically replicates data across regions with local read-only clusters
✓ Correct
-
C
Deploy a single RDS instance in us-east-1 with read replicas in each target region
-
D
Use RDS Multi-AZ in each region with cross-region read replicas for disaster recovery only
Explanation
Aurora Global Database provides the lowest read latency by maintaining synchronized read-only secondary clusters that users in each region can query locally, eliminating cross-region network latency for reads. Unlike engine-level cross-region read replicas (option C), replication happens at the storage layer with typically sub-second lag.
Your team is migrating a legacy on-premises Oracle database to Amazon RDS for Oracle. During the assessment phase, you discover the database uses Oracle-specific features like Advanced Security Option (ASO) for encryption. What should you consider?
-
A
ASO can only be used if the database is deployed as a custom RDS option group with manual configuration
-
B
ASO is fully supported in RDS for Oracle and can be enabled through the AWS Management Console
-
C
ASO requires a license migration and is available only in the Bring Your Own License (BYOL) model
✓ Correct
-
D
RDS for Oracle does not support ASO; you must use AWS KMS encryption instead
Explanation
Oracle Advanced Security Option is licensed separately and requires Oracle Database Enterprise Edition, which RDS for Oracle offers only under the Bring Your Own License (BYOL) model. The License Included model still provides storage-level encryption via AWS KMS.
You need to analyze slow query logs in Amazon RDS for MySQL to identify performance bottlenecks. Which AWS service should you use to correlate database metrics with application logs?
-
A
Amazon RDS Performance Insights combined with Enhanced Monitoring
✓ Correct
-
B
Amazon Athena querying RDS exported slow query logs from S3
-
C
AWS X-Ray service traces correlated with RDS slow query logs
-
D
Amazon CloudWatch Logs Insights with CloudWatch Metrics
Explanation
RDS Performance Insights provides database load monitoring while Enhanced Monitoring offers OS-level metrics. Together they correlate database activity with system resource utilization to identify bottlenecks. CloudWatch Logs Insights is primarily for log analysis, not real-time correlation.
Your company runs a MySQL database on RDS with automated backups enabled. A developer accidentally deletes critical data at 14:00 UTC. You need to recover the database to 13:50 UTC. What is the fastest recovery method?
-
A
Create a read replica from the latest backup and manually delete transactions after 13:50 UTC
-
B
Restore from the latest snapshot and then use binary logs to replay transactions up to 13:50 UTC
-
C
Use point-in-time restore (PITR) to recover the database to 13:50 UTC
✓ Correct
-
D
Export the database to S3 using a snapshot taken before 14:00 and restore selectively
Explanation
Point-in-time restore allows you to recover the database to any specific time within the backup retention period (up to 35 days). This is the fastest and most straightforward method for recovering to 13:50 UTC without manual intervention.
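As a sketch, the point-in-time restore maps to a single RDS API call. The instance identifiers and timestamp below are hypothetical; the parameter names follow the boto3 `restore_db_instance_to_point_in_time` operation.

```python
from datetime import datetime, timezone

def pitr_restore_params(source_id, target_id, restore_time):
    """Build parameters for rds.restore_db_instance_to_point_in_time.

    Identifiers are hypothetical; in practice the dict is passed to a
    boto3 RDS client. PITR always restores into a *new* instance.
    """
    return {
        "SourceDBInstanceIdentifier": source_id,
        "TargetDBInstanceIdentifier": target_id,
        "RestoreTime": restore_time,         # must lie within the retention window
        "UseLatestRestorableTime": False,    # we want 13:50 UTC, not "latest"
    }

params = pitr_restore_params(
    "prod-mysql",
    "prod-mysql-recovered",
    datetime(2024, 5, 1, 13, 50, tzinfo=timezone.utc),
)
```

After the restore completes, the application is repointed at the new instance's endpoint; the original instance (containing the accidental delete) can be kept for forensics.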
You are designing a database solution for an IoT application that ingests millions of sensor readings per second. Traditional relational databases are showing performance degradation. Which AWS database service is most suitable?
-
A
Amazon DocumentDB for flexible schema and high write throughput
-
B
Amazon RDS with very large instance type and high IOPS provisioned storage
-
C
Amazon DynamoDB with on-demand billing mode and appropriate partition key design
✓ Correct
-
D
Amazon Aurora with auto-scaling write capacity and read replicas
Explanation
DynamoDB is purpose-built for handling massive write throughput (millions of requests/second) with on-demand scaling. The partition key design ensures even distribution across partitions for IoT sensor data ingestion patterns.
Your organization requires encryption of data in transit for all database connections to comply with regulatory requirements. Which database service option provides transparent TLS encryption enforcement?
-
A
Aurora PostgreSQL with the rds.force_ssl parameter set to 1 in the DB cluster parameter group
✓ Correct
-
B
RDS with the require_secure_transport parameter set to ON for MySQL or PostgreSQL
-
C
Neptune with mandatory certificate validation through the Neptune IAM database authentication
-
D
DynamoDB with SSL/TLS connection requirement in the IAM policy for the endpoint
Explanation
Aurora PostgreSQL uses the rds.force_ssl parameter in the cluster parameter group to enforce SSL/TLS for all client connections without application changes. The require_secure_transport parameter (option B) exists only for MySQL-family engines, not PostgreSQL.
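On the client side, enforcing TLS is a connection-string concern. A minimal sketch using libpq-style keywords; the host, database name, and CA-bundle filename are placeholders:

```python
def pg_dsn(host, dbname, user,
           sslmode="verify-full", sslrootcert="rds-ca-bundle.pem"):
    """Build a libpq-style DSN that requires an encrypted, verified connection.

    The host, database, and CA-bundle filename are placeholders. With
    rds.force_ssl=1 the server already rejects plaintext connections;
    sslmode=verify-full additionally has the client verify the server
    certificate against the downloaded RDS CA bundle.
    """
    return (f"host={host} dbname={dbname} user={user} "
            f"sslmode={sslmode} sslrootcert={sslrootcert}")

dsn = pg_dsn("app.cluster-abc123.us-east-1.rds.amazonaws.com", "appdb", "app_user")
```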
You discover that your RDS PostgreSQL database is experiencing connection pool exhaustion during peak traffic. The application uses a standard PostgreSQL JDBC driver. What is the most appropriate solution?
-
A
Deploy a third-party connection pooling tool like PgBouncer on an EC2 instance between the application and database
-
B
Switch to Aurora PostgreSQL which has a higher default connection limit
-
C
Increase the max_connections parameter in the DB parameter group to handle more concurrent connections
-
D
Implement Amazon RDS Proxy to manage connection pooling and reduce the number of direct database connections
✓ Correct
Explanation
RDS Proxy is an AWS-managed database proxy that provides connection pooling, reducing the number of database connections needed. It improves scalability without increasing the DB parameter group's max_connections setting.
Your DynamoDB table experiences sudden spikes in traffic that cause throttling during certain times of the day. You want to avoid capacity planning delays. Which billing and auto-scaling strategy should you implement?
-
A
Implement a reserved capacity plan combined with on-demand burst capacity
-
B
Use provisioned capacity with CloudWatch alarms triggering manual capacity updates via Lambda
-
C
Use on-demand billing mode which automatically scales to accommodate request volume
✓ Correct
-
D
Use provisioned capacity mode with DynamoDB auto-scaling configured to increase capacity gradually
Explanation
On-demand billing mode is ideal for unpredictable traffic spikes as it automatically scales read and write capacity without delays or configuration. You pay per request rather than provisioning capacity in advance.
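Switching an existing table to on-demand is a single UpdateTable call. A parameter-building sketch (the table name is illustrative; in practice the dict is passed to a boto3 DynamoDB client):

```python
def on_demand_switch_params(table_name):
    """Build parameters for dynamodb.update_table (table name is illustrative).

    Note: AWS allows switching a table's billing mode only once per 24 hours.
    """
    return {"TableName": table_name, "BillingMode": "PAY_PER_REQUEST"}

params = on_demand_switch_params("sensor-events")
```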
You are migrating a large SQL Server database from on-premises to AWS. The source database has full-text search (FTS) indexes that are heavily used. What is the recommended approach?
-
A
Use AWS Database Migration Service (DMS) with continuous replication and reconfigure FTS indexes on RDS for SQL Server
✓ Correct
-
B
Use AWS DMS with the Continuous Load feature and migrate FTS functionality to Amazon OpenSearch for search queries
-
C
Use native SQL Server backup/restore to S3, then restore to RDS for SQL Server preserving all indexes
-
D
Migrate using DMS in full-load mode, then manually rebuild all FTS indexes after the migration completes
Explanation
RDS for SQL Server supports full-text search indexes. AWS DMS will migrate the database, but FTS indexes need to be reconfigured since they may reference system catalogs differently in the RDS environment.
Your analytics team requires ad-hoc SQL queries against data stored in Amazon S3 without maintaining a separate data warehouse. Which AWS service provides the best balance of performance and cost?
-
A
Amazon RDS for PostgreSQL with external tables via FDW extension
-
B
Amazon Athena for serverless SQL queries directly against S3 data
✓ Correct
-
C
Amazon EMR with Hive for distributed SQL processing across S3 data
-
D
Amazon Redshift for high-performance analytics with Redshift Spectrum
Explanation
Amazon Athena is purpose-built for ad-hoc SQL queries against S3 data without requiring infrastructure management. It provides excellent cost-effectiveness for sporadic query patterns and is easier to set up than Redshift Spectrum.
You need to ensure that your Amazon Aurora database can tolerate an entire AWS region failure. Which combination of features should you implement?
-
A
Aurora Multi-AZ deployment with automated backups and cross-region read replicas
-
B
Aurora Global Database with the primary region in us-east-1 and a secondary region in us-west-2
✓ Correct
-
C
Aurora provisioned capacity with read replicas across all availability zones in a single region
-
D
Aurora Multi-AZ with enhanced backup retention and manual cross-region snapshots
Explanation
Aurora Global Database is specifically designed for region-level disaster recovery. It maintains a read-only secondary database in another region with near-zero RPO and enables fast failover if the primary region fails.
Your DynamoDB table uses a composite primary key (partition key + sort key) and you need to query items where the sort key begins with a specific prefix. What is the most efficient query operation?
-
A
Use Query with KeyConditionExpression using begins_with() function on the sort key
✓ Correct
-
B
Use GSI with the sort key as the partition key and query by begins_with()
-
C
Use Scan with FilterExpression to find items matching the prefix pattern
-
D
Use Query with a FilterExpression that applies begins_with() to narrow results
Explanation
The Query operation with begins_with() in the KeyConditionExpression is the most efficient approach. It directly uses the sort key condition to retrieve only matching items without scanning the entire table or using filters.
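A minimal sketch of the low-level Query request, assuming placeholder attribute names `pk` and `sk`:

```python
def prefix_query_params(table, pk_value, prefix):
    """Build a low-level DynamoDB Query request ('pk'/'sk' are placeholder
    attribute names).

    Because begins_with appears in the KeyConditionExpression, only matching
    items are read and billed -- unlike a FilterExpression, which filters
    after the read has already consumed capacity.
    """
    return {
        "TableName": table,
        "KeyConditionExpression": "pk = :pk AND begins_with(sk, :prefix)",
        "ExpressionAttributeValues": {
            ":pk": {"S": pk_value},
            ":prefix": {"S": prefix},
        },
    }

params = prefix_query_params("orders", "customer#42", "2024-05")
```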
You are implementing cross-region replication for a production DynamoDB table. During testing, you notice replication latency of 2-3 seconds between regions. Your application requires near-instantaneous consistency. What should you do?
-
A
Change replication to use DynamoDB Streams with Lambda for immediate global updates
-
B
Switch to a single-region DynamoDB table and use CloudFront for caching to reduce latency
-
C
Implement an application-level caching layer that reads from the local region and handles replication delays
✓ Correct
-
D
Increase the DynamoDB write capacity in all regions to improve replication speed
Explanation
DynamoDB Global Tables have inherent replication latency (typically under 1 second, but can reach 2-3 seconds during peak load). Applications requiring stronger consistency should implement local caching or read-your-writes patterns.
Your company runs a multi-tenant SaaS application using RDS for PostgreSQL. Each customer's data must be isolated and encrypted separately. What is the recommended approach?
-
A
Use RDS with customer-managed encryption keys and partition customer data by schema
-
B
Implement application-level encryption before storing data and use a single RDS cluster with logical separation
-
C
Use separate RDS instances for each customer with independent encryption keys managed in AWS KMS
-
D
Use a single RDS instance with row-level security (RLS) policies and application-level encryption per customer
✓ Correct
Explanation
PostgreSQL row-level security (RLS) provides tenant isolation at the database level while application-level encryption ensures per-customer data protection. This is more cost-effective than separate instances while maintaining security.
You need to migrate a large MongoDB database to Amazon DocumentDB. The migration requires zero downtime. Which AWS service should you use?
-
A
AWS DataPipeline to export MongoDB to S3 and import to DocumentDB
-
B
AWS Database Migration Service (DMS) with continuous replication and CDC (Change Data Capture)
✓ Correct
-
C
MongoDB's built-in mongodump and mongorestore utilities with manual synchronization
-
D
AWS Snowball for bulk data transfer followed by AWS DMS for incremental sync
Explanation
AWS DMS supports zero-downtime MongoDB to DocumentDB migrations using continuous replication and Change Data Capture. This keeps the databases synchronized throughout the migration process.
Your application frequently performs complex joins across multiple DynamoDB tables. You are experiencing performance issues and high costs. What architectural change should you consider?
-
A
Denormalize data by storing related information in a single DynamoDB item using nested attributes
-
B
Create additional Global Secondary Indexes to improve join performance
-
C
Implement caching in ElastiCache to reduce the number of database queries
-
D
Migrate to Amazon RDS which is better optimized for multi-table joins
✓ Correct
Explanation
DynamoDB is optimized for single-item operations and queries on a single table. Complex multi-table joins are a sign that a relational database like RDS is better suited for the workload.
You configure automated backups for your RDS for MySQL database with a 7-day retention period. However, you need to retain a specific backup for compliance purposes beyond the retention window. What is the appropriate action?
-
A
Increase the backup retention period retroactively to keep the required backup
-
B
Export the automated backup to S3 using the RDS Export feature
-
C
Copy the automated backup to another region for long-term retention
-
D
Create a manual snapshot of the automated backup before it expires
✓ Correct
Explanation
Manual snapshots are retained until explicitly deleted, independent of the automated backup retention period. Copying the automated snapshot before it expires produces a manual snapshot that preserves the backup beyond the retention window.
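The preserving step is a snapshot copy. A parameter-building sketch for `rds.copy_db_snapshot` (the identifiers are examples; automated snapshot names carry an `rds:` prefix):

```python
def snapshot_copy_params(automated_snapshot_id, manual_snapshot_id):
    """Build parameters for rds.copy_db_snapshot (identifiers are examples).

    Copying an automated snapshot (note the 'rds:' prefix) produces a
    manual snapshot that is kept until explicitly deleted.
    """
    return {
        "SourceDBSnapshotIdentifier": automated_snapshot_id,
        "TargetDBSnapshotIdentifier": manual_snapshot_id,
    }

params = snapshot_copy_params(
    "rds:prod-mysql-2024-05-01-04-00",
    "prod-mysql-compliance-hold",
)
```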
Your Neptune graph database is used for recommendation engine queries that require traversing relationships across millions of nodes. Queries are running slowly. What optimization strategy should you implement?
-
A
Create property indexes on frequently queried node properties and relationship predicates
✓ Correct
-
B
Switch to DynamoDB with a custom graph implementation using adjacency lists
-
C
Partition the graph into multiple Neptune databases by geographic region
-
D
Increase Neptune instance size and enable Neptune DB cluster caching for frequently accessed paths
Explanation
Property indexes on frequently queried attributes significantly improve Neptune traversal performance by reducing the number of nodes that must be scanned during graph queries. This is more efficient than scaling instance size.
You are designing a database solution for a real-time analytics dashboard that processes streaming IoT data. The dashboard must display results with sub-second latency. Which combination of services is most appropriate?
-
A
Kinesis Data Firehose → S3 → Athena for ad-hoc analysis
-
B
EventBridge → RDS for relational storage → API Gateway for dashboards
-
C
Kinesis Data Streams → Timestream for time-series storage → CloudWatch dashboards
✓ Correct
-
D
Kinesis Data Streams → Lambda → DynamoDB → CloudFront for caching
Explanation
Amazon Timestream is optimized for time-series data with high-speed ingestion and sub-second query latency. It integrates well with Kinesis Data Streams for IoT data processing and CloudWatch for visualization.
Your RDS for PostgreSQL database experiences high memory usage causing Out-Of-Memory (OOM) errors. The issue is traced to inefficient query plans. What is the best remediation approach?
-
A
Enable query result caching in Aurora to prevent redundant query execution
-
B
Use EXPLAIN ANALYZE to identify the query plan issues and add appropriate indexes or rewrite queries
✓ Correct
-
C
Increase the DB instance size to provide more memory and re-run the problematic queries
-
D
Reduce the work_mem parameter in the parameter group to limit memory consumption per operation
Explanation
EXPLAIN ANALYZE provides insight into query execution plans. Adding indexes or rewriting queries to use better plans is the proper solution rather than increasing memory, which masks the underlying issue.
You need to audit all data modifications in your DynamoDB table for compliance purposes. Which approach provides a complete audit trail?
-
A
Enable CloudTrail logging for all DynamoDB API calls made by applications
-
B
Use DynamoDB point-in-time recovery to track changes over time
-
C
Enable DynamoDB Streams and process stream records with Lambda to store audit logs in S3
✓ Correct
-
D
Configure CloudWatch detailed monitoring to track all put and update operations
Explanation
DynamoDB Streams capture item-level modifications, and a Lambda consumer can persist them to S3 as a complete audit trail of data changes. CloudTrail, unless data events are specifically enabled, logs only control-plane API calls rather than item-level changes.
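A minimal sketch of the Lambda consumer, assuming the stream is configured with the NEW_AND_OLD_IMAGES view type; the S3 write is elided and the audit lines are returned so the shape is visible:

```python
import json

def handler(event, context=None):
    """Turn DynamoDB Stream records into JSON audit lines.

    Sketch only: a real consumer would write the lines to S3 (e.g. with
    boto3 s3.put_object); here they are returned for inspection.
    """
    lines = []
    for record in event.get("Records", []):
        body = record["dynamodb"]
        lines.append(json.dumps({
            "event": record["eventName"],      # INSERT / MODIFY / REMOVE
            "keys": body.get("Keys"),
            "new": body.get("NewImage"),       # requires NEW_AND_OLD_IMAGES
            "old": body.get("OldImage"),
        }))
    return lines

sample_event = {"Records": [{
    "eventName": "MODIFY",
    "dynamodb": {
        "Keys": {"id": {"S": "1"}},
        "NewImage": {"id": {"S": "1"}, "balance": {"N": "90"}},
        "OldImage": {"id": {"S": "1"}, "balance": {"N": "100"}},
    },
}]}
audit_lines = handler(sample_event)
```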
Your Aurora MySQL cluster experiences a spike in read latency during peak hours even with read replicas. You have already optimized indexes and query plans. What should you investigate?
-
A
Connection pool exhaustion and increase the max_connections parameter
-
B
Network bandwidth saturation between read replicas and the application tier
-
C
Read replica lag and binlog replication throttling from the primary instance
✓ Correct
-
D
DB cluster cache invalidation and increase the buffer pool size on read replicas
Explanation
High read latency during peak hours despite read replicas often indicates replica lag: heavy write volume on the primary increases the rate of change the replicas must apply, reducing data freshness on the replicas and slowing reads.
You are implementing a distributed ledger application on Amazon QLDB. The application requires long-term retention of the complete transaction history. What is the recommended approach for archival?
-
A
Stream QLDB changes to Kinesis and archive to Glacier using Kinesis Firehose
-
B
Export QLDB data to S3 using Parquet format with periodic exports based on QLDB journal
-
C
Use QLDB backup and restore functionality to create periodic snapshots in S3
-
D
Configure QLDB journal export to write committed journal data to S3
✓ Correct
Explanation
QLDB journal export writes committed journal blocks to S3 in a complete, cryptographically verifiable format. Running exports periodically is the recommended approach for long-term archival of QLDB transaction history.
Your organization runs a PostgreSQL database with extensions like PostGIS for spatial queries. You plan to migrate to RDS for PostgreSQL. What should you verify?
-
A
RDS for PostgreSQL supports all PostgreSQL extensions without additional configuration or licensing
-
B
Custom extensions must be installed using RDS Custom, where you have full OS access
-
C
RDS for PostgreSQL supports a curated list of extensions and requires rds_superuser privilege to install them
✓ Correct
-
D
RDS for PostgreSQL does not support extensions; functionality must be implemented in the application
Explanation
RDS for PostgreSQL supports a curated list of approved extensions; PostGIS is on that list, and installing it requires the rds_superuser role. Extensions outside the curated list cannot be added to RDS for PostgreSQL.
You need to implement disaster recovery for a critical MySQL database with RPO of 1 hour and RTO of 30 minutes. Which approach best meets these requirements?
-
A
Cross-region read replicas with automated promotion to primary on detection of primary failure
-
B
Multi-AZ deployment with automated backups and regional failover using read replicas
✓ Correct
-
C
Hourly automated backups with read replica in standby mode for rapid failover
-
D
Daily snapshots with manual restore process to achieve RTO within 30 minutes
Explanation
Multi-AZ provides automatic failover, typically completing in 60-120 seconds, well within the 30-minute RTO, while automated backups with transaction-log capture give point-in-time recovery granularity well under the 1-hour RPO. Cross-region read replicas (option A) carry replication lag and a slower promotion path.
Your DynamoDB table has a very large partition due to a non-uniform distribution of data across partition keys. This causes hot partitions and throttling. Which immediate remediation should you implement?
-
A
Switch to on-demand billing mode to automatically handle the increased throughput
-
B
Increase provisioned capacity to accommodate the hot partition's throughput requirement
-
C
Create a Global Secondary Index with a different partition key that distributes data better
-
D
Add a random suffix to the partition key to distribute data more evenly across partitions
✓ Correct
Explanation
Adding a random suffix (sharding) to the partition key distributes requests across multiple partitions, eliminating the hot partition issue. This is more effective than capacity increases which don't solve the distribution problem.
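A deterministic sketch of the suffix pattern: instead of a purely random suffix, the shard is derived from the reading timestamp, which spreads a single hot key's writes across partitions while keeping each item's shard computable. The `SHARDS` count and key format are illustrative choices:

```python
import zlib

SHARDS = 10  # illustrative; size to the hot key's observed write rate

def sharded_key(device_id, reading_ts):
    """Derive the shard from the reading timestamp so writes for one hot
    device spread deterministically across SHARDS partition-key values."""
    shard = zlib.crc32(str(reading_ts).encode()) % SHARDS
    return f"{device_id}#{shard}"

def all_shard_keys(device_id):
    """Readers that need every item for a key must fan out across all
    shards and merge the results -- the read-side cost of sharding."""
    return [f"{device_id}#{s}" for s in range(SHARDS)]

key = sharded_key("dev-1", 1714560000)
```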
You are designing a database solution for a financial services company that requires sub-millisecond latency and high throughput. The application needs to perform complex joins across multiple tables. Which AWS database service would be most appropriate?
-
A
Amazon DynamoDB with global secondary indexes
-
B
Amazon Neptune with property graph queries
-
C
Amazon Aurora MySQL with read replicas
✓ Correct
-
D
Amazon ElastiCache for Redis combined with RDS
Explanation
Aurora MySQL provides ACID compliance and full support for complex multi-table joins, which rules out the non-relational options, while its distributed storage architecture and read replicas keep read latency low at high throughput.
What is the maximum number of read replicas that can be created for an Amazon RDS MySQL instance in a single region?
-
A
3
-
B
15
✓ Correct
-
C
10
-
D
5
Explanation
Amazon RDS allows up to 15 read replicas per database instance within the same region, providing flexibility for scaling read-heavy workloads.
You are migrating a 50 GB on-premises PostgreSQL database to AWS. The network has limited bandwidth, and the database cannot be offline for more than 2 hours. Which migration approach would be most suitable?
-
A
Export the database dump and import it directly using psql over VPN
-
B
Use AWS DMS with full load and CDC (change data capture) with a parallel full load of 4 tasks
✓ Correct
-
C
Create an Aurora read replica first, then failover to Aurora
-
D
Use AWS Snowball with AWS DataSync for the initial data transfer
Explanation
AWS DMS with CDC enables continuous replication of changes after the initial full load, minimizing downtime while respecting bandwidth constraints through configurable parallel task settings.
Your application uses Amazon DynamoDB with strongly consistent reads. You notice that read capacity is becoming a bottleneck. What is the most cost-effective solution to improve read performance without changing application code?
-
A
Increase the provisioned read capacity units (RCUs)
-
B
Migrate to global secondary indexes with eventual consistency
-
C
Enable Point-in-Time Recovery (PITR) for read optimization
-
D
Enable DynamoDB Accelerator (DAX) to cache read queries
✓ Correct
Explanation
DAX provides microsecond-level latency for cached reads and reduces actual RCU consumption on the table, making it cost-effective. Note that DAX caches only eventually consistent reads; strongly consistent reads are passed through to DynamoDB, and adopting DAX means switching to the DAX client SDK.
You are configuring high availability for a mission-critical Amazon Aurora PostgreSQL cluster spanning three availability zones. Which configuration provides the highest availability with automatic failover?
-
A
Aurora cluster with 1 primary instance, 1 read replica, and automated backups every 5 minutes
-
B
Aurora cluster with 3 instances of mixed types in different AZs with auto-scaling enabled
-
C
Aurora cluster with 1 primary instance and 2 read replicas distributed across 3 AZs
✓ Correct
-
D
Aurora cluster with 2 primary instances and 1 read replica with multi-master enabled
Explanation
An Aurora cluster with one primary and two read replicas across three AZs ensures automatic failover capability, data redundancy, and read scaling while maintaining a single write endpoint.
A compliance requirement mandates that your RDS database must be encrypted at rest using a customer-managed KMS key. Your database is currently running on an unencrypted RDS instance. How should you implement this requirement?
-
A
Create a snapshot of the unencrypted database, copy the snapshot with encryption enabled using the customer-managed key, and restore the encrypted copy to a new instance
-
B
Modify the existing RDS instance encryption settings and select a customer-managed KMS key
-
C
Use AWS DMS to migrate data to a new encrypted RDS instance with a customer-managed KMS key
✓ Correct
-
D
Enable encryption at rest through the RDS console using AWS-managed keys
Explanation
RDS encryption at rest cannot be enabled on an existing unencrypted instance. The practical approaches are an encrypted snapshot copy and restore (option A) or migration via DMS; DMS (option C) minimizes downtime for a live production database by using CDC.
You need to design a database for a real-time analytics application that ingests 100,000 events per second. The data model requires complex aggregations and time-series analysis. Which AWS database is most suitable?
-
A
Amazon DynamoDB with on-demand billing
-
B
Amazon ElastiCache Memcached for event streaming
-
C
Amazon Redshift with RA3 nodes and managed storage
✓ Correct
-
D
Amazon RDS MySQL with partitioning
Explanation
Amazon Redshift with RA3 nodes excels at high-volume data ingestion and complex analytical queries, providing better columnar performance and cost efficiency for time-series analytics compared to transactional databases.
Your organization is experiencing unexpected billing increases from Amazon RDS. Upon investigation, you notice a significant spike in CPU usage during off-peak hours. Which approach would help identify the root cause?
-
A
Review CloudWatch metrics and correlate with application error logs
-
B
Enable Enhanced Monitoring and check OS-level metrics for resource contention
-
C
Increase the instance class size to handle the unexpected load
-
D
Enable Performance Insights and use the database load chart to correlate with the CPU spike
✓ Correct
Explanation
Performance Insights provides a visual breakdown of active sessions and wait events correlated with CPU utilization, making it the best tool to identify which queries are causing the spike during off-peak hours.
You are designing a multi-tenant SaaS application with 500 customers, each requiring isolated databases. What is the most operationally efficient approach for managing these databases on AWS?
-
A
Implement Amazon Aurora MySQL with Aurora global database for disaster recovery
-
B
Deploy DynamoDB with a composite partition key containing tenant ID and entity ID
-
C
Create 500 separate RDS instances for complete isolation and compliance
-
D
Use a single RDS instance with a separate schema per tenant and row-level security policies
✓ Correct
Explanation
A single RDS instance with schema-per-tenant and row-level security provides operational efficiency, cost optimization, and easier management while maintaining logical isolation and compliance requirements.
When restoring an Amazon RDS MySQL database from a snapshot to a point-in-time that occurred 5 days ago, what factor most significantly affects the time required for restoration?
-
A
The current level of I/O activity on the source database
-
B
The number of read replicas attached to the original database instance
-
C
The total size of the database snapshot
-
D
The number of binary logs that need to be applied from the snapshot to the target point-in-time
✓ Correct
Explanation
Point-in-time recovery requires applying binary logs sequentially from the snapshot to reach the target time. The number of logs and their size directly impact restoration duration, while snapshot size matters less for the recovery process itself.
You need to implement cross-region disaster recovery for an Amazon Aurora PostgreSQL cluster. The recovery time objective (RTO) is 1 minute, and recovery point objective (RPO) is 1 second. Which solution meets these requirements?
-
A
Aurora global database with secondary region
✓ Correct
-
B
AWS DMS continuous replication to a standby Aurora cluster
-
C
Automated daily snapshots with cross-region copying
-
D
Multi-region read replicas with manual failover scripts
Explanation
Aurora global database replicates data with sub-second latency (meeting RPO) and enables failover to a read-only secondary in under 1 minute (meeting RTO), making it the purpose-built solution for these requirements.
Your application requires sub-second response times for queries but generates unpredictable traffic patterns ranging from 100 to 10,000 requests per second. Which RDS configuration best handles this scenario?
-
A
DynamoDB on-demand mode with global secondary indexes
-
B
Provisioned capacity with manual scaling policies
-
C
Aurora MySQL with Aurora Auto Scaling enabled
✓ Correct
-
D
RDS MySQL with Read Replica Auto Scaling and elastic compute
Explanation
Aurora Auto Scaling automatically adds or removes Aurora Replicas in response to metrics such as CPU utilization or connection count, absorbing traffic swings without manual intervention; for fully unpredictable workloads, Aurora Serverless can additionally scale compute capacity automatically.
You are implementing a backup strategy for a critical Amazon RDS database. The business requires the ability to restore to any point within the last 30 days. What is the recommended approach?
-
A
Enable automated backups with a 30-day retention period and test restoration monthly
-
B
Use AWS Backup service with a backup plan configured for daily backups and 30-day retention
-
C
Enable automated backups with 35-day retention, enable binary logging, and verify backup integrity weekly
✓ Correct
-
D
Create daily snapshots manually and store them in an S3 bucket with versioning enabled
Explanation
RDS automated backups with extended retention (35 days) combined with binary logging enable point-in-time recovery for any moment within 30 days, with the extra 5 days as a safety buffer. Regular integrity testing ensures reliability.
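The retention change itself is one ModifyDBInstance call. A parameter sketch (the instance identifier is a placeholder):

```python
def retention_params(instance_id, days=35):
    """Build parameters for rds.modify_db_instance (identifier is a placeholder).

    35 days is the maximum automated-backup retention RDS supports; the
    extra 5 days over the 30-day requirement act as a safety buffer.
    """
    return {
        "DBInstanceIdentifier": instance_id,
        "BackupRetentionPeriod": days,
        "ApplyImmediately": True,
    }

params = retention_params("prod-mysql")
```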
A developer has created an application that performs 10,000 queries per second against a DynamoDB table using a single partition key. The table is throttling despite adequate provisioned capacity. What is the most likely cause?
-
A
The table has too many global secondary indexes, limiting write throughput
-
B
The provisioned write capacity is insufficient for the query volume
-
C
DynamoDB is rate-limiting requests due to exceeding the account quota
-
D
All queries are directed to a single partition, creating a hot partition scenario
✓ Correct
Explanation
If all 10,000 queries per second use the same partition key value, DynamoDB cannot distribute the load across partitions. This hot partition problem causes throttling even with adequate overall capacity because each partition has limits.
What is the primary benefit of using Amazon RDS Proxy for a web application with connection pooling challenges?
-
A
It encrypts all traffic between the application and RDS database
-
B
It automatically scales the RDS instance based on connection count
-
C
It increases the maximum IOPS available to the RDS database
-
D
It reduces database connection overhead by managing a pool of database connections and multiplexing application connections
✓ Correct
Explanation
RDS Proxy acts as a connection pooler, allowing many application connections to share a smaller number of database connections, reducing memory overhead and improving performance under high connection volume.
You are migrating a large SQL Server database (2 TB) from on-premises to RDS for SQL Server. The network bandwidth between your data center and AWS is 100 Mbps. Using AWS DMS with full load replication, approximately how long would the initial full load take (assuming 80% network utilization)?
-
A
Approximately 20-25 hours
✓ Correct
-
B
Approximately 6-8 hours
-
C
Approximately 3-4 hours
-
D
Approximately 12-15 hours
Explanation
2 TB ≈ 2,000,000 MB. 100 Mbps at 80% utilization is 80 Mbps ≈ 10 MB/s, so a raw transfer takes 2,000,000 MB ÷ 10 MB/s = 200,000 seconds ≈ 56 hours. DMS compresses data in transit (commonly 2-3×), which brings a realistic total into the 20-25 hour range.
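The arithmetic generalizes into a small estimator; the compression ratio is an assumption that must be measured against your own data:

```python
def transfer_hours(size_gb, link_mbps, utilization=0.8, compression=1.0):
    """Estimate bulk transfer time over a bandwidth-limited link.

    compression > 1.0 models in-flight compression; the real ratio depends
    entirely on the data and must be measured, not assumed.
    """
    mb_per_s = link_mbps * utilization / 8          # megabits/s -> megabytes/s
    seconds = size_gb * 1000 / mb_per_s / compression
    return seconds / 3600

raw_hours = transfer_hours(2000, 100)                          # ~56 h uncompressed
compressed_hours = transfer_hours(2000, 100, compression=2.5)  # ~22 h
```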
Your application uses Amazon DynamoDB with a write throttle occurring during peak hours. You have already verified that your application code is efficient. What is the most direct solution to eliminate throttling?
-
A
Enable DynamoDB Accelerator (DAX) to cache write operations
-
B
Enable TTL (Time to Live) on all attributes to reduce data size
-
C
Increase provisioned write capacity units (WCUs) or switch to on-demand billing mode
✓ Correct
-
D
Create additional global secondary indexes to distribute write load
Explanation
Write throttling occurs when provisioned WCUs are insufficient. The direct solution is to increase provisioned WCUs or switch to on-demand mode for automatic capacity adjustment. DAX caches reads, not writes.
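When sizing provisioned capacity, recall that one standard WCU covers one write per second of an item up to 1 KB, and larger items consume one WCU per 1 KB (rounded up) per write. A small sketch of that sizing rule (the traffic figures are illustrative):

```python
import math

def required_wcus(writes_per_second: int, item_size_bytes: int) -> int:
    """One standard WCU = one write/second of an item up to 1 KB;
    larger items consume ceil(size / 1 KB) WCUs per write."""
    return writes_per_second * math.ceil(item_size_bytes / 1024)

# 500 writes/s of 2.5 KB items need 1,500 WCUs, not 500.
print(required_wcus(500, 2560))  # 1500
```

Undersizing by forgetting the per-KB rounding is a common cause of the throttling described above; on-demand mode sidesteps the estimate entirely at a higher per-request price.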
You need to replicate a PostgreSQL database from on-premises to Amazon Aurora PostgreSQL for real-time analytics. The source database cannot tolerate significant performance impact. Which replication method is most suitable?
-
A
AWS DMS with CDC (Change Data Capture) using a logical replication slot
✓ Correct
-
B
Manual SQL dump export with incremental INSERT statements
-
C
Native PostgreSQL pg_upgrade utility with binary mode
-
D
AWS Database Migration Accelerator (DMA) with full load only
Explanation
AWS DMS with CDC using logical replication slots minimizes source impact by asynchronously capturing changes and replicating them to Aurora, allowing the source database to continue normal operations.
Your organization's compliance policy requires encryption in transit for all database connections. You are using RDS for MySQL. How can you enforce SSL/TLS encryption for all client connections?
-
A
Update the security group to allow only port 3307 (encrypted MySQL port)
-
B
Create a new DB subnet group with encryption enabled and migrate the database
-
C
Modify the RDS parameter group to set 'require_secure_transport' to ON
✓ Correct
-
D
Enable Enhanced Encryption in the RDS console under Security settings
Explanation
Setting the 'require_secure_transport' parameter to ON forces all connections to use SSL/TLS, rejecting unencrypted connections at the database level.
You are designing a database solution for a mobile application that must work both online and offline. When online, data must synchronize with a central database. Which AWS service combination is most appropriate?
-
A
Amazon Redshift with S3 for snapshot-based synchronization
-
B
Amazon AppSync with DynamoDB for real-time synchronization and offline capabilities
✓ Correct
-
C
Amazon DynamoDB with DynamoDB Accelerator (DAX) for offline caching
-
D
Amazon RDS with AWS Glue for offline synchronization
Explanation
AWS AppSync with DynamoDB provides built-in offline data synchronization, automatic conflict resolution, and real-time updates when the application comes online, making it purpose-built for mobile app scenarios.
When using Amazon Aurora with Read Replicas, how is the write throughput of the database affected by adding more read replicas?
-
A
Write throughput increases proportionally with the number of read replicas
-
B
Write throughput depends entirely on the instance class of the read replicas
-
C
Write throughput decreases slightly due to replication overhead
✓ Correct
-
D
Write throughput remains unchanged because replicas do not handle write operations
Explanation
Read replicas do not handle writes directly, but replication imposes a small cost on the primary. In Aurora, replicas share the same distributed storage volume, so the primary only ships log records to replicas for cache invalidation; this overhead is small but nonzero, which is why write throughput decreases slightly as replicas are added.
You are implementing a solution to prevent accidental deletion of critical data in Amazon RDS. What is the most comprehensive protection strategy?
-
A
Implement database-level transaction logs and enable binary logging for point-in-time recovery
-
B
Use AWS Backup for compliance-enforced retention policies combined with RDS deletion protection and IAM policies restricting delete operations
✓ Correct
-
C
Create read replicas in multiple regions and configure cross-region snapshots
-
D
Enable automated backups and configure deletion protection at the instance level
Explanation
A comprehensive approach combines AWS Backup with compliance policies that prevent modification, RDS deletion protection at the instance level, and IAM policies that restrict who can perform delete operations, providing multiple layers of protection.
Your organization is consolidating 50 MySQL databases from different business units into a single RDS instance using separate schemas. What is the primary consideration for monitoring performance in this scenario?
-
A
Use CloudWatch metrics to track database size growth per schema and alert when any schema exceeds 10 GB
-
B
Monitor the total number of connections across all schemas to prevent connection pool exhaustion
-
C
Enable slow query log for each schema separately and review quarterly for optimization opportunities
-
D
Implement schema-level resource quotas and use Performance Insights to identify cross-schema resource contention
✓ Correct
Explanation
Performance Insights can correlate resource consumption with specific schemas, revealing if one schema is consuming excessive resources and impacting others. This is critical in a consolidated multi-schema environment for identifying noisy neighbors.
When using AWS DMS to migrate from Oracle to Amazon Aurora PostgreSQL, what is the primary challenge related to data type compatibility?
-
A
Oracle's CHAR data type has different padding behavior than Aurora PostgreSQL's character type, affecting comparisons
-
B
Aurora PostgreSQL does not support Oracle's DATE data type and requires manual schema modification before migration
-
C
Oracle's NUMBER data type maps inconsistently to Aurora PostgreSQL's numeric type, potentially losing precision
✓ Correct
-
D
Oracle's BLOB data type is not supported in Aurora PostgreSQL and must be converted to BYTEA manually
Explanation
Oracle's NUMBER type with variable precision/scale can map to Aurora PostgreSQL in multiple ways (numeric, float, double), and DMS may choose a mapping that loses precision. This requires careful schema review and custom conversion rules.
You need to implement automatic failover for a mission-critical RDS database with an RTO of 30 seconds. Which configuration provides this capability?
-
A
RDS with AWS DMS continuous replication to a standby instance
-
B
RDS single-AZ with automated backups and point-in-time recovery configuration
-
C
RDS with read replicas and a custom failover script triggered by CloudWatch alarms
-
D
RDS Multi-AZ deployment with automated failover enabled
✓ Correct
Explanation
RDS Multi-AZ with automatic failover maintains a synchronous standby in another AZ and fails over automatically when the primary fails: typically 60-120 seconds for Multi-AZ instance deployments, and usually under 35 seconds for Multi-AZ DB cluster deployments. Of the options listed, it is the only configuration that can approach a 30-second RTO without custom tooling.
You are managing an Amazon RDS Multi-AZ deployment with synchronous replication. A network partition occurs between the primary and secondary instances. What happens to write operations on the primary database?
-
A
Write operations are blocked until the network partition is resolved
-
B
Write operations continue to the primary and are acknowledged without waiting for secondary replication
✓ Correct
-
C
Write operations are automatically redirected to the secondary instance
-
D
The primary automatically promotes the secondary and demotes itself
Explanation
In RDS Multi-AZ, the primary database continues to accept writes during a network partition. The synchronous replication may lag or fail, but the primary does not block writes; this is by design to maintain availability. A failover is only triggered if the primary becomes unavailable.
Your company requires encryption of data at rest for all Amazon DynamoDB tables containing personally identifiable information (PII). Which encryption option provides the most control over key management?
-
A
AWS owned keys managed by DynamoDB
-
B
Customer managed keys stored in AWS Key Management Service (KMS)
✓ Correct
-
C
AWS managed keys (aws/dynamodb)
-
D
Customer managed keys stored in AWS Secrets Manager
Explanation
Customer managed keys in AWS KMS provide the highest level of control, allowing you to manage key rotation, access policies, and audit key usage. AWS owned and managed keys offer less control and audit visibility.
You are optimizing an Amazon Aurora MySQL cluster for read-heavy workloads. Currently, you have one primary instance and two read replicas. Query performance is still suboptimal. What should you investigate first?
-
A
Immediately add three more read replicas to distribute load
-
B
Migrate the entire cluster to Aurora PostgreSQL for better read performance
-
C
Increase the instance class size of all replicas
-
D
Examine the query execution plans and ensure proper indexing on frequently queried columns
✓ Correct
Explanation
Query optimization through execution plan analysis and proper indexing is the most cost-effective first step before scaling horizontally or vertically. Adding more replicas without addressing inefficient queries wastes resources.
An application using Amazon DocumentDB requires guaranteed consistency across all reads. Which read preference should be configured?
-
A
secondaryPreferred - read from secondary, fallback to primary if unavailable
-
B
primary - read from the primary instance only
✓ Correct
-
C
primaryPreferred - read from primary, fallback to secondary if unavailable
-
D
secondary - read from secondary replicas only for eventual consistency
Explanation
The 'primary' read preference ensures strong consistency by reading exclusively from the primary instance. All other preferences may return stale data from replicas, which have eventual consistency.
You are migrating a legacy SQL Server database to AWS. The source database uses SQL Server Agent jobs for scheduled maintenance tasks. How should you replicate this functionality in AWS?
-
A
Create AWS Lambda functions triggered by EventBridge rules to execute the equivalent tasks
✓ Correct
-
B
Use RDS SQL Server native job scheduling through SQL Server Agent
-
C
Implement AWS Systems Manager Automation documents for all scheduled maintenance
-
D
Migrate to Amazon RDS for MySQL and rewrite jobs as cron tasks
Explanation
Amazon RDS for SQL Server does support SQL Server Agent jobs, so option B can work for that specific engine. Lambda functions triggered by EventBridge rules, however, are the most flexible cloud-native replacement: they work across all RDS database engines and keep scheduling logic outside the database.
Your Amazon RDS PostgreSQL database is experiencing connection pool exhaustion during peak hours. The application uses a connection per thread approach. What is the recommended solution?
-
A
Upgrade to a larger RDS instance class with more memory
-
B
Refactor the application to use connection pooling within the application tier
-
C
Increase max_connections parameter on the RDS instance
-
D
Implement Amazon RDS Proxy between the application and database
✓ Correct
Explanation
Amazon RDS Proxy manages database connections efficiently by multiplexing application connections, reducing the number of actual database connections needed. This is the recommended AWS solution for connection pool exhaustion.
You need to migrate data from an on-premises Oracle database to Amazon Aurora PostgreSQL using AWS Database Migration Service (DMS). The source database contains PL/SQL stored procedures. What should you plan for?
-
A
Manual conversion of PL/SQL procedures to PL/pgSQL, as automatic conversion has limitations
✓ Correct
-
B
Use Oracle compatibility mode in Aurora PostgreSQL to run PL/SQL natively
-
C
Exclude stored procedures from migration and rewrite them as application logic
-
D
DMS automatically converts all PL/SQL procedures to PostgreSQL PL/pgSQL without manual intervention
Explanation
DMS migrates data, not code objects; converting schema and PL/SQL code is the job of the AWS Schema Conversion Tool (SCT). SCT can convert simple procedures automatically, but complex PL/SQL often requires manual conversion to PostgreSQL's PL/pgSQL due to syntax and functional differences. Aurora PostgreSQL does not offer full PL/SQL compatibility.
An Amazon RDS MySQL instance is approaching its storage limit. Which action can be performed without downtime?
-
A
Modify the allocated storage parameter in the DB parameter group
-
B
Enable automatic storage scaling and allow RDS to expand automatically
✓ Correct
-
C
Create a snapshot and restore to an instance with larger allocated storage
-
D
Perform a read replica promotion to a larger instance type
Explanation
Amazon RDS for MySQL supports storage autoscaling, which expands storage automatically and without downtime once a usage threshold is reached. Allocated storage is an instance attribute, not a DB parameter group setting, so option A changes nothing; a snapshot restore creates a new instance and requires an application cutover, which means downtime.
You are designing a multi-region active-active database architecture on AWS for disaster recovery. Which database service is most suitable for this requirement?
-
A
Amazon Aurora Global Database with read-only secondary regions
-
B
Amazon DynamoDB Global Tables with active-active replication across regions
✓ Correct
-
C
Amazon RDS with manual cross-region read replicas and application-level failover
-
D
Amazon DocumentDB with cross-region backup and restore procedures
Explanation
DynamoDB Global Tables provide true active-active multi-region replication where applications can read and write in any region. Aurora Global Database is read-only in secondary regions. RDS and DocumentDB do not provide built-in active-active support.
A company runs batch analytics queries on an Amazon Redshift cluster during off-peak hours. Current costs are high due to continuous cluster availability. How can costs be optimized while maintaining query performance for batch workloads?
-
A
Implement Redshift Reserved Instances and purchase multi-year commitments
-
B
Migrate to Amazon Athena for all analytics queries to eliminate cluster costs
-
C
Use Redshift Spectrum to query data directly in Amazon S3 with temporary clusters
-
D
Pause the Redshift cluster when not in use and resume it for batch processing
✓ Correct
Explanation
Redshift supports pause and resume for RA3 node types, allowing you to stop clusters when not needed and resume them for batch jobs, significantly reducing compute costs. Spectrum and Athena serve different use cases, and Reserved Instances still bill for idle hours, so they save less than pausing for a cluster that sits idle most of the day.
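The savings from pausing scale with the idle fraction of the day. A back-of-the-envelope sketch, using a hypothetical per-node hourly rate (substitute your node type's actual on-demand price):

```python
# Hypothetical figures -- check current Redshift RA3 pricing for real rates.
hourly_rate = 3.26          # assumed per-node price in USD/hour
nodes = 4
batch_hours_per_day = 6     # cluster resumed only for the nightly batch window

always_on = hourly_rate * nodes * 24
paused = hourly_rate * nodes * batch_hours_per_day
savings = 1 - paused / always_on

print(f"{savings:.0%}")  # 75%
```

Note that pausing stops compute billing but storage charges continue, so the 75% figure applies to the compute portion of the bill only.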