59 Practice Questions & Answers
You are designing a multi-region disaster recovery solution for a critical application. Which Google Cloud service should you use to replicate data across regions with minimal latency and automatic failover capabilities?
A. Firestore with multi-region replication
B. Cloud Storage with cross-region replication
C. BigQuery with scheduled exports
D. Cloud Spanner with multi-region configuration ✓ Correct
Explanation
Cloud Spanner is specifically designed for multi-region deployments with strong consistency and automatic failover, making it ideal for critical applications requiring high availability across regions.
Your organization requires audit logging for compliance purposes across all GCP projects. What is the most efficient way to centralize and manage audit logs from multiple projects?
A. Set up custom monitoring alerts in each project
B. Use Cloud Logging sinks to route logs to a centralized Cloud Logging bucket in an audit project ✓ Correct
C. Export logs from each project separately to Cloud Storage
D. Configure Cloud Audit Logs in each project individually
Explanation
Cloud Logging sinks provide a centralized way to aggregate and manage audit logs from multiple projects into a single audit project, enabling efficient compliance management and long-term retention.
You need to implement network segmentation for a healthcare application handling sensitive patient data. Which Google Cloud networking feature best supports zero-trust security principles?
A. Cloud VPN with IPSec encryption
B. VPC Flow Logs
C. VPC Service Controls with access levels and access policies ✓ Correct
D. Cloud Firewall rules with priority ordering
Explanation
VPC Service Controls implements a zero-trust model by creating a security perimeter around Google Cloud services, controlling access based on identity and context rather than network location.
When designing a Kubernetes deployment on GKE, you need to ensure that workloads can access Google Cloud APIs securely without storing credentials. What should you configure?
A. Workload Identity binding between Kubernetes service accounts and Google service accounts ✓ Correct
B. API keys attached to each pod
C. OAuth 2.0 credentials in environment variables
D. Service account keys stored in Secret Manager
Explanation
Workload Identity enables secure, credential-free access to Google Cloud APIs by binding Kubernetes service accounts to Google service accounts, eliminating the need to manage and distribute credentials.
Your organization is migrating a large on-premises data warehouse to Google Cloud. You need to minimize downtime during the migration. Which approach combines data transfer with minimal application changes?
A. Use BigQuery Data Transfer Service with scheduled imports from on-premises
B. Set up Datastream for continuous replication with BigQuery as the target ✓ Correct
C. Export data using gsutil and load into BigQuery with batch jobs
D. Manually copy data using Cloud Storage Transfer Service
Explanation
Datastream provides continuous data replication from on-premises sources to BigQuery, enabling near-zero-downtime migrations by maintaining data synchronization throughout the migration process.
You are designing a cost optimization strategy for a variable workload that spikes during business hours. Which combination of compute resources would be most cost-effective?
A. Only preemptible VMs with auto-scaling
B. On-demand instances with manual scaling
C. Mix of committed use discounts for baseline load and preemptible VMs for spikes ✓ Correct
D. Only committed use discounts on Compute Engine
Explanation
Combining committed use discounts for predictable baseline capacity with preemptible VMs for variable spikes provides optimal cost efficiency while maintaining reliability and performance.
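The trade-off above can be made concrete with some quick arithmetic. A minimal sketch, with entirely hypothetical hourly prices and discount rates, comparing an all-on-demand fleet against committed-use baseline plus Spot spikes:

```python
# Rough cost comparison for a 24h day: 10 baseline VMs run around the
# clock, plus a 6-hour business-hours spike of 20 extra VMs.
# All hourly prices and discount rates below are hypothetical examples.
ON_DEMAND = 0.10   # $/VM-hour, hypothetical list price
CUD_RATE  = 0.063  # $/VM-hour under a committed use discount (~37% off)
SPOT_RATE = 0.03   # $/VM-hour for preemptible/Spot capacity (~70% off)

def daily_cost(baseline_vms, spike_vms, spike_hours, base_rate, spike_rate):
    """Baseline capacity runs 24h; the spike runs only during business hours."""
    return baseline_vms * 24 * base_rate + spike_vms * spike_hours * spike_rate

all_on_demand = daily_cost(10, 20, 6, ON_DEMAND, ON_DEMAND)
mixed         = daily_cost(10, 20, 6, CUD_RATE, SPOT_RATE)

print(f"all on-demand: ${all_on_demand:.2f}/day")  # $36.00/day
print(f"CUD + Spot:    ${mixed:.2f}/day")          # $18.72/day
```

Under these illustrative rates the mixed strategy roughly halves the daily bill while keeping the baseline on reliable, discounted capacity.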
You need to implement fine-grained access control where different teams can only access specific datasets in BigQuery. What is the recommended approach?
A. Create separate BigQuery projects for each team
B. Grant each team member individual permissions at the table level
C. Use BigQuery datasets with IAM roles and custom roles for team-based access control ✓ Correct
D. Use row-level security policies on all tables
Explanation
BigQuery IAM roles at the dataset level combined with custom roles provide scalable, maintainable fine-grained access control that aligns with team structures and security requirements.
Your application requires real-time event processing with exactly-once semantics. Which Google Cloud solution is best suited for this requirement?
A. Cloud Events with Cloud Run
B. Cloud Pub/Sub with Cloud Dataflow ✓ Correct
C. Cloud Tasks with Cloud Scheduler
D. Cloud Monitoring with alerting policies
Explanation
Cloud Dataflow provides exactly-once processing semantics with Pub/Sub, ensuring reliable event processing without duplication or data loss in real-time pipelines.
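The core idea behind exactly-once processing on an at-least-once transport is deduplication by message ID. A minimal, self-contained sketch of that idea (not Dataflow's actual implementation, which also handles state and checkpointing):

```python
# At-least-once delivery turned into exactly-once *processing* by
# deduplicating on a message ID -- the essential idea behind exactly-once
# semantics over a transport like Pub/Sub that may redeliver messages.
seen_ids = set()
results = []

def process_once(message_id, payload):
    """Apply side effects only the first time a given ID is seen."""
    if message_id in seen_ids:
        return False            # duplicate redelivery: drop it
    seen_ids.add(message_id)
    results.append(payload)     # the actual processing step
    return True

# Pub/Sub may redeliver: note message "m1" arrives twice below.
for mid, data in [("m1", 10), ("m2", 20), ("m1", 10)]:
    process_once(mid, data)

print(results)  # each payload processed exactly once: [10, 20]
```

A production system would persist the seen-ID state durably (and expire it), which is exactly the bookkeeping Dataflow takes off your hands.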
You are architecting a solution where on-premises systems need to communicate with Google Cloud resources over a private connection. Which networking solution provides the most secure and reliable connection?
A. Cloud VPN with dynamic routing
B. Internet-based connection with Cloud Armor protection
C. Cloud NAT with firewall rules
D. Dedicated Interconnect with redundant connections ✓ Correct
Explanation
Dedicated Interconnect provides a private, high-bandwidth, low-latency connection with redundancy, offering superior security and reliability compared to VPN for hybrid cloud architectures.
You need to store sensitive encryption keys for your applications. Which Google Cloud service provides hardware security module (HSM) backed key management with automated rotation?
A. Secret Manager with local encryption
B. Cloud Key Management Service with Cloud HSM ✓ Correct
C. Cloud Storage with encryption at rest
D. Firestore with field-level encryption
Explanation
Cloud KMS with Cloud HSM provides cryptographic key management with hardware-backed security and automated rotation, meeting strict compliance requirements for sensitive key protection.
Your organization uses multiple Google Cloud projects across different departments. You need to implement centralized billing and cost tracking. What should you configure?
A. Cloud Scheduler jobs to export billing data
B. Separate billing accounts for each project
C. Manual monthly reconciliation across projects
D. A single billing account linked to multiple projects with cost allocation tags ✓ Correct
Explanation
A single billing account with cost allocation tags enables centralized billing management while allowing departments to track their specific costs and resource usage across projects.
You are designing a CI/CD pipeline for microservices deployed on GKE. Which Google Cloud service should orchestrate builds, tests, and deployments?
A. Cloud Deployment Manager with JSON templates
B. Cloud Build with Kubernetes manifests ✓ Correct
C. Cloud Run with custom scripts
D. Cloud Composer with Airflow DAGs
Explanation
Cloud Build is purpose-built for CI/CD pipelines with native Kubernetes support, enabling seamless building, testing, and deployment of containerized microservices to GKE.
Your application needs to process large amounts of unstructured data including images and PDFs. Which Google Cloud service combines storage with built-in AI/ML capabilities for document analysis?
A. BigQuery with ML Engine
B. Vertex AI with custom training models
C. Cloud Storage with Cloud Vision API
D. Document AI with Cloud Storage integration ✓ Correct
Explanation
Document AI provides specialized capabilities for processing unstructured documents with pre-built models and integrates directly with Cloud Storage, making it ideal for document analysis workloads.
You need to ensure that data cannot be modified or deleted for compliance reasons, even by administrators. Which Cloud Storage feature enforces this requirement?
A. Compliance mode with soft delete
B. Immutable backups with Cross-Region Replication
C. Bucket versioning with access controls
D. Retention policies with Object Lock ✓ Correct
Explanation
Cloud Storage retention policies, once locked, enforce immutability: objects cannot be deleted or overwritten before their retention period expires, even by project owners, meeting strict WORM-style compliance requirements.
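The enforcement rule itself is simple: a delete is only permitted once the object's age exceeds the retention period, no matter who asks. A minimal sketch with an illustrative 365-day retention period:

```python
from datetime import datetime, timedelta, timezone

# Sketch of how a locked retention policy decides whether a delete is
# allowed: an object is immutable until its retention period expires,
# regardless of the caller's role. The period and dates are illustrative.
RETENTION = timedelta(days=365)

def delete_allowed(created_at, now):
    """Deletion is permitted only after the retention period has elapsed."""
    return now >= created_at + RETENTION

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(delete_allowed(created, created + timedelta(days=100)))  # False: still retained
print(delete_allowed(created, created + timedelta(days=400)))  # True: retention elapsed
```

The point of a *locked* policy is that this check is enforced by the storage service, not by IAM, so no role grant can bypass it.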
You are optimizing costs for a batch processing workload that can tolerate interruptions. Which compute option minimizes expenses while maintaining acceptable performance?
A. App Engine flexible environment with automatic scaling
B. Committed use discounts with reserved capacity
C. Standard Compute Engine instances with sustained use discounts
D. Preemptible or Spot VMs with auto-scaling ✓ Correct
Explanation
Preemptible or Spot VMs offer 60-90% cost savings for interruptible batch workloads, making them the most cost-effective option when tolerance for interruptions exists.
You need to implement a solution where end-users authenticate through their corporate identity provider. Which Google Cloud service enables federated identity management?
A. OAuth 2.0 with Cloud API Gateway
B. Firebase Authentication with custom claims
C. Cloud Identity with SAML integration ✓ Correct
D. Identity-Aware Proxy with service accounts
Explanation
Cloud Identity with SAML integration enables federated authentication, allowing users to authenticate using existing corporate identity providers like Active Directory or Okta.
Your organization requires disaster recovery with a Recovery Time Objective (RTO) of less than 1 hour. Which backup and recovery strategy best meets this requirement?
A. Weekly full backups to Cloud Storage with manual restoration
B. Monthly snapshots of persistent disks
C. Continuous replication with hot standby instances in another region ✓ Correct
D. Daily incremental backups with recovery automation
Explanation
Continuous replication with hot standby provides sub-hour RTO by maintaining synchronized copies ready for immediate failover, meeting aggressive recovery objectives.
You are designing a data lake solution in Google Cloud. Which service provides cost-effective storage with integrated analytics capabilities?
A. Cloud Spanner with Data Studio integration
B. Cloud Storage with BigQuery for analytics ✓ Correct
C. Firestore with Cloud Dataflow processing
D. Cloud SQL with BigLake for external tables
Explanation
Cloud Storage provides cost-effective data lake storage while BigQuery enables powerful analytics on data without requiring expensive data movement or transformation.
You need to deploy containerized applications with minimal operational overhead. The workload pattern is unpredictable and occasionally very spiky. Which service is most appropriate?
A. Compute Engine with managed instance groups
B. GKE with cluster autoscaling
C. Cloud Run with automatic scaling ✓ Correct
D. App Engine standard environment
Explanation
Cloud Run provides fully managed, serverless container execution with automatic scaling to zero, making it ideal for unpredictable, spiky workloads while minimizing operational overhead.
Your organization needs to monitor compliance with security policies across multiple projects. Which Google Cloud service provides centralized security posture management?
A. Cloud Security Scanner with vulnerability detection
B. Security Command Center with findings and recommendations ✓ Correct
C. Cloud Monitoring with security alerts
D. Cloud Audit Logs with custom queries
Explanation
Security Command Center provides centralized visibility and management of security posture across projects, offering findings, recommendations, and compliance assessments.
You need to implement a solution that automatically scales Kubernetes workloads based on custom business metrics beyond standard CPU and memory. What should you configure?
A. Vertical Pod Autoscaler for resource optimization
B. Horizontal Pod Autoscaler with custom metrics from Cloud Monitoring ✓ Correct
C. Manual pod replicas with service mesh
D. Cluster autoscaling with node pools
Explanation
Horizontal Pod Autoscaler with custom metrics from Cloud Monitoring enables scaling based on business-specific metrics like transaction rates or queue depth, not just CPU/memory.
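The HPA's scaling decision is a single documented formula: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), and it applies unchanged to custom metrics such as queue depth. A minimal worked example (the pod counts and queue numbers are illustrative):

```python
import math

# The Horizontal Pod Autoscaler's core formula from the Kubernetes docs:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# With a custom metric such as per-pod queue depth, the same rule applies.
def desired_replicas(current_replicas, current_metric, target_metric):
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods, a target of 100 queued messages per pod, but 175 are queued per pod:
print(desired_replicas(4, 175, 100))  # scale out to 7
# 10 pods but only 50 queued per pod against the same target:
print(desired_replicas(10, 50, 100))  # scale in to 5
```

In practice the controller also applies tolerances and stabilization windows, so real clusters won't flap on every small metric change.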
You are designing infrastructure for a multi-tenant SaaS application. Which approach best ensures data isolation between customers?
A. Dedicated Cloud SQL instances for each tenant with separate databases
B. Shared resources with row-level security in databases and firewall rules ✓ Correct
C. Multi-tenant databases with application-level access controls only
D. Separate VPCs for each customer with independent resources
Explanation
Combining shared resources with strong database-level row-level security and network firewalling provides efficient, cost-effective isolation while maintaining data security boundaries.
You need to migrate a legacy monolithic application to Google Cloud while minimizing refactoring. Which approach is most appropriate?
A. Decompose into microservices before migration
B. Refactor for Cloud Run serverless deployment
C. Use Compute Engine to lift-and-shift the application with minimal changes ✓ Correct
D. Containerize the application and deploy to GKE
Explanation
Lift-and-shift migration to Compute Engine minimizes refactoring for legacy monolithic applications while providing immediate cloud benefits, with modernization possible later.
Your application requires global load balancing with automatic failover to the nearest healthy backend. Which Google Cloud load balancer should you use?
A. HTTP(S) Load Balancer with Cloud CDN and Cloud Armor ✓ Correct
B. Regional Load Balancer for multi-zone distribution
C. Network Load Balancer for Layer 4 traffic
D. Internal Load Balancer for private networks
Explanation
HTTP(S) Load Balancer provides global load balancing with automatic failover to healthy backends, CDN caching, and security features, ideal for modern web applications.
You are designing a multi-region application on Google Cloud. Users in different geographic regions experience latency issues. What is the most effective approach to reduce latency for global users?
A. Replicate all data to every region and use eventual consistency
B. Increase the machine type size across all regions
C. Use Cloud Interconnect for all user connections
D. Deploy Cloud CDN and use Cloud Load Balancing with geo-routing ✓ Correct
Explanation
Cloud CDN caches content at edge locations, and Cloud Load Balancing with geo-routing directs users to the nearest backend, reducing latency effectively. This is the standard approach for global applications.
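Geo-routing reduces to a simple selection rule: among the healthy backends, pick the one with the lowest latency for that user. A toy model (user names, regions, and latency figures are all made up for illustration):

```python
# Toy model of geo-aware routing: send each user to the lowest-latency
# healthy backend. Users, regions, and latencies are illustrative.
latency_ms = {
    "alice": {"us-central1": 30, "europe-west1": 120, "asia-east1": 210},
    "kenji": {"us-central1": 150, "europe-west1": 250, "asia-east1": 25},
}
healthy = {"us-central1", "asia-east1"}  # europe-west1 is failing health checks

def route(user):
    """Pick the nearest backend, considering only healthy regions."""
    candidates = {r: ms for r, ms in latency_ms[user].items() if r in healthy}
    return min(candidates, key=candidates.get)

print(route("alice"))  # us-central1
print(route("kenji"))  # asia-east1
```

Note how health checking composes with proximity: an unhealthy region is simply excluded before the nearest-backend choice is made, which is also how automatic failover falls out of the same mechanism.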
Your organization requires that all data at rest must be encrypted with customer-managed encryption keys (CMEK). Which Google Cloud services support CMEK natively?
A. All Google Cloud services without exception
B. Cloud Storage and App Engine standard environment only
C. Firestore, Cloud Pub/Sub, and Cloud Bigtable exclusively
D. Cloud Storage, Cloud SQL, Compute Engine persistent disks, and BigQuery ✓ Correct
Explanation
Cloud Storage, Cloud SQL, Compute Engine persistent disks, and BigQuery are key services that support CMEK. While many services support it, not all do (e.g., App Engine standard environment does not).
You need to ensure that a critical application maintains high availability during a regional outage. The application uses Cloud Spanner for its database. What is the recommended configuration?
A. Deploy Firestore in multi-region mode with eventual consistency
B. Set up read replicas in a secondary region and manually switch during outages
C. Use Cloud SQL with cross-region replication and promote the replica when needed
D. Use Cloud Spanner with multi-region configuration and implement automatic failover policies ✓ Correct
Explanation
Cloud Spanner with multi-region configuration provides automatic failover and strong consistency across regions, making it ideal for high-availability critical applications.
Your organization has strict compliance requirements that prohibit data from leaving a specific geographic region. Which of the following best addresses this requirement?
A. Deploy resources only in a single zone and use Cloud VPN
B. Use Cloud Storage with a bucket policy that restricts data location and configure VPC Service Controls to enforce data residency ✓ Correct
C. Implement Cloud Identity-Aware Proxy without data encryption
D. Use BigQuery with dataset location restrictions and disable cross-region queries only
Explanation
Cloud Storage location restrictions combined with VPC Service Controls provide comprehensive data residency enforcement. VPC Service Controls creates security perimeters that prevent data exfiltration across regions.
You are architecting a solution where users upload files that must be processed asynchronously. The processing is unpredictable in duration and volume. Which design pattern is most appropriate?
A. Cloud Storage with Cloud Tasks for sequential processing in a single worker
B. Cloud Storage for uploads, Cloud Pub/Sub for event notification, and Cloud Run for processing with auto-scaling ✓ Correct
C. Direct processing via Cloud Functions with synchronous HTTP requests
D. Cloud Storage for uploads and Compute Engine instances with fixed scaling policies
Explanation
This architecture decouples upload, notification, and processing layers. Cloud Pub/Sub handles variable loads, and Cloud Run auto-scales based on demand, making it ideal for unpredictable workloads.
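The decoupling can be sketched with a local queue standing in for Pub/Sub and a plain function standing in for a Cloud Run worker; the point is that uploads only enqueue a notification, and workers drain the queue at their own pace:

```python
import queue

# Sketch of the decoupled pattern: uploads emit notifications onto a
# queue (standing in for Pub/Sub), and stateless workers (standing in
# for auto-scaled Cloud Run instances) drain it at whatever rate demand
# requires. File names and the "processing" step are illustrative.
notifications = queue.Queue()
processed = []

def on_upload(object_name):
    notifications.put(object_name)  # event notification only, no processing here

def worker():
    while not notifications.empty():
        processed.append(notifications.get().upper())  # the "processing" step

for f in ["a.pdf", "b.png", "c.csv"]:
    on_upload(f)
worker()
print(processed)
```

Because the upload path never blocks on processing, a burst of uploads simply lengthens the queue, and scaling the worker pool (Cloud Run's job) is what controls how quickly the backlog clears.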
Your team has migrated applications to Google Cloud but needs to maintain compatibility with on-premises Active Directory for user authentication. What is the recommended approach?
A. Use only Cloud Identity native users and disable on-premises integration
B. Use Compute Engine VMs running Active Directory replicas
C. Implement API Gateway with custom authentication middleware
D. Federate Cloud Identity with AD FS or use Google Cloud Directory Sync to sync users, then set up single sign-on ✓ Correct
Explanation
Federating Cloud Identity with AD FS or syncing users via Google Cloud Directory Sync keeps on-premises Active Directory as the source of truth while enabling cloud-based SSO. This is the enterprise-standard approach for hybrid authentication.
An application requires sub-millisecond response times for frequently accessed data. The data changes occasionally but reads are extremely high-frequency. Which architecture is optimal?
A. Store all data in Cloud Bigtable with eventual consistency
B. Use Cloud SQL with connection pooling and no caching
C. Use Memorystore for Redis as a caching layer in front of Cloud Datastore, with appropriate TTLs ✓ Correct
D. Replicate data to multiple Firestore instances across regions
Explanation
Memorystore for Redis provides sub-millisecond latency for read-heavy workloads. Placing it in front of the primary data store with appropriate cache invalidation strategies is the standard pattern for high-performance caching.
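The cache-aside pattern described here can be sketched in a few lines; the dicts below stand in for Memorystore and the primary datastore, and the key name and TTL are illustrative:

```python
import time

# Minimal cache-aside sketch: reads go through a TTL cache in front of a
# slower primary store. `database` stands in for Datastore/Firestore and
# `cache` for Memorystore; the TTL value is illustrative.
TTL_SECONDS = 60.0
database = {"user:1": "Ada"}
cache = {}  # key -> (value, expiry_timestamp)

def get(key, now=None):
    now = time.time() if now is None else now
    hit = cache.get(key)
    if hit and hit[1] > now:
        return hit[0]                        # fast path: served from cache
    value = database[key]                    # slow path: primary store lookup
    cache[key] = (value, now + TTL_SECONDS)  # populate with a TTL
    return value

print(get("user:1", now=0.0))   # cache miss -> loads from the database
print(get("user:1", now=30.0))  # cache hit  -> served from the cache
print(get("user:1", now=90.0))  # TTL expired -> reloads from the database
```

The TTL bounds staleness for occasionally changing data; write paths would additionally invalidate or overwrite the cached entry to tighten that bound.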
You need to implement fine-grained access control where different teams can access different datasets in BigQuery. What is the most scalable solution?
A. Create separate BigQuery instances for each team
B. Use BigQuery Authorized Views with IAM roles to control dataset access per team ✓ Correct
C. Implement custom authorization logic in your application using service accounts
D. Use BigQuery dataset-level IAM roles and create separate projects for each team
Explanation
BigQuery Authorized Views allow you to grant access to specific views based on IAM roles, providing fine-grained control without data duplication or complex application logic.
Your organization wants to ensure that infrastructure changes go through a proper review process before deployment. Which solution provides built-in approval workflows for infrastructure changes?
A. Manual SSH into servers for all changes
B. Cloud Build with manual approval steps configured in the CI/CD pipeline ✓ Correct
C. Terraform with local plan reviews
D. Cloud Deployment Manager with no review mechanism
Explanation
Cloud Build can be configured with manual approval steps that require human review before deploying infrastructure changes, enabling controlled Infrastructure as Code deployments.
A startup wants to minimize operational overhead while running containerized microservices. They don't want to manage Kubernetes control planes. What is the best service?
A. App Engine standard environment for all workloads
B. Cloud Run for serverless container execution with automatic scaling and no cluster management ✓ Correct
C. Google Kubernetes Engine with managed node pools
D. Compute Engine instances running Docker containers manually
Explanation
Cloud Run eliminates the need to manage Kubernetes control planes or infrastructure entirely, automatically scaling based on demand and charging only for execution time.
You are designing a system that ingests millions of IoT sensor readings per second and needs to perform real-time aggregations. Which technology stack is most suitable?
A. Cloud Pub/Sub for ingestion, Dataflow for real-time processing, and BigQuery for storage and analysis ✓ Correct
B. Cloud SQL with custom ingestion scripts
C. Cloud Storage for ingestion, Cloud Functions for processing, and Firestore for analysis
D. Compute Engine instances running Apache Kafka
Explanation
Cloud Pub/Sub handles massive ingestion rates, Dataflow provides real-time stream processing with exactly-once semantics, and BigQuery enables interactive analysis of results.
Your organization mandates that all cloud resources must be tagged with cost center information for billing purposes. How can you enforce this requirement?
A. Configure IAM policies that prevent resource creation without labels
B. Manually apply labels to each resource immediately after creation
C. Create a Cloud Build trigger that validates resource labels before deployment and use Cloud Asset Inventory to audit compliance ✓ Correct
D. Use Cloud Audit Logs to track resource creation and send alerts to administrators
Explanation
Cloud Build can validate labels as part of the deployment pipeline, and Cloud Asset Inventory provides continuous compliance monitoring. This enforces labeling at deployment time and audits adherence.
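The validation step in such a pipeline reduces to a small policy check. A minimal sketch, where the required label set and the resource-spec shape are hypothetical:

```python
# Sketch of a pre-deployment policy check: reject any resource spec that
# lacks the mandatory cost-center label. The label key and the shape of
# the resource dict are illustrative, not a real deployment format.
REQUIRED_LABELS = {"cost-center"}

def validate_labels(resource):
    """Return the sorted list of missing required labels (empty = pass)."""
    missing = REQUIRED_LABELS - set(resource.get("labels", {}))
    return sorted(missing)

ok  = {"name": "vm-1", "labels": {"cost-center": "fin-42", "env": "prod"}}
bad = {"name": "vm-2", "labels": {"env": "dev"}}

print(validate_labels(ok))   # [] -> deployment proceeds
print(validate_labels(bad))  # ['cost-center'] -> build step fails
```

Running this as a Cloud Build step (failing the build on a non-empty result) enforces the rule at deploy time, while Cloud Asset Inventory catches anything created outside the pipeline.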
You need to migrate a large on-premises database (2TB+) to Google Cloud while minimizing downtime. Which approach is recommended?
A. Use BigQuery Data Transfer Service for all database migrations
B. Set up Cloud VPN and perform a one-time bulk copy with rsync
C. Export data using mysqldump and import into Cloud SQL manually
D. Use Database Migration Service with continuous replication, then perform a cutover when replication lag is minimal ✓ Correct
Explanation
Database Migration Service provides continuous replication and handles the complexity of database migrations, enabling minimal-downtime cutovers with validation and rollback capabilities.
A development team wants to quickly test code changes in an environment identical to production without manual setup. What should you implement?
A. Use separate static test and production environments
B. Infrastructure as Code templates with Cloud Build to automatically provision ephemeral environments on pull requests ✓ Correct
C. Clone the production environment manually for each test
D. Run all tests directly on production with a rollback plan
Explanation
Infrastructure as Code with Cloud Build enables automated, on-demand provisioning of identical test environments, allowing teams to validate changes safely and efficiently.
Your organization needs to export audit logs from Google Cloud to your on-premises SIEM system in real-time. What is the recommended architecture?
A. Use Cloud Audit Logs API with scheduled batch exports only
B. Download audit logs manually from Cloud Console and transfer via SFTP
C. Configure Cloud Logging to send logs to Cloud Pub/Sub, then use Cloud Dataflow or a custom subscriber to forward to your SIEM ✓ Correct
D. Store all logs locally using Cloud VPN and SSH
Explanation
Cloud Pub/Sub provides reliable delivery of log events, and Dataflow or custom subscribers can transform and forward them to external SIEM systems in real-time.
You are designing a solution for a regulated financial institution that requires strict network isolation. Which approach best satisfies this requirement?
A. Use VPC Service Controls to create security perimeters around sensitive services, combined with Private Service Connect and VPC isolation ✓ Correct
B. Use Cloud Interconnect without any firewall configuration
C. Implement a bastion host and allow all internal traffic without restrictions
D. Disable all public IP addresses and rely on firewall rules only
Explanation
VPC Service Controls provides comprehensive network and data isolation, restricting which identities can access specific services. Combined with Private Service Connect, this creates strong network boundaries.
Your application experiences variable traffic, with peaks during business hours and low traffic at night. How should you optimize costs while maintaining performance?
A. Overprovision for peak hours across all time zones
B. Use only on-demand instances without any reserved capacity
C. Use Compute Engine auto-scaling for variable resources and committed use discounts for baseline capacity ✓ Correct
D. Manually adjust instance counts every morning and evening
Explanation
Auto-scaling handles variable demand efficiently, while committed use discounts reduce costs for predictable baseline capacity. This combination optimizes both performance and cost.
You need to securely share a large dataset with external partners who should only access specific columns. What is the most secure approach?
A. Export data to Cloud Storage and provide download links to partners
B. Create a separate BigQuery project and manually copy relevant columns
C. Use BigQuery Authorized Views to expose only specific columns, with partner identities granted view-level access through IAM roles ✓ Correct
D. Share full dataset access and rely on partners to filter data themselves
Explanation
BigQuery Authorized Views provide column-level filtering and can be shared with external identities through IAM, ensuring partners access only approved data without requiring data export.
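Conceptually, an authorized view is a projection: partners query the view, and only the approved columns ever leave the underlying table. A minimal sketch with made-up column names and rows:

```python
# Sketch of what an authorized view does conceptually: expose only an
# approved subset of columns from the underlying table. The column
# names and rows below are illustrative.
APPROVED_COLUMNS = {"order_id", "order_total"}

rows = [
    {"order_id": 1, "order_total": 99.0, "customer_ssn": "xxx-xx-1234"},
    {"order_id": 2, "order_total": 15.5, "customer_ssn": "xxx-xx-5678"},
]

def authorized_view(rows):
    """Project each row down to the approved columns only."""
    return [{k: v for k, v in r.items() if k in APPROVED_COLUMNS} for r in rows]

for r in authorized_view(rows):
    print(r)  # sensitive columns never appear in the view's output
```

In BigQuery the same effect comes from defining the view's SQL to select only the approved columns and granting the view (not the source table) access to the partners' identities.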
Your organization wants to run containerized applications with GPU acceleration for machine learning workloads. What is the recommended service?
A. Compute Engine instances only, without container orchestration
B. Google Kubernetes Engine with GPU node pools configured for machine learning workloads ✓ Correct
C. App Engine standard environment with GPU support
D. Cloud Functions with GPU attachments
Explanation
GKE with GPU node pools provides Kubernetes orchestration for containerized ML workloads, enabling automatic scaling and efficient resource management of expensive GPU resources.
You need to implement a disaster recovery solution with a target recovery time (RTO) of 1 hour and recovery point objective (RPO) of 15 minutes. Which strategy is most appropriate?
A. No backups, relying on application-level data recovery mechanisms only
B. Weekly backups with monthly disaster recovery tests
C. Continuous replication to a secondary region using Cloud SQL cross-region replicas, with automated failover configuration and regular disaster recovery drills ✓ Correct
D. Daily backups stored in Cloud Storage with manual restore procedures
Explanation
Continuous cross-region replication ensures RPO of 15 minutes and enables quick automated failover for RTO of 1 hour. Regular drills validate the recovery procedures.
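Validating a DR design against its objectives is a direct comparison: data loss is bounded by replication lag (RPO) and downtime by failover duration (RTO). A minimal sketch, where the measured drill values are hypothetical:

```python
from datetime import timedelta

# Sketch of checking a DR design against its stated objectives: data
# loss is bounded by replication lag (RPO), downtime by failover
# duration (RTO). The "measured" drill values below are hypothetical.
RTO = timedelta(hours=1)
RPO = timedelta(minutes=15)

def meets_objectives(replication_lag, failover_time):
    return replication_lag <= RPO and failover_time <= RTO

# A drill measured 4 minutes of replication lag and a 25-minute failover:
print(meets_objectives(timedelta(minutes=4), timedelta(minutes=25)))   # True
# 30 minutes of lag would blow the 15-minute RPO regardless of failover speed:
print(meets_objectives(timedelta(minutes=30), timedelta(minutes=25)))  # False
```

This is why the explanation stresses regular drills: the only trustworthy values for lag and failover time are measured ones, not the ones on the architecture diagram.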
A company wants to implement a machine learning pipeline that retrains models nightly using BigQuery data. What is the most efficient solution?
A. Export data from BigQuery to Cloud Storage, then manually start training jobs
B. Use Compute Engine with cron jobs to manage model training
C. Implement custom Python scripts that run on Cloud Functions with 15-minute timeout limits
D. Use Cloud Scheduler to trigger Vertex AI training pipelines nightly, reading data directly from BigQuery ✓ Correct
Explanation
Cloud Scheduler provides reliable scheduling, Vertex AI pipelines handle model training at scale, and BigQuery integration avoids data movement, making this the most efficient approach.
You are designing a solution to detect and prevent data exfiltration attempts from your Google Cloud environment. Which service is most appropriate?
A. Cloud Audit Logs for post-event detection only
B. Cloud IAM roles without additional network controls
C. Cloud Firewall rules only
D. VPC Service Controls with Access Context Manager to enforce context-based access policies and prevent unauthorized data access ✓ Correct
Explanation
VPC Service Controls creates security perimeters that prevent data movement across boundaries, while Access Context Manager enforces context-based policies, providing proactive exfiltration prevention.
Your legacy application requires a static IP address that must not change. How should you implement this in Google Cloud?
A. Use Cloud VPN with a fixed tunnel endpoint
B. Reserve a static external IP address and assign it to your Compute Engine instance or Cloud Load Balancer ✓ Correct
C. Configure IPv6 addresses which are inherently static in Google Cloud
D. Rely on the default dynamic IP assignment and update DNS records frequently
Explanation
Static external IP addresses in Google Cloud remain fixed and can be assigned to instances or load balancers, meeting the requirement for applications that depend on unchanging IPs.
You need to implement role-based access control (RBAC) where team members have different permissions based on their organizational roles. What is the best practice?
A. Use Google Cloud IAM predefined roles aligned with organizational functions, combined with custom roles for specific requirements, and audit with Cloud Audit Logs ✓ Correct
B. Use Cloud Identity groups but ignore IAM role differentiation
C. Grant all users Editor role for simplicity and revoke permissions individually when needed
D. Create individual IAM policies for each person manually
Explanation
Predefined IAM roles provide standard permission sets for common functions, custom roles allow fine-tuning, and Cloud Audit Logs enable compliance verification. This approach is scalable and maintainable.
Your organization needs to ensure that developers cannot accidentally delete production databases. What control mechanism should you implement?
A. Use Cloud Audit Logs to alert after accidental deletion
B. Implement only application-level delete confirmations
C. Disable delete operations entirely in all environments
D. Use IAM roles to restrict delete permissions to administrators only, combined with Cloud Resource Manager hierarchy and policy constraints ✓ Correct
Explanation
IAM roles with proper hierarchy and policy constraints provide infrastructure-level enforcement that prevents developers from having delete permissions on production resources, while administrators retain control.
Your organization needs to migrate a monolithic application to Google Cloud. The application has tightly coupled components with synchronous dependencies. Which decomposition strategy should you recommend first?
A. Identify service boundaries by analyzing data flow and dependencies, then plan incremental migration ✓ Correct
B. Break the application into microservices without analyzing current architecture
C. Lift and shift the entire monolith to Compute Engine, then refactor later
D. Immediately containerize all components and deploy to GKE
Explanation
A proper migration strategy requires understanding the application's current structure and dependencies before decomposing it. Analyzing data flow and service boundaries ensures a successful incremental migration path.
You are designing a multi-region disaster recovery solution for a critical database. Your RTO is 1 hour and RPO is 15 minutes. Which combination of Cloud SQL features best meets these requirements?
-
A
Cross-region read replicas with automated backups every 4 hours
-
B
Single region with on-demand snapshots taken every 30 minutes
-
C
Cross-region high availability configuration with automated backups and point-in-time recovery
✓ Correct
-
D
Read replicas in the same region with manual failover and daily backups
Explanation
Cross-region HA configuration provides automatic failover (meeting RTO), while automated backups combined with point-in-time recovery can achieve the 15-minute RPO requirement. This is the only option that addresses both RTO and RPO.
Your team is implementing Infrastructure as Code using Terraform for Google Cloud. What is the primary advantage of storing Terraform state in Cloud Storage with locking enabled?
-
A
Eliminates the need for version control systems
-
B
Prevents concurrent modifications and ensures consistency across team members
✓ Correct
-
C
Reduces the cost of Terraform executions
-
D
Automatically encrypts all resources created by Terraform
Explanation
State locking in Cloud Storage prevents race conditions when multiple team members or automated processes attempt to modify infrastructure simultaneously, ensuring state consistency and preventing conflicts.
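The race condition that locking prevents can be simulated in a few lines: a second "apply" that starts while the lock is held is rejected instead of writing a conflicting state. This is an illustrative stand-in for the lock Terraform takes on its GCS-backed state, not Terraform's actual implementation.

```python
import threading

state_lock = threading.Lock()
state = {"serial": 0}          # stand-in for the Terraform state file
errors = []

def terraform_apply(who):
    # Fail fast if another run holds the lock, like `terraform apply`
    # reporting that the state is locked.
    if not state_lock.acquire(blocking=False):
        errors.append(f"{who}: state locked")
        return
    try:
        state["serial"] += 1   # mutate shared state only while locked
    finally:
        state_lock.release()

terraform_apply("alice")       # no contention: succeeds
state_lock.acquire()           # simulate another run already in progress
terraform_apply("bob")         # rejected instead of corrupting state
state_lock.release()

assert state["serial"] == 1
assert errors == ["bob: state locked"]
```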
You need to implement fine-grained access control for a team that manages multiple Google Cloud projects. The team members should have different permissions per project. Which IAM binding method is most efficient?
-
A
Assign custom roles to individual users in each project
-
B
Create security groups and bind them to predefined or custom roles at the appropriate resource hierarchy level
✓ Correct
-
C
Use predefined roles applied at the folder level and override with project-level bindings
-
D
Assign all permissions at the organization level and manage exceptions per user
Explanation
Using security groups enables scalable, maintainable access control. Groups can be managed centrally while IAM bindings at the folder or project level provide the fine-grained control needed for different permission requirements.
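The scalability argument is simple counting, sketched here with hypothetical team sizes: per-user bindings grow with users times projects, while group bindings grow only with projects, and membership changes touch the group rather than every IAM policy.

```python
users, projects = 25, 12

per_user_bindings = users * projects   # one binding per user per project
group_bindings = projects              # one group binding per project

print(per_user_bindings, group_bindings)  # 300 12
```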
Your application experiences variable traffic patterns with peaks during specific hours. You've deployed it on GKE with Horizontal Pod Autoscaling (HPA). What metric should you prioritize for scaling decisions to ensure cost efficiency?
-
A
Custom metrics based on application-specific business logic such as request latency or queue depth
✓ Correct
-
B
Memory utilization alone
-
C
CPU utilization alone
-
D
Network bandwidth consumption
Explanation
Custom metrics tied to application behavior provide better cost efficiency than generic CPU/memory metrics. Scaling based on request latency or queue depth ensures resources match actual demand rather than arbitrary resource thresholds.
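The HPA scaling rule documented for Kubernetes applies to custom metrics exactly as it does to CPU: desiredReplicas = ceil(currentReplicas x currentMetric / targetMetric). A quick sketch with a hypothetical queue-depth target:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Kubernetes HPA formula: ceil(current * metric / target)."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# Target: 100 queued messages per pod. At 250 per pod across 4 pods,
# the HPA scales to 4 * 250 / 100 = 10 replicas.
print(desired_replicas(4, current_metric=250, target_metric=100))  # 10
```

Because queue depth tracks actual demand, the deployment also scales back down as the backlog drains, which is where the cost efficiency comes from.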
You are designing a data analytics pipeline that ingests streaming data from IoT devices, requires real-time processing, and stores results for historical analysis. Which Google Cloud services combination is most appropriate?
-
A
Cloud Pub/Sub, Cloud Dataflow, and BigQuery
✓ Correct
-
B
Cloud Storage, Cloud Dataproc, and Cloud SQL
-
C
Cloud Functions, Cloud Firestore, and Cloud Memorystore
-
D
Cloud Tasks, Cloud Run, and Cloud Spanner
Explanation
Cloud Pub/Sub handles streaming ingestion, Cloud Dataflow provides real-time processing capabilities, and BigQuery serves as the data warehouse for historical analysis. This combination is specifically designed for streaming analytics pipelines.
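The Dataflow stage of such a pipeline typically performs windowed aggregation before results land in the warehouse. A toy stand-in with hypothetical IoT readings (a real pipeline would use Apache Beam's windowing primitives rather than a dict):

```python
from collections import defaultdict

events = [                       # (epoch_seconds, device_id, temperature)
    (0,  "dev1", 20.0),
    (30, "dev1", 22.0),
    (70, "dev2", 19.0),
    (95, "dev1", 21.0),
]

WINDOW = 60                      # tumbling 60-second windows

buckets = defaultdict(list)
for ts, dev, temp in events:
    buckets[(ts // WINDOW, dev)].append(temp)

# One aggregated row per (window, device), as would be written to the
# warehouse table for historical analysis.
rows = sorted((w, d, sum(v) / len(v)) for (w, d), v in buckets.items())
print(rows)  # [(0, 'dev1', 21.0), (1, 'dev1', 21.0), (1, 'dev2', 19.0)]
```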
Your organization requires all resources to encrypt data in transit using mutual TLS. You're running services on GKE. How should you implement this requirement across all services?
-
A
Implement mutual TLS using Istio service mesh with automatic sidecar injection and strict mTLS policies
✓ Correct
-
B
Use Cloud Load Balancer with SSL certificates on all service-to-service communication
-
C
Configure TLS termination at the Ingress controller level only
-
D
Require each microservice to manually implement mTLS in application code
Explanation
Istio service mesh automates mTLS implementation across all services through sidecar proxies, enforces policies consistently, and eliminates the burden of manual implementation in application code. This is the standard approach for service mesh security in Kubernetes.
You need to set up network connectivity between your on-premises data center and Google Cloud VPC. The connection must support failover and require minimal latency. Which solution is best suited?
-
A
Single Cloud Interconnect connection for maximum throughput
-
B
Cloud VPN with multiple tunnels to the same peer gateway
-
C
Multiple VPN connections using different ISPs for redundancy and failover
-
D
Dedicated Cloud Interconnect with a Cloud VPN backup connection
✓ Correct
Explanation
Cloud Interconnect provides dedicated, low-latency connectivity, while the Cloud VPN backup ensures failover capability. This combination meets both the performance and redundancy requirements effectively.
Your organization is migrating legacy applications that require specific OS-level configurations and kernel modules. Which compute option provides the most flexibility while maintaining cloud-native benefits?
-
A
GKE with custom node images and DaemonSets for system-level requirements
✓ Correct
-
B
Compute Engine with custom images and startup scripts for maximum control
-
C
App Engine standard for managed platform without OS customization
-
D
Cloud Run for serverless execution without OS control
Explanation
GKE allows OS-level customization through custom node images and DaemonSets while retaining Kubernetes orchestration benefits. This preserves a cloud-native operating model that a plain Compute Engine lift-and-shift would forgo, while still meeting the legacy system requirements.
You are implementing a compliance requirement where audit logs must be immutable once written. Which Google Cloud service and configuration meets this requirement?
-
A
Storing audit logs in BigQuery with table snapshots created hourly
-
B
Cloud Audit Logs exported to Firestore with document-level security rules
-
C
Cloud Logging with retention policies set to indefinite duration
-
D
Cloud Audit Logs exported to Cloud Storage buckets configured with Object Lock enabled
✓ Correct
Explanation
Cloud Storage Object Lock prevents deletion or modification of objects after they are written, making it the appropriate storage mechanism for immutable audit logs. This ensures compliance with immutability requirements.
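The retention-lock semantics can be sketched as a write-once store: once an object is written, modification and deletion are refused until its retention period expires. Illustrative only; in Cloud Storage this enforcement happens server-side, and the class below is hypothetical.

```python
class RetentionStore:
    """Toy model of object retention lock: writes set a lock expiry;
    overwrite/delete before expiry is refused."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._objects = {}               # name -> (payload, locked_until)

    def write(self, name, payload, now):
        if name in self._objects and now < self._objects[name][1]:
            raise PermissionError(f"{name} is retention-locked")
        self._objects[name] = (payload, now + self.retention)

    def delete(self, name, now):
        if now < self._objects[name][1]:
            raise PermissionError(f"{name} is retention-locked")
        del self._objects[name]

store = RetentionStore(retention_seconds=3600)
store.write("audit-2024-01-01.log", b"entry", now=0)
try:
    store.delete("audit-2024-01-01.log", now=100)   # inside retention window
except PermissionError as e:
    print(e)                                        # ... is retention-locked
store.delete("audit-2024-01-01.log", now=4000)      # allowed after expiry
```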