Google Cloud Certification

Associate Cloud Engineer — Google Cloud Study Guide

63 practice questions with correct answers and detailed explanations. Use this guide to review concepts before taking the practice exam.


About the Associate Exam

The Google Cloud Associate Cloud Engineer certification validates professional expertise in Google Cloud technologies. This study guide covers all 63 practice questions from our Associate practice test, complete with correct answers and explanations to help you understand each concept thoroughly.

Review each question and explanation below, then test yourself with the full interactive practice exam to measure your readiness.

63 Practice Questions & Answers

Q1 Medium

You need to deploy a containerized application that processes real-time data streams. Which Google Cloud service would be most appropriate for this use case?

  • A App Engine standard environment
  • B Compute Engine with manual scaling
  • C Cloud Run with Pub/Sub ✓ Correct
  • D Cloud Storage with Transfer Service
Explanation

Cloud Run paired with Pub/Sub is ideal for event-driven, real-time data processing workflows, offering automatic scaling and serverless architecture. App Engine is more suitable for traditional web applications, not streaming data.

Q2 Medium

Your organization requires that all data at rest in Cloud Storage be encrypted with customer-managed keys. Which encryption method should you implement?

  • A Server-side encryption with temporary session keys
  • B Application-layer encryption before uploading
  • C Cloud Key Management Service (KMS) with customer-managed keys ✓ Correct
  • D Google-managed default encryption
Explanation

Cloud KMS with customer-managed keys (CMEK) provides the required level of control and compliance for customer-managed encryption of data at rest. Google-managed encryption does not meet the customer-managed requirement.
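As a sketch of how CMEK is wired up in practice (the keyring, key, bucket, and region names below are placeholders, not values from the question):

```shell
# Create a keyring and key in Cloud KMS (names and location are illustrative).
gcloud kms keyrings create my-keyring --location=us-central1
gcloud kms keys create my-key \
    --keyring=my-keyring --location=us-central1 --purpose=encryption

# Set the key as the bucket's default encryption key.
# Note: the Cloud Storage service agent needs
# roles/cloudkms.cryptoKeyEncrypterDecrypter on the key first.
gcloud storage buckets update gs://my-bucket \
    --default-encryption-key=projects/PROJECT_ID/locations/us-central1/keyRings/my-keyring/cryptoKeys/my-key
```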

Q3 Easy

You are configuring VPC Flow Logs for network troubleshooting. What is the primary benefit of enabling flow logs on a subnet?

  • A Encrypts all network traffic between instances
  • B Captures detailed information about IP traffic entering and leaving network interfaces ✓ Correct
  • C Automatically blocks malicious traffic in real-time
  • D Reduces bandwidth costs by compressing network packets
Explanation

VPC Flow Logs capture metadata about IP traffic (source, destination, protocol, bytes transferred) for monitoring and troubleshooting. They do not block traffic, encrypt connections, or reduce costs.
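Flow logs are enabled per subnet; a hedged example (subnet name, region, and sampling values are illustrative):

```shell
# Turn on VPC Flow Logs for an existing subnet, with sampling and
# aggregation tuned for troubleshooting.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-flow-logs \
    --logging-aggregation-interval=interval-5-sec \
    --logging-flow-sampling=0.5 \
    --logging-metadata=include-all
```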

Q4 Hard

Your application requires a database with strong consistency guarantees and ACID transactions across multiple regions. Which Google Cloud database service best meets these requirements?

  • A Cloud SQL with read replicas
  • B Cloud Bigtable
  • C Cloud Firestore in datastore mode
  • D Cloud Spanner ✓ Correct
Explanation

Cloud Spanner provides strong consistency, ACID transactions, and horizontal scalability across regions, making it ideal for globally distributed transactional workloads. Cloud Bigtable lacks ACID guarantees, and Cloud SQL replicas provide eventual consistency.

Q5 Medium

You need to create a custom machine type with 12 vCPUs and 32 GB of memory in Compute Engine. How should you approach this?

  • A Request Google Cloud Support to manually configure hardware
  • B Create a custom machine type through the Google Cloud Console or gcloud CLI within the supported range ✓ Correct
  • C Use Terraform to specify arbitrary CPU and memory values
  • D Use the console to select predefined custom machine types only
Explanation

Custom machine types can be created directly through the Console or the gcloud CLI within supported ranges (for N1, 1 vCPU or an even number of vCPUs, with 0.9–6.5 GB of memory per vCPU). Support intervention is not needed for standard custom configurations.
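A minimal sketch of the gcloud approach (instance name and zone are placeholders):

```shell
# Create an instance with a 12 vCPU / 32 GB custom machine type.
# 32 GB across 12 vCPUs is ~2.7 GB per vCPU, inside the 0.9-6.5 GB
# per-vCPU range allowed for custom machine types.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --custom-cpu=12 \
    --custom-memory=32GB
```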

Q6 Medium

Your organization's compliance policy requires that encryption keys never leave a specific geographic region. Which key management strategy should you use?

  • A Store keys in Cloud Datastore with regional replication disabled
  • B Use Application-Default Credentials for automatic key management
  • C Enable key rotation in Cloud KMS
  • D Create a Cloud KMS keyring in the target region with appropriate IAM bindings ✓ Correct
Explanation

Creating a Cloud KMS keyring in a specific region ensures keys remain in that geographic location. Key rotation addresses key lifecycle, not geographic constraints. Application-Default Credentials and Datastore are not appropriate for this compliance requirement.

Q7 Medium

You have deployed a Kubernetes cluster on GKE with Workload Identity enabled. What is the primary security benefit of this configuration?

  • A All pod-to-pod communication is encrypted with TLS
  • B It automatically patches all container vulnerabilities
  • C Pods can assume Google Cloud IAM roles without storing service account keys ✓ Correct
  • D Network policies are automatically enforced between pods
Explanation

Workload Identity allows Kubernetes service accounts to impersonate Google Cloud service accounts, eliminating the need to distribute and manage service account keys. This significantly improves the security posture of the cluster.

Q8 Medium

When configuring Cloud Load Balancing, you need to route traffic based on the request path (e.g., /api/* to backend A, /static/* to backend B). Which load balancer type should you use?

  • A Network Load Balancer (Layer 4)
  • B Internal Load Balancer for TCP/UDP
  • C HTTP(S) Load Balancer with URL maps and path rules ✓ Correct
  • D SSL Proxy Load Balancer
Explanation

HTTP(S) Load Balancer supports content-based routing through URL maps and path rules, enabling path-based routing decisions. Network Load Balancer operates at Layer 4 and cannot inspect application-layer paths.

Q9 Medium

Your application uses Cloud Pub/Sub for event streaming. You notice some subscribers are falling behind on processing. Which metric should you monitor to identify this issue?

  • A Publish request count
  • B Subscription oldest unacked message age ✓ Correct
  • C Topic message retention period
  • D Subscriber acknowledgment deadline
Explanation

The 'oldest unacked message age' metric indicates how far behind a subscriber is in processing messages. High values suggest the subscriber cannot keep pace with the message delivery rate.

Q10 Easy

You need to ensure that a Compute Engine instance can access credentials securely without embedding secrets in code. Which approach is recommended?

  • A Use service account attached to the instance with appropriate IAM roles ✓ Correct
  • B Store credentials in /home/user/.config files
  • C Pass credentials as environment variables during instance startup
  • D Store credentials in a local text file and rotate manually monthly
Explanation

Attaching a service account to an instance and granting it specific IAM roles is the secure, recommended approach. Application code can then use Application Default Credentials without handling keys directly.
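A sketch of the recommended pattern (account, project, role, and instance names are placeholders; grant only the roles the workload actually needs):

```shell
# Create a dedicated service account for the workload.
gcloud iam service-accounts create app-runtime \
    --display-name="App runtime service account"

# Grant it a narrowly scoped role (illustrative example).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:app-runtime@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# Attach the account at instance creation; code on the VM can then use
# Application Default Credentials with no key files involved.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --service-account=app-runtime@PROJECT_ID.iam.gserviceaccount.com \
    --scopes=cloud-platform
```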

Q11 Hard

Your organization has hundreds of GCP projects. You need to enforce a policy that all Compute Engine instances must have OS Login enabled. What is the most efficient way to implement this across all projects?

  • A Use Cloud Scheduler to run a daily script that enables OS Login
  • B Manually enable OS Login on each instance in each project
  • C Create a Deployment Manager template and deploy it to each project
  • D Use Organization Policy constraints to enforce OS Login at the organization level ✓ Correct
Explanation

Organization Policy constraints provide centralized enforcement of OS Login requirements across all projects, folders, and resources without manual intervention. This is more scalable and maintainable than project-by-project configuration.
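As a sketch, the OS Login boolean constraint can be enforced once at the organization node (ORG_ID is a placeholder for your numeric organization ID):

```shell
# Enforce compute.requireOsLogin for every project in the organization.
gcloud resource-manager org-policies enable-enforce \
    compute.requireOsLogin --organization=ORG_ID
```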

Q12 Hard

You are designing a disaster recovery solution for a critical application. You need RPO of 1 hour and RTO of 4 hours. Which backup and recovery strategy best meets these requirements?

  • A Hourly snapshots of persistent disks with automated failover setup ✓ Correct
  • B Weekly full backups with daily incremental backups
  • C Real-time replication to a secondary region with standby resources
  • D Daily snapshots of persistent disks stored in Cloud Storage
Explanation

Hourly snapshots provide the 1-hour RPO, and automated failover infrastructure supports the 4-hour RTO requirement. Daily or weekly backups cannot meet the 1-hour RPO, and real-time replication exceeds typical cost/complexity needs for this RPO/RTO.
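A hedged sketch of automating the hourly snapshots (policy name, region, disk name, and retention are illustrative):

```shell
# Create an hourly snapshot schedule with 7-day retention.
gcloud compute resource-policies create snapshot-schedule hourly-snaps \
    --region=us-central1 \
    --hourly-schedule=1 \
    --start-time=00:00 \
    --max-retention-days=7

# Attach the schedule to the disk that needs the 1-hour RPO.
gcloud compute disks add-resource-policies my-disk \
    --zone=us-central1-a \
    --resource-policies=hourly-snaps
```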

Q13 Medium

You have a Cloud SQL instance with a large dataset. You need to export data to a specific Cloud Storage bucket that is in a different project. What authentication approach should you use?

  • A Create a shared VPC network connecting both projects
  • B Export to a bucket in the same project, then copy files across projects
  • C Use a service account key to authenticate between projects
  • D Grant the Cloud SQL service account 'Storage Admin' role on the target bucket with cross-project IAM binding ✓ Correct
Explanation

Cross-project IAM bindings allow the Cloud SQL service account in one project to access Cloud Storage resources in another without managing keys or complex network setup. This is the recommended secure approach.
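A sketch of the workflow (instance, bucket, and database names are placeholders; roles/storage.objectAdmin on the bucket is sufficient for writing export files, even though the question phrases it as 'Storage Admin'):

```shell
# Find the Cloud SQL instance's service account email.
SQL_SA=$(gcloud sql instances describe my-instance \
    --format='value(serviceAccountEmailAddress)')

# Grant that account write access on the bucket in the other project.
gcloud storage buckets add-iam-policy-binding gs://target-bucket \
    --member="serviceAccount:${SQL_SA}" \
    --role=roles/storage.objectAdmin

# Run the export across the project boundary.
gcloud sql export sql my-instance gs://target-bucket/dump.sql \
    --database=mydb
```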

Q14 Easy

You are configuring an identity and access management hierarchy for your organization. Which of the following represents the correct order from broadest to most specific scope?

  • A Folder → Organization → Project → Resource
  • B Organization → Folder → Project → Resource ✓ Correct
  • C Project → Folder → Organization → Resource
  • D Resource → Project → Folder → Organization
Explanation

The IAM resource hierarchy flows from the broadest scope (Organization) down to the most specific (individual Resource). Permissions granted at higher levels are inherited by all resources below them, so Organization → Folder → Project → Resource is the correct ordering.

Q15 Medium

Your application requires ultra-low latency access to frequently accessed data across multiple regions. Which caching solution should you implement?

  • A Cloud Storage with strong consistency
  • B Memcached instances in each region with application-level cache coordination
  • C Cloud Memorystore with standard Redis configuration ✓ Correct
  • D Cloud CDN for static asset caching at edge locations
Explanation

Cloud Memorystore (Redis) provides ultra-low latency in-memory caching with replication capabilities. Cloud CDN suits static assets, Memcached requires manual coordination, and Cloud Storage doesn't offer the required latency characteristics.

Q16 Easy

You need to set up monitoring for a critical application running on Compute Engine. You want to receive alerts when CPU utilization exceeds 80% for more than 5 minutes. Which Google Cloud service should you use?

  • A Cloud Logging with log-based metrics
  • B Compute Engine instance groups with autoscaling rules
  • C Cloud Monitoring with alert policies ✓ Correct
  • D Cloud Profiler to analyze resource usage
Explanation

Cloud Monitoring allows creation of alert policies based on metrics thresholds and durations. Cloud Logging is for event tracking, instance groups handle scaling, and Cloud Profiler is for code performance analysis.

Q17 Medium

You are designing a multi-tier application with separate networks for frontend, backend, and database layers. How should you structure your VPC to enforce network isolation?

  • A Create multiple VPCs, one per tier, with VPC peering enabled
  • B Create separate projects for each tier with isolated networks
  • C Use Cloud Armor to restrict traffic between tiers
  • D Use a single VPC with multiple subnets and firewall rules to control traffic between layers ✓ Correct
Explanation

A single VPC with multiple subnets and firewall rules provides efficient isolation between network tiers while maintaining intra-organization connectivity. Multiple VPCs add unnecessary complexity, and Cloud Armor is designed for edge protection, not inter-tier isolation.

Q18 Hard

Your organization uses Cloud Identity for user management. You need to ensure that users can only access Google Cloud resources from corporate IP addresses. Which feature should you implement?

  • A VPC Service Controls with access levels based on IP ranges ✓ Correct
  • B Cloud Armor with geographic restrictions
  • C Compute Engine firewall rules at the instance level
  • D Cloud IAM conditional access policies based on IP address attributes
Explanation

VPC Service Controls with access levels can enforce IP-based restrictions for Google Cloud API access. Cloud IAM doesn't natively support IP-based conditions, and firewall rules/Cloud Armor operate at different layers of the stack.

Q19 Medium

You have deployed an application on Cloud Run that processes images. The function occasionally times out on large files. What is the first optimization you should attempt?

  • A Increase the memory allocation to the Cloud Run service
  • B Implement request queueing with Pub/Sub for asynchronous processing ✓ Correct
  • C Reduce the image quality before processing to decrease execution time
  • D Deploy to Compute Engine for more control over timeout settings
Explanation

For long-running operations like image processing, asynchronous processing with Pub/Sub decouples the request from processing, avoiding timeout issues. Increasing memory can help but doesn't address the timeout constraint; the other options are less optimal.

Q20 Medium

When using Cloud Build, you want to automatically build and deploy your application whenever code is pushed to a specific branch in Cloud Source Repositories. How should you configure this?

  • A Configure Cloud Source Repositories to invoke Cloud Build webhooks on push events
  • B Set up a Cloud Scheduler job to poll the repository and invoke Cloud Build
  • C Use Cloud Functions to detect commits and trigger Cloud Build manually
  • D Create a Cloud Build trigger connected to the repository with branch filter matching your target branch ✓ Correct
Explanation

Cloud Build triggers provide native integration with Cloud Source Repositories and support branch filtering, enabling automatic builds on code pushes. Cloud Functions, webhooks, and Cloud Scheduler are less direct approaches for this use case.
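A minimal sketch of such a trigger (repository name, branch pattern, and config path are placeholders):

```shell
# Build on every push to main using the repo's cloudbuild.yaml.
gcloud builds triggers create cloud-source-repositories \
    --repo=my-repo \
    --branch-pattern='^main$' \
    --build-config=cloudbuild.yaml
```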

Q21 Medium

You need to analyze network traffic patterns across your GCP infrastructure. Which tool provides the most detailed visibility into network flows and connectivity?

  • A Network Topology tool in the Google Cloud Console
  • B VPC Flow Logs exported to Cloud Logging ✓ Correct
  • C Compute Engine Activity logs
  • D Cloud Trace for latency analysis
Explanation

VPC Flow Logs provide detailed packet-level information (source, destination, protocol, bytes) exported to Cloud Logging for comprehensive analysis. Network Topology shows architecture but not flows; Activity logs track resource changes; Cloud Trace focuses on application latency.

Q22 Hard

Your organization requires that all Google Cloud APIs be accessed through a VPC Service Controls perimeter. How should you configure this to allow legitimate internal applications while blocking external access?

  • A Use Cloud Armor rules to block external traffic at the load balancer level
  • B Enable VPC Flow Logs and configure blocking rules based on traffic patterns
  • C Define access levels that specify authorized identity attributes, then create service control policies restricting API access to the perimeter ✓ Correct
  • D Implement firewall rules on the VPC to prevent external access to API endpoints
Explanation

VPC Service Controls perimeters combined with access levels provide comprehensive control over API access based on identity and network context. Cloud Armor, firewall rules, and VPC Flow Logs don't directly enforce service-level access controls.

Q23 Medium

You are migrating a legacy application from on-premises to Google Cloud. The application requires specific Windows Server patches. How should you handle this?

  • A Rely on Google Cloud to automatically patch all instances
  • B Manually apply patches after each instance is launched
  • C Configure Cloud Scheduler to trigger patch updates weekly
  • D Use Compute Engine custom images with patches pre-installed and enable OS patch management ✓ Correct
Explanation

Creating custom images with patches pre-installed and enabling OS patch management provides consistent, automated patching. Manual patching is error-prone; Google Cloud doesn't auto-patch, and Cloud Scheduler isn't designed for OS patching.

Q24 Hard

You have a Cloud Firestore database in native mode with global distribution enabled. What is the main consideration for consistency guarantees in multi-region deployments?

  • A Firestore always provides strong consistency regardless of region configuration
  • B Consistency is determined by the document write frequency and region latency
  • C Multi-region deployments provide eventual consistency with potential temporary inconsistencies between regions
  • D Firestore offers strong consistency within a single region but eventual consistency across regions ✓ Correct
Explanation

Cloud Firestore provides strong consistency for reads and writes within a single region, but multi-region deployments involve eventual consistency between regions due to replication latency. This is an important design consideration for global applications.

Q25 Easy

Your application needs to store sensitive data such as API keys and database passwords. Which Google Cloud service is specifically designed for this use case?

  • A Cloud Datastore with access restrictions
  • B Secret Manager for secure secret storage and retrieval ✓ Correct
  • C Cloud Spanner with field-level encryption
  • D Cloud Storage with encryption
Explanation

Secret Manager is purpose-built for storing, managing, and retrieving sensitive data with automatic rotation, audit logging, and fine-grained access control. Cloud Storage, Datastore, and Spanner are general-purpose storage solutions not optimized for secrets.
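The basic lifecycle looks like this (secret name and value are illustrative):

```shell
# Create a secret, add a version, and read it back.
gcloud secrets create db-password --replication-policy=automatic
echo -n 's3cr3t-value' | gcloud secrets versions add db-password --data-file=-
gcloud secrets versions access latest --secret=db-password
```

Applications typically read secrets at startup via the Secret Manager API using the attached service account, rather than baking values into images or environment files.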

Q26 Medium

You are implementing a disaster recovery strategy with a standby region. To minimize data loss, you replicate data continuously to the standby region. However, there is replication lag. Which metric should you monitor to ensure it meets your RPO requirements?

  • A Total data transferred per day between regions
  • B Replication lag time between primary and standby databases ✓ Correct
  • C Network latency between primary and standby regions
  • D Instance startup time in the standby region
Explanation

Replication lag directly determines the maximum potential data loss, which defines the Recovery Point Objective (RPO). Network latency, data transfer volume, and startup time are related but don't directly measure RPO.

Q27 Medium

You need to create a GKE cluster with nodes that automatically scale based on workload demand. Which features should you enable?

  • A Manual node management with scheduled scaling scripts
  • B Vertical Pod Autoscaler to adjust pod resource requests
  • C Node pool autoscaling with Cluster Autoscaler ✓ Correct
  • D Horizontal Pod Autoscaler (HPA) only
Explanation

Node pool autoscaling with Cluster Autoscaler automatically adds or removes nodes based on pod resource requests and cluster utilization. HPA scales pods within existing nodes; VPA adjusts resource requests but doesn't add nodes; manual scaling is inefficient.
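A sketch of creating a cluster with node autoscaling enabled (cluster name, zone, and node bounds are placeholders):

```shell
# Default node pool scales between 1 and 10 nodes based on
# pending pod resource requests.
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --enable-autoscaling \
    --min-nodes=1 \
    --max-nodes=10
```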

Q28 Medium

You need to deploy a containerized application on Google Cloud with automatic scaling based on CPU utilization. Which service should you choose?

  • A Compute Engine with instance templates
  • B Cloud Run
  • C App Engine standard environment
  • D Google Kubernetes Engine (GKE) ✓ Correct
Explanation

GKE provides native Kubernetes support with automatic scaling based on metrics including CPU utilization (via the Horizontal Pod Autoscaler and Cluster Autoscaler). While Cloud Run also auto-scales, its scaling is driven primarily by request concurrency rather than CPU-based metrics.

Q29 Easy

What is the primary purpose of Cloud IAM service accounts?

  • A To authenticate human users accessing Google Cloud Console
  • B To provide application-to-application authentication and authorization ✓ Correct
  • C To manage billing accounts and payment methods
  • D To track user activity and generate audit logs
Explanation

Service accounts are designed for applications and services to authenticate with Google Cloud APIs and resources, not for human users.

Q30 Medium

You are configuring VPC firewall rules for your application. The rule has direction set to INGRESS and action set to DENY. What happens when this rule matches traffic?

  • A Outbound traffic from the VM is blocked
  • B The traffic is allowed to enter the network but logged
  • C All traffic in both directions matching the rule criteria is denied
  • D Inbound traffic matching the rule is dropped ✓ Correct
Explanation

INGRESS rules apply to inbound traffic, and DENY action blocks matching traffic. The direction and action work together to drop incoming packets matching the rule criteria.
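An illustrative INGRESS/DENY rule (rule name, network, port, and priority are placeholders; lower priority numbers are evaluated first):

```shell
# Drop inbound SSH from any source on this network.
gcloud compute firewall-rules create deny-ssh-external \
    --network=my-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=tcp:22 \
    --source-ranges=0.0.0.0/0 \
    --priority=900
```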

Q31 Medium

Your company requires that all data at rest in Cloud Storage buckets be encrypted with customer-managed encryption keys (CMEK). Which service must you use to manage these keys?

  • A Cloud Identity and Access Management
  • B Cloud HSM
  • C Cloud Key Management Service (KMS) ✓ Correct
  • D Cloud Security Command Center
Explanation

Cloud KMS is the service specifically designed to manage encryption keys for CMEK scenarios across Google Cloud services including Cloud Storage.

Q32 Medium

You have a Compute Engine instance with a standard persistent disk. You need to increase the disk size from 100 GB to 200 GB while the instance is running. What is the correct approach?

  • A Resize the disk using gcloud or Console, then expand the filesystem on the instance ✓ Correct
  • B Use Cloud Snapshots to automatically resize the disk during operation
  • C Create a new disk with larger capacity, copy data, and detach the old disk
  • D Stop the instance, delete the disk, and create a new one
Explanation

Persistent disks can be resized while attached and running, but you must then expand the filesystem using tools like resize2fs or equivalent to make the additional space usable.
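A sketch of the two steps (disk name, zone, and device path are placeholders; the filesystem step assumes ext4 on Linux):

```shell
# Step 1: grow the disk online; no instance restart required.
gcloud compute disks resize my-disk --zone=us-central1-a --size=200GB

# Step 2: on the instance, grow the filesystem to use the new space.
# If the filesystem sits on a partition rather than the raw device,
# run growpart on the partition first.
sudo resize2fs /dev/sdb
```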

Q33 Easy

When using Cloud Pub/Sub, what is the relationship between topics and subscriptions?

  • A Topics and subscriptions are synonymous; they represent the same entity
  • B Topics receive messages from publishers; subscriptions deliver messages to subscribers ✓ Correct
  • C Subscriptions create topics automatically when they receive their first message
  • D Topics are only used for internal Google Cloud services and cannot be accessed by subscribers
Explanation

Cloud Pub/Sub uses a publish-subscribe model where topics are message channels created by publishers, and subscriptions allow subscribers to receive messages from those topics.

Q34 Medium

You need to migrate a database from on-premises to Cloud SQL with minimal downtime. Which feature should you use?

  • A One-time Cloud SQL import from SQL dump file
  • B Cloud Storage transfer with Database Import API
  • C Database Migration Service with continuous replication ✓ Correct
  • D Manual export using mysqldump and import
Explanation

Database Migration Service provides continuous replication to minimize downtime, allowing you to validate the migrated data before cutover and keep the source and target in sync.

Q35 Medium

In Cloud Load Balancing, what is the difference between a health check and a backend service?

  • A Backend services check instance health; health checks route traffic to available instances
  • B They are the same component with different names depending on the load balancer type
  • C Health checks monitor instance availability; backend services define the group of instances and routing rules for traffic ✓ Correct
  • D Health checks are used only for global load balancers and backend services for regional ones
Explanation

Health checks determine if instances are healthy and able to receive traffic, while backend services are groups of instances configured with load balancing parameters and traffic routing policies.

Q36 Medium

You configure a Cloud Storage bucket with uniform bucket-level access enabled. What impact does this have on object-level IAM permissions?

  • A Object-level permissions are converted to bucket-level permissions automatically
  • B Object-level permissions are ignored; only bucket-level IAM policies apply ✓ Correct
  • C Object-level permissions continue to work alongside bucket-level policies for flexible access control
  • D Uniform bucket-level access is incompatible with object-level permissions and must be disabled
Explanation

Enabling uniform bucket-level access disables object-level ACLs and permissions, enforcing that all access control is defined at the bucket level only.
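The setting is a one-line bucket update (bucket name is a placeholder):

```shell
# After this, object ACLs are no longer evaluated; only bucket-level
# IAM policies control access.
gcloud storage buckets update gs://my-bucket --uniform-bucket-level-access
```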

Q37 Hard

You need to ensure that a specific Google Cloud project can only be accessed from your corporate network IP range. Which service should you implement?

  • A IAM conditions restricting access by IP address and VPC
  • B Cloud Armor policies on all resources
  • C VPC Service Controls with access levels and service perimeters ✓ Correct
  • D Cloud KMS with regional restrictions
Explanation

VPC Service Controls allows you to define security perimeters that restrict access to Google Cloud resources based on network location and device attributes, including IP ranges.

Q38 Hard

When creating a custom machine type on Compute Engine, what are the constraints on vCPU selection?

  • A vCPUs must be even numbers and memory must be at least 0.9 GB per vCPU ✓ Correct
  • B vCPUs must be multiples of 4 and selected from predefined configurations only
  • C Custom vCPU selection is not available; only predefined machine types can be used
  • D vCPUs can be any odd number between 1 and 96 with flexible memory allocation
Explanation

Custom machine types require vCPU counts to be even numbers (with some exceptions like 1 vCPU) and memory allocation must meet minimum ratios for optimal performance.

Q39 Medium

You deploy an application on App Engine standard environment and need to store session data that persists across requests. What is the recommended approach?

  • A Write session data to temporary local files
  • B Use local instance memory to cache session data
  • C Store session data directly in environment variables
  • D Store session data in Cloud Datastore or Firestore ✓ Correct
Explanation

App Engine standard environment instances are stateless and may be terminated between requests, so persistent session data must be stored in external services like Datastore or Firestore.

Q40 Easy

What is the primary advantage of using Cloud CDN compared to serving content directly from Cloud Storage?

  • A Cloud CDN provides stronger encryption than Cloud Storage
  • B Cloud CDN automatically compresses all content types
  • C Cloud CDN eliminates the need for origin servers entirely
  • D Cloud CDN caches content at Google edge locations globally to reduce latency and origin load ✓ Correct
Explanation

Cloud CDN integrates with Google's global edge network to cache and serve content closer to users, reducing latency and bandwidth costs at the origin.

Q41 Medium

You are designing a system that requires strong consistency for real-time financial transactions. Which database should you choose?

  • A Cloud SQL with synchronous replication ✓ Correct
  • B Cloud Firestore in Datastore mode with eventual consistency
  • C Cloud Bigtable with eventual consistency configuration
  • D BigQuery with real-time streaming inserts
Explanation

Cloud SQL provides ACID compliance and strong consistency guarantees, making it suitable for financial transactions where data accuracy is critical and eventual consistency is unacceptable.

Q42 Medium

In Google Cloud, what is the purpose of organization policies (the Organization Policy Service)?

  • A To manage billing and cost allocation across projects
  • B To configure network routing and firewall rules
  • C To define IAM roles and permissions for individual users
  • D To enforce centralized control over Google Cloud resource configuration and compliance requirements ✓ Correct
Explanation

Organization policies allow administrators to enforce constraints on how Google Cloud resources can be created and configured, enabling centralized compliance and security governance.

Q43 Medium

You need to monitor the performance of a GKE cluster and receive alerts when pods consume excessive memory. Which tool should you use?

  • A Cloud Logging for pod event tracking and alerting policies
  • B Cloud Monitoring with custom metrics and alert policies on container memory usage ✓ Correct
  • C kubectl logs with manual threshold checking
  • D GKE Dashboard in the Cloud Console for real-time monitoring only
Explanation

Cloud Monitoring provides native integration with GKE to collect container and pod metrics including memory usage, and allows you to create alert policies based on these metrics.

Q44 Medium

What is the difference between Cloud Identity and Google Cloud IAM?

  • A IAM manages authentication; Cloud Identity manages API access tokens
  • B They serve the same purpose with Cloud Identity being the newer version
  • C Cloud Identity is for external users; IAM is only for Google employees
  • D Cloud Identity manages user identities and credentials; IAM controls authorization to cloud resources ✓ Correct
Explanation

Cloud Identity handles user and device identity management (authentication), while IAM controls what authenticated users and service accounts can do with resources (authorization).

Q45 Medium

You configure a Cloud SQL instance for high availability with automatic failover. How does this protect your database?

  • A It creates read replicas in multiple regions for load distribution and disaster recovery
  • B It maintains a synchronous replica that automatically promotes if the primary instance fails ✓ Correct
  • C It backs up the database hourly and allows point-in-time recovery only
  • D It encrypts all database connections using Cloud KMS keys
Explanation

High availability in Cloud SQL creates a standby replica with synchronous replication, enabling automatic failover with minimal downtime if the primary instance becomes unavailable.

Q46 Medium

When using Dataflow to process streaming data, what is the significance of specifying the windowing strategy?

  • A It specifies the data serialization format used for transport
  • B It determines how data is grouped and aggregated over time intervals for analysis ✓ Correct
  • C It defines the region where the pipeline executes
  • D It controls network bandwidth allocation for the pipeline
Explanation

Windowing in Dataflow defines how unbounded streaming data is grouped into finite sets for processing, enabling operations like time-based aggregations and statistics.

Q47 Easy

You have multiple Google Cloud projects in an organization and want to consolidate billing across all projects. How should you configure this?

  • A Configure cross-project billing policies in organization policies
  • B Use Cloud Marketplace to aggregate billing across projects
  • C Link all projects to a single billing account within the organization ✓ Correct
  • D Create separate billing accounts for each project and manually combine invoices
Explanation

Multiple projects can be linked to a single billing account, consolidating charges and enabling you to view combined usage and costs across projects.

Q48 Medium

You need to deploy an application that requires persistent storage accessible from multiple Compute Engine instances simultaneously. Which storage option is most appropriate?

  • A Cloud Storage buckets for all data storage and caching
  • B Multiple persistent disks with manual synchronization between instances
  • C Persistent disks attached to individual VMs with network sharing
  • D Filestore (Network File System) mounted to multiple instances ✓ Correct
Explanation

Filestore provides NFS-based shared file storage that can be mounted simultaneously by multiple Compute Engine instances, enabling shared access without manual synchronization.
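
A minimal sketch of provisioning and mounting a share, with placeholder names, zone, capacity, and IP address:

```shell
# Create a Filestore instance (names and sizes are placeholders).
gcloud filestore instances create nfs-server \
  --zone=us-central1-a \
  --tier=BASIC_HDD \
  --file-share=name=vol1,capacity=1TB \
  --network=name=default

# On each Compute Engine VM, mount the share over NFS.
# Replace 10.0.0.2 with the IP shown by `gcloud filestore instances describe`;
# the VM needs an NFS client installed (e.g. the nfs-common package on Debian).
sudo mount -t nfs 10.0.0.2:/vol1 /mnt/shared
```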

Q49 Medium

When configuring a Cloud Run service, what is the significance of setting the concurrency value?

  • A It controls the number of regions where the service is deployed
  • B It defines the timeout duration for request processing
  • C It determines the maximum number of cloud functions that can run in parallel
  • D It limits the number of requests a single container instance can handle simultaneously before spawning new instances ✓ Correct
Explanation

Concurrency in Cloud Run specifies how many requests a single container instance processes simultaneously; exceeding this triggers the creation of additional instances to maintain performance.
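
Concurrency is set per revision at deploy time; service name, image, and limits below are placeholders:

```shell
# Deploy with an explicit per-instance concurrency limit.
# With --concurrency=80, an instance serves up to 80 requests at once
# before Cloud Run scales out additional instances.
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-image \
  --concurrency=80 \
  --max-instances=10 \
  --region=us-central1
```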

Q50 Hard

You are implementing disaster recovery for a critical application with a Recovery Time Objective (RTO) of 1 hour. Which backup and recovery strategy best meets this requirement while optimizing costs?

  • A Daily backups stored in Cloud Storage with tested restoration procedures ✓ Correct
  • B Continuous cross-region replication with failover in under 15 minutes
  • C Snapshots taken weekly and stored in multi-region Cloud Storage
  • D Real-time replication to a standby environment with automatic failover mechanisms
Explanation

Daily backups with tested restoration procedures are the most cost-effective option that can still meet a 1-hour RTO, provided restoration reliably completes within the hour. Note that the RTO concerns how quickly service is restored; backup frequency instead governs the Recovery Point Objective (RPO, the potential data loss). The continuous-replication options would exceed the stated requirement at significantly higher cost.

Q51 Medium

What is the primary purpose of Cloud Armor in Google Cloud?

  • A To enforce network segmentation and firewall rules at the instance level
  • B To manage cryptographic keys and encryption policies
  • C To encrypt data in transit between Google Cloud regions
  • D To provide DDoS protection and WAF capabilities for HTTP(S) load balancers ✓ Correct
Explanation

Cloud Armor is a Web Application Firewall that protects HTTP(S) load balancers from DDoS attacks and common web exploits using rules and policies.
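
As an illustration, a policy is created, given rules, and attached to a load balancer's backend service (names and the sample IP range are placeholders):

```shell
# Create a security policy and a rule blocking a sample IP range.
gcloud compute security-policies create my-policy \
  --description="Basic WAF policy"

gcloud compute security-policies rules create 1000 \
  --security-policy=my-policy \
  --src-ip-ranges=203.0.113.0/24 \
  --action=deny-403

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update my-backend-service \
  --security-policy=my-policy --global
```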

Q52 Medium

You need to analyze large datasets stored in BigQuery for business intelligence. When should you use BigQuery ML instead of training models externally?

  • A When you require real-time predictions with sub-millisecond latency for individual requests
  • B When you want to build models directly on data in BigQuery without moving data and use SQL for model development ✓ Correct
  • C When you need full control over training hyperparameters and custom loss functions only
  • D When the dataset is smaller than 1 GB and doesn't require distributed processing
Explanation

BigQuery ML allows you to create and train models using SQL queries on BigQuery data, eliminating data movement and simplifying the ML workflow for analysts and data engineers.
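
A sketch of training a model entirely in SQL via the `bq` CLI; the dataset, model, table, and label column names are hypothetical:

```shell
# Train a logistic regression model directly on BigQuery data with SQL.
bq query --use_legacy_sql=false '
CREATE OR REPLACE MODEL mydataset.churn_model
OPTIONS (model_type = "logistic_reg", input_label_cols = ["churned"]) AS
SELECT * FROM mydataset.customer_features'
```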

Q53 Hard

How does Google Cloud's commitment to a multi-cloud strategy impact the tools available to Cloud Engineers?

  • A It provides Anthos for managing applications across Google Cloud and other cloud providers with consistent tooling ✓ Correct
  • B It requires organizations to use only Google Cloud services for compatibility
  • C It eliminates the need for hybrid cloud solutions altogether
  • D It restricts Google Cloud tools to only work within Google Cloud environments
Explanation

Anthos is Google Cloud's platform for running and managing applications across Google Cloud, on-premises, and other cloud providers, providing consistent deployment and management.

Q54 Medium

You are deploying a multi-tier application on Google Cloud. You need to ensure that your Compute Engine instances in a private subnet can access external APIs without exposing them to the internet. What should you implement?

  • A Cloud NAT to allow outbound internet access from private instances ✓ Correct
  • B VPN tunnels to route all traffic through your on-premises network
  • C Cloud Interconnect to establish dedicated network connectivity
  • D A public IP address on each instance for direct internet access
Explanation

Cloud NAT provides outbound internet access for resources with only internal IPs, allowing private instances to reach external APIs securely without public IPs.
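
Cloud NAT is configured on a Cloud Router in the VPC; the names and region below are placeholders:

```shell
# Create a Cloud Router, then a NAT gateway on it, so instances with only
# internal IPs get outbound internet access.
gcloud compute routers create nat-router \
  --network=my-vpc --region=us-central1

gcloud compute routers nats create nat-gateway \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```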

Q55 Hard

Your organization uses Cloud Storage buckets across multiple projects. You need to enforce consistent lifecycle policies across all buckets without manually configuring each one. Which approach is most efficient?

  • A Use Cloud Deployment Manager to create buckets with predefined lifecycle policies
  • B Apply Organization Policy constraints to enforce lifecycle management at the organization level
  • C Use Terraform modules with consistent lifecycle policy configurations ✓ Correct
  • D Write a Cloud Function that applies policies to existing buckets on a schedule
Explanation

Terraform modules allow you to define reusable infrastructure code with consistent configurations, making it the most efficient way to deploy and maintain uniform policies across multiple buckets.

Q56 Medium

You have a GKE cluster where some pods are experiencing high latency when communicating with Cloud SQL. What is the most likely cause and the recommended solution?

  • A The pods are not using Cloud SQL Proxy; configure pods to use the Cloud SQL Proxy sidecar ✓ Correct
  • B Insufficient memory is allocated to pods; increase pod resource requests and limits
  • C The pods lack proper network policies; create NetworkPolicy resources to isolate traffic
  • D The Cloud SQL instance is in a different region; migrate the instance to the same region as the GKE cluster
Explanation

The Cloud SQL Auth Proxy sidecar provides secure, IAM-authenticated connections through a local tunnel, handling encryption and authorization once rather than on every connection, which typically improves connection latency and reliability compared with unproxied direct connections.
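
In GKE the proxy binary typically runs as a sidecar container in the same pod; its invocation looks roughly like this (the instance connection name is a placeholder):

```shell
# Run the Cloud SQL Auth Proxy (v2); the application then connects to
# localhost:3306 instead of the database's IP.
./cloud-sql-proxy --port 3306 my-project:us-central1:my-instance
```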

Q57 Hard

Your company requires that all data stored in Cloud Storage must be encrypted with customer-managed encryption keys (CMEK). You have already created keys in Cloud KMS. What is the next step to enforce this requirement?

  • A Enable default CMEK at the bucket level when creating new buckets and use gsutil to re-encrypt existing buckets
  • B Create a Cloud Function to monitor and re-encrypt any unencrypted objects automatically
  • C Use Organization Policy to require CMEK for all Cloud Storage operations across the organization ✓ Correct
  • D Configure IAM bindings to grant the Cloud Storage service account access to your Cloud KMS keys
Explanation

Organization Policy constraints enforce CMEK usage organization-wide, ensuring compliance across all Cloud Storage buckets without manual configuration on each bucket.
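
One way to express this, assuming the `gcp.restrictNonCmekServices` list constraint and a placeholder organization ID, is to deny non-CMEK usage for the Cloud Storage service at the organization level:

```shell
# Require CMEK for Cloud Storage across the organization by adding
# storage.googleapis.com to the restrictNonCmekServices deny list.
gcloud resource-manager org-policies deny \
  constraints/gcp.restrictNonCmekServices storage.googleapis.com \
  --organization=123456789012
```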

Q58 Easy

You are configuring monitoring and alerting for a production application on Google Cloud. You need to receive notifications when CPU utilization exceeds 80% on any Compute Engine instance. What should you use?

  • A Cloud Monitoring alert policies with a metric threshold condition and notification channels ✓ Correct
  • B Cloud Logging filters to detect high CPU and send email notifications
  • C Cloud Trace to identify performance bottlenecks and trigger alerts based on trace data
  • D Cloud Monitoring to create an uptime check that verifies CPU metrics
Explanation

Cloud Monitoring alert policies allow you to define threshold-based conditions on metrics like CPU utilization and route notifications through various channels like email.
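
A sketch of such a policy as a file-based definition (field values and file name are illustrative; notification channels are attached separately, and the policies commands live under the alpha/beta command groups):

```shell
# Define a threshold condition: CPU utilization above 0.8 for 5 minutes.
cat > cpu-policy.json <<'EOF'
{
  "displayName": "High CPU",
  "combiner": "OR",
  "conditions": [{
    "displayName": "CPU > 80%",
    "conditionThreshold": {
      "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 0.8,
      "duration": "300s"
    }
  }]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=cpu-policy.json
```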

Q59 Hard

Your application requires low-latency access to frequently changing configuration data that must be consistent across regions. Which Google Cloud solution best meets these requirements?

  • A Cloud Memorystore for Redis as a distributed cache layer
  • B Cloud Datastore with geo-replication enabled
  • C Cloud Firestore in multi-region mode with eventual consistency
  • D Cloud Firestore in multi-region mode with strong consistency ✓ Correct
Explanation

Cloud Firestore in multi-region mode provides strong consistency for configuration data while offering low-latency reads across geographic regions.
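
For instance, a database created in a multi-region location (here `nam5`, the US multi-region) serves strongly consistent reads by default:

```shell
# Create a Firestore database in a multi-region location.
gcloud firestore databases create --location=nam5
```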

Q60 Medium

You need to migrate a legacy on-premises MySQL database to Cloud SQL while minimizing downtime. What tool should you use for the initial data transfer?

  • A Cloud Storage Transfer Service to copy database files directly
  • B Cloud Dataflow to transform and load data into Cloud SQL
  • C Manual export using mysqldump and import into Cloud SQL
  • D Database Migration Service (DMS) to perform the migration with minimal downtime ✓ Correct
Explanation

Database Migration Service automates the migration process, handles continuous replication, and minimizes downtime during cutover for MySQL to Cloud SQL migrations.

Q61 Medium

You have deployed a containerized application on Cloud Run that calls other Google Cloud APIs. The application is failing with permission errors when accessing Cloud Storage. What is the most likely cause?

  • A The application must use service account keys instead of the default credentials
  • B The Cloud Run service account lacks the necessary IAM roles to access Cloud Storage ✓ Correct
  • C Cloud Run does not support calling other Google Cloud APIs from containers
  • D Cloud Storage buckets require public access to be called from Cloud Run services
Explanation

Cloud Run uses a service account to call other Google Cloud APIs; the service account must have the appropriate IAM roles (like Storage Object Viewer) granted.
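
The fix is an IAM binding on the project or bucket; the project ID and service account below are placeholders:

```shell
# Grant the Cloud Run service account read access to Cloud Storage objects.
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-run-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer
```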

Q62 Hard

Your organization wants to implement a hub-and-spoke network topology connecting multiple projects across Google Cloud regions. Which networking solution should you implement?

  • A Cloud VPN with custom routes configured between each project and the hub
  • B Shared VPC with multiple shared subnets in each region for centralized management ✓ Correct
  • C Cloud Interconnect with multiple cross-connect locations for dedicated connectivity
  • D VPC Service Controls to isolate network traffic between projects
Explanation

Shared VPC enables centralized network management: a host (hub) project shares its subnets with attached service (spoke) projects, so resources across projects and regions use one centrally administered network.
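
Setup involves designating the host project and attaching each spoke; the project IDs below are placeholders:

```shell
# Enable Shared VPC on the host (hub) project.
gcloud compute shared-vpc enable host-project-id

# Attach a service (spoke) project to the host.
gcloud compute shared-vpc associated-projects add spoke-project-id \
  --host-project=host-project-id
```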

Q63 Easy

You are optimizing costs for a batch processing job that runs nightly on Compute Engine. The job is not latency-sensitive and can tolerate interruptions. What approach would reduce costs most significantly?

  • A Migrate the workload to Cloud Dataflow for automatic scaling and cost optimization
  • B Switch to Preemptible VMs and add retry logic to handle interruptions gracefully ✓ Correct
  • C Implement custom autoscaling based on job queue depth to match capacity with demand more precisely
  • D Use sustained use discounts by committing to a 1-year contract for the instances
Explanation

Preemptible VMs cost up to 80% less than standard instances, making them ideal for fault-tolerant, non-latency-sensitive workloads like batch processing with proper retry mechanisms.
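
As a sketch, preemptibility is a single flag at instance creation (names, zone, and machine type are placeholders; on newer gcloud versions, Spot VMs via `--provisioning-model=SPOT` supersede preemptible instances):

```shell
# Create a preemptible instance for the nightly batch job.
gcloud compute instances create batch-worker \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --preemptible
```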

Ready to test your knowledge?

You've reviewed all 63 questions. Take the interactive practice exam to simulate the real test environment.

▶ Start Practice Exam — Free