Exam: GCP: Professional Cloud Architect

Total Questions: 283

You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to Cloud Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do?

A. Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API.
B. Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data Studio.
C. Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send notifications using Cloud Tasks.
D. Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based metrics from the logs, and correlate them with the Prometheus data.
Answer: D
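With answer D, the Prometheus metrics ingested by the sidecar appear in Cloud Monitoring as external metrics, and log-based metrics extract signals from Cloud Logging so both can be charted and alerted on together. A minimal sketch of creating a log-based metric with gcloud; the metric name and log filter are hypothetical placeholders:

# Count error entries logged by the new GKE application (hypothetical filter).
gcloud logging metrics create app_error_count \
  --description="Errors logged by the new GKE application" \
  --log-filter='resource.type="k8s_container" AND severity>=ERROR'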

You are evaluating developer tools to help drive Google Kubernetes Engine adoption and integration with your development environment, which includes VS Code and IntelliJ. What should you do?

A. Use Cloud Code to develop applications.
B. Use the Cloud Shell integrated Code Editor to edit code and configuration files.
C. Use a Cloud Notebook instance to ingest and process data and deploy models.
D. Use Cloud Shell to manage your infrastructure and applications from the command line.
Answer: A
✅ Explanation:
Cloud Code is a set of IDE plugins from Google Cloud that support Visual Studio Code and IntelliJ. It is specifically designed to enhance the developer experience for Kubernetes and Google Kubernetes Engine (GKE) by enabling:
• Easy configuration of Kubernetes applications.
• Built-in support for Skaffold for CI/CD workflows.
• Integration with GKE and Cloud Run.
• Simplified deployment and debugging of applications directly from your IDE.
This makes Cloud Code the ideal tool when your goal is to drive GKE adoption and integrate Kubernetes development with VS Code or IntelliJ.
Other options:
B. The Cloud Shell integrated Code Editor is useful but limited compared to full IDEs like VS Code or IntelliJ.
C. Cloud Notebooks are intended for data science and machine learning, not GKE development.
D. Cloud Shell is good for CLI management, but not for application development or IDE integration.

You are deploying your applications on Compute Engine. One of your Compute Engine instances failed to launch. What should you do? (Choose two.)

A. Determine whether your file system is corrupted.
B. Access Compute Engine as a different SSH user.
C. Troubleshoot firewall rules or routes on an instance.
D. Check whether your instance boot disk is completely full.
E. Check whether network traffic to or from your instance is being dropped.
Answer: AD
✅ Explanation:
✅ A. Determine whether your file system is corrupted. If the file system is corrupted, the instance may fail to boot properly. Reviewing the serial port output or using a recovery instance to inspect the disk can help identify and fix file system issues.
✅ D. Check whether your instance boot disk is completely full. A full boot disk can prevent the operating system from functioning correctly during startup, causing the instance to fail to launch.
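A minimal first diagnostic step for both checks is to read the instance's boot logs; the instance name and zone below are hypothetical placeholders:

# Look for file-system corruption or disk-full messages during boot.
gcloud compute instances get-serial-port-output my-instance \
  --zone=us-central1-a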

You are designing an application that uses a microservices architecture. You are planning to deploy the application in the cloud and on-premises. You want to make sure the application can scale up on demand and also use managed services as much as possible. What should you do?

A. Deploy open source Istio in a multi-cluster deployment on multiple Google Kubernetes Engine (GKE) clusters managed by Anthos.
B. Create a GKE cluster in each environment with Anthos, and use Cloud Run for Anthos to deploy your application to each cluster.
C. Install a GKE cluster in each environment with Anthos, and use Cloud Build to create a Deployment for your application in each cluster.
D. Create a GKE cluster in the cloud and install open-source Kubernetes on-premises. Use an external load balancer service to distribute traffic across the two environments.
Answer: B
✅ Explanation:
You are using a microservices architecture, require a hybrid deployment (cloud and on-premises), want to scale on demand, and prefer managed services.
Anthos is designed specifically for hybrid and multi-cloud Kubernetes deployments. It allows you to manage GKE clusters across cloud and on-premises environments with a unified control plane.
Cloud Run for Anthos lets you deploy containerized applications with serverless capabilities on GKE clusters, both in the cloud and on-premises, giving you the scalability and simplicity of serverless while still running on your own infrastructure.
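A minimal sketch of such a deployment with gcloud; the service, image, and cluster names are hypothetical placeholders, and the --platform=gke flags assume Cloud Run for Anthos is enabled on the cluster:

# Deploy the container to an Anthos-managed GKE cluster (hypothetical names).
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-service:latest \
  --platform=gke \
  --cluster=prod-cluster \
  --cluster-location=us-central1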

You are developing an application that will handle requests from end users. You need to secure a Cloud Function called by the application to allow authorized end users to authenticate to the function via the application while restricting access to unauthorized users. You will integrate Google Sign-In as part of the solution and want to follow Google-recommended best practices. What should you do?

A. Deploy from a source code repository and grant users the roles/cloudfunctions.viewer role.
B. Deploy from a source code repository and grant users the roles/cloudfunctions.invoker role.
C. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.admin role.
D. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.developer role.
Answer: B
✅ Explanation:
To secure a Cloud Function that handles requests from end users, and to restrict access to authorized users only:
• Integrate Google Sign-In in your front-end application so users authenticate with their Google accounts.
• On successful sign-in, the app obtains a Google-signed ID token for the user.
• The app then makes an HTTP request to your Cloud Function and passes the ID token in the Authorization header (Authorization: Bearer <ID_TOKEN>).
• The Cloud Function verifies the token and ensures it matches a trusted identity.
To allow authenticated users to invoke the function, assign the appropriate IAM role: roles/cloudfunctions.invoker allows users or identities to invoke (call) the function, which is exactly what is needed for end-user access.
Deploying from a source repository (such as GitHub or Cloud Source Repositories) is a Google best practice for managing and automating deployments.
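A minimal sketch of the IAM grant and an authenticated call; the function name, region, and user email are hypothetical placeholders:

# Allow a signed-in end user to invoke the function (hypothetical names).
gcloud functions add-iam-policy-binding my-function \
  --region=us-central1 \
  --member="user:jane@example.com" \
  --role="roles/cloudfunctions.invoker"

# The application passes the Google-signed ID token obtained at sign-in.
curl -H "Authorization: Bearer ${ID_TOKEN}" \
  https://us-central1-my-project.cloudfunctions.net/my-function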

Case study -

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -

HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.

Executive Statement -

We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10,000 miles away from each other.

Solution Concept -

HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data, and that they analyze and respond to any issues that occur.

Existing Technical Environment -

HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:

• Existing APIs run on Compute Engine virtual machine instances hosted in GCP.

• State is stored in a single instance MySQL database in GCP.

• Release cycles include development freezes to allow for QA testing.

• The application has no logging.

• Applications are manually deployed by infrastructure engineers during periods of slow traffic on weekday evenings.

• There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -

HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:

• Expand availability of the application to new regions.

• Support 10x as many concurrent users.

• Ensure a consistent experience for users when they travel to different regions.

• Obtain user activity metrics to better understand how to monetize their product.

• Ensure compliance with regulations in the new regions (for example, GDPR).

• Reduce infrastructure management time and cost.

• Adopt the Google-recommended practices for cloud computing.

○ Develop standardized workflows and processes around application lifecycle management.

○ Define service level indicators (SLIs) and service level objectives (SLOs).

Technical Requirements -

• Provide secure communications between the on-premises data center and cloud-hosted applications and infrastructure.

• The application must provide usage metrics and monitoring.

• APIs require authentication and authorization.

• Implement faster and more accurate validation of new features.

• Logging and performance metrics must provide actionable information to be able to provide debugging information and alerts.

• Must scale to meet user demand.

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

A. Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.
B. Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.
C. Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account and authenticate using the Cloud SQL Proxy.
D. Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
Answer: D
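With answer D, the credentials move from plain text on disk into Secret Manager, and the Compute Engine service account reads them at runtime. A minimal sketch with gcloud; the secret name and service account address are hypothetical placeholders:

# Store the credentials as a secret (hypothetical name and value).
echo -n "db-password" | gcloud secrets create db-credentials --data-file=-

# Grant the instance's service account read access to secret payloads.
gcloud secrets add-iam-policy-binding db-credentials \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# At runtime, the application fetches the credentials instead of reading a file.
gcloud secrets versions access latest --secret=db-credentials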

You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do?

A. Add a startup probe.
B. Increase the initial delay for the liveness probe.
C. Increase the CPU limit for the container.
D. Add a readiness probe.
Answer: A
✅ Explanation:
If your container is slow to start and the liveness probe fails during startup, the best practice recommended by Kubernetes (and Google Cloud) is to use a startupProbe to delay the liveness probe until the application has fully started.
• startupProbe is specifically designed for slow-starting applications.
• While the startupProbe is failing, Kubernetes does not run the livenessProbe or readinessProbe.
• Once the startupProbe succeeds, the livenessProbe and readinessProbe take over.
This prevents Kubernetes from killing your container prematurely, before it has had a chance to initialize.
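A minimal sketch of a Pod spec combining both probes, applied with kubectl; the image, port, and health path are hypothetical placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-app
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app:latest   # hypothetical image
    ports:
    - containerPort: 8080
    startupProbe:            # allows up to 30 x 10s = 300s for startup
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:           # runs only after the startup probe succeeds
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
EOF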

You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. How should you test this feature?

A. Split traffic between versions using weights.
B. Enable the new recommendation feature flag on a single instance.
C. Mirror traffic to the new version of your application.
D. Use HTTP header-based routing.
Answer: A
✅ Explanation:
A/B testing involves randomly directing a portion of user traffic to a different version of your application (or feature) and comparing outcomes, such as sales performance, between the original and the test version.
The Google Cloud HTTP(S) Load Balancer supports weight-based traffic splitting through backend services and URL maps. This allows you to:
• Control what percentage of users are routed to version A (existing) versus version B (new).
• Perform a true randomized test at the load balancer level.
• Gather statistically meaningful insights with minimal infrastructure changes.
This is the most scalable, reliable, and accurate method for A/B testing in this context.
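A minimal sketch of weight-based splitting in a URL map, imported with gcloud; the map and backend service names are hypothetical placeholders, and this assumes two backend services already exist for the two versions:

cat > url-map.yaml <<EOF
name: ecommerce-url-map
defaultRouteAction:
  weightedBackendServices:
  - backendService: projects/my-project/global/backendServices/recommender-v1
    weight: 90   # 90% of users keep the current algorithm
  - backendService: projects/my-project/global/backendServices/recommender-v2
    weight: 10   # 10% of users are randomly routed to the new algorithm
EOF
gcloud compute url-maps import ecommerce-url-map --source=url-map.yaml --global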

You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next?

A. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic.
B. Create a build trigger that runs the build file in response to a repository code being pushed to the development branch.
C. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL.
D. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
Answer: B
✅ Explanation:
Since you are using Cloud Source Repositories and already have a Cloud Build YAML file that builds and deploys your Cloud Run service with gcloud run deploy, the most efficient and automated way to deploy new code is to create a Cloud Build trigger that watches the development branch (or whichever branch you want to deploy from). The trigger automatically runs your Cloud Build YAML file every time code is pushed to that branch.
This ensures:
• Continuous deployment (CD) is enabled.
• No manual intervention is needed after code is committed.
• Your Cloud Run service is automatically updated with the new version.
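A minimal sketch of creating such a trigger with gcloud; the repository name and build config path are hypothetical placeholders:

# Run cloudbuild.yaml on every push to the development branch (hypothetical names).
gcloud builds triggers create cloud-source-repositories \
  --repo=my-app-repo \
  --branch-pattern="^development$" \
  --build-config=cloudbuild.yaml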

Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the target service in a way that is resilient. You also want to be able to run health checks on the legacy service on a separate port. How should you set up the connection? (Choose two.)

A. Use Traffic Director with a sidecar proxy to connect the application to the service.
B. Use a proxyless Traffic Director configuration to connect the application to the service.
C. Configure the legacy service's firewall to allow health checks originating from the proxy.
D. Configure the legacy service's firewall to allow health checks originating from the application.
E. Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.
Answer: AC
✅ Explanation:
You want to connect to a legacy REST service in two GKE clusters (multi-region), ensure the connection is resilient, and run health checks on a separate port. To achieve this:
✅ A. Use Traffic Director with a sidecar proxy. Traffic Director is Google Cloud's managed service mesh control plane, providing resilient, intelligent routing and load balancing across services and regions. To enable advanced features such as health checks, failover, and traffic routing, use sidecar proxies (for example, Envoy) with your application. Sidecar proxies handle traffic routing, health checking, retries, and failover automatically, providing high resilience.
✅ C. Configure the legacy service's firewall to allow health checks originating from the proxy. The sidecar proxy performs active health checks against the legacy service on the specified port, so the legacy service must allow ingress on the health check port from the proxy's IP range. This ensures Traffic Director (via the sidecar proxy) can evaluate service health before routing traffic.
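A minimal sketch of the firewall rule for option C; the rule name, network, health check port, and source range are hypothetical placeholders (use the range your sidecar proxies actually send health checks from):

# Allow the proxies to reach the legacy service's separate health check port.
gcloud compute firewall-rules create allow-proxy-health-checks \
  --network=legacy-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8081 \
  --source-ranges=10.0.0.0/8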