You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?
Answer : A
Choosing a region and zone
You choose which region or zone hosts your resources, which controls where your data is stored and used. Choosing a region and zone is important for several reasons:
Handling failures
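To make the zone-failure point concrete, the common pattern is a regional managed instance group, which spreads identical VMs across zones in us-central1 so that a single-zone outage leaves the remaining instances serving traffic. The following is a sketch with placeholder names (web-template, web-mig) and a generic image, not necessarily the graded answer verbatim:

```shell
# Create an instance template (placeholder name and image; adjust to your app).
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud

# Create a regional managed instance group: instances are distributed
# across multiple zones in us-central1, so the group survives the
# failure of any single zone.
gcloud compute instance-groups managed create web-mig \
    --region=us-central1 \
    --template=web-template \
    --size=3
```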
You have developed a containerized web application that will serve Internal colleagues during business hours. You want to ensure that no costs are incurred outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?
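Because the question asks for zero cost outside business hours, a platform that scales to zero when idle is the usual fit. As a hedged sketch (the service name and image path are placeholders, and this is not quoted from an answer key), deploying the container to Cloud Run looks like:

```shell
# Deploy the container image to Cloud Run (fully managed).
# Cloud Run scales to zero instances when there is no traffic,
# so no compute cost accrues outside business hours.
gcloud run deploy internal-web-app \
    --image=gcr.io/PROJECT_ID/internal-web-app \
    --region=us-central1 \
    --no-allow-unauthenticated
```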
You are using Container Registry to centrally store your company's container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?
Answer : A
Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account. This is not right. As mentioned above, Container Registry ignores permissions set on individual objects within the storage bucket, so this isn't going to work.
Ref:https://cloud.google.com/container-registry/docs/access-control
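The documented fix is to grant read access at the bucket level, not per object, to the service account the GKE nodes run as. A sketch, assuming the default Compute Engine service account and a registry hosted at gcr.io (PROJECT_NUMBER and REGISTRY_PROJECT are placeholders; regional hosts such as eu.gcr.io use a differently prefixed bucket name):

```shell
# Container Registry stores gcr.io images in a Cloud Storage bucket named
# artifacts.REGISTRY_PROJECT.appspot.com. Grant the GKE nodes' service
# account read access on the whole bucket; per-object ACLs are ignored.
gsutil iam ch \
    serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:roles/storage.objectViewer \
    gs://artifacts.REGISTRY_PROJECT.appspot.com
```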
You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?
Answer : A
What is Google Cloud Memorystore?
Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform. Applications running on Google Cloud Platform can achieve extreme performance by leveraging the highly scalable, highly available, and secure Redis service without the burden of managing complex Redis deployments.
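For scale, the Memorystore path described above can be sketched as follows (instance name and region are placeholders; whether a managed Redis instance or a memory-optimized VM is the cheapest fit depends on the answer key's chosen option):

```shell
# Create a 32 GB Basic tier Memorystore for Redis instance.
# Basic tier has no replica and is the cheaper option when
# high availability is not required.
gcloud redis instances create web-cache \
    --size=32 \
    --region=us-central1 \
    --tier=basic
```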
You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters: prod-cluster and dev-cluster.
* prod-cluster is a Standard cluster.
* dev-cluster is an Autopilot cluster.
When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?
A.
B.
C.
D.
Answer : C
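The option bodies are not reproduced above, but the usual sequence for inspecting a second cluster's nodes is to fetch its credentials and re-run kubectl against the new context. A sketch (the region is an assumption; Autopilot clusters are regional):

```shell
# Point kubectl at dev-cluster. Autopilot clusters are regional,
# so pass --region rather than --zone.
gcloud container clusters get-credentials dev-cluster \
    --region=us-central1

# kubectl now targets dev-cluster's context.
kubectl get nodes
```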
Your company uses a multi-cloud strategy that includes Google Cloud. You want to centralize application logs in a third-party software-as-a-service (SaaS) tool from all environments. You need to integrate logs originating from Cloud Logging, and you want to ensure the export occurs with the least amount of delay possible. What should you do?
Answer : B
Comprehensive and Detailed In-Depth Explanation:
The requirement is to export logs from Cloud Logging to a third-party SaaS tool with the least amount of delay possible. Let's analyze each option:
A. Cloud Scheduler, Cloud Function, and querying Cloud Logging: This approach introduces a delay based on the Cloud Scheduler's cron job frequency. The Cloud Function would periodically query Cloud Logging, which might not capture the logs in real time. This does not meet the "least amount of delay possible" requirement.
B. Cloud Logging sink to Pub/Sub, SaaS tool subscribing to Pub/Sub: Cloud Logging sinks can be configured to export logs in near real time as they are ingested into Cloud Logging. Pub/Sub is a messaging service designed for asynchronous and near real-time message delivery. By configuring the sink to send logs to a Pub/Sub topic, and having the SaaS tool subscribe to this topic, logs can be delivered to the SaaS tool with minimal delay. This aligns with the requirement for immediate export.
C. Cloud Logging sink to Cloud Storage, SaaS tool reading Cloud Storage: Exporting logs to Cloud Storage involves a batch-oriented approach. Logs are typically written to files periodically. The SaaS tool would then need to poll or be configured to read these files, introducing a significant delay compared to a streaming approach.
D. Cloud Logging sink to BigQuery, SaaS tool querying BigQuery: Similar to Cloud Storage, exporting to BigQuery is more suitable for analytical purposes. The SaaS tool would need to periodically query BigQuery, which introduces latency and is not the most efficient way to achieve near real-time log delivery.
Therefore, configuring a Cloud Logging sink to Pub/Sub and having the SaaS tool subscribe to the Pub/Sub topic provides the lowest latency for exporting logs.
Google Cloud Documentation Reference:
Cloud Logging Sinks Overview: https://cloud.google.com/logging/docs/export/configure_export_v2 - This document explains how to create and manage Cloud Logging sinks, including the available destinations.
Pub/Sub Overview: https://cloud.google.com/pubsub/docs/overview - This highlights Pub/Sub's capabilities for real-time message delivery and its use cases in streaming data.
Exporting Logs with Cloud Logging: https://cloud.google.com/logging/docs/export - This provides a comprehensive guide to exporting logs from Cloud Logging to various destinations, emphasizing Pub/Sub for streaming.
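As a sketch of option B (the topic name, sink name, project ID, and log filter are placeholders):

```shell
# Create the Pub/Sub topic the SaaS tool will subscribe to.
gcloud pubsub topics create log-export

# Create a Cloud Logging sink that routes matching logs to the topic
# in near real time as they are ingested.
gcloud logging sinks create saas-export \
    pubsub.googleapis.com/projects/PROJECT_ID/topics/log-export \
    --log-filter='severity>=INFO'

# The sink writes with its own service account (printed when the sink
# is created); grant that identity permission to publish to the topic.
gcloud pubsub topics add-iam-policy-binding log-export \
    --member='serviceAccount:SINK_WRITER_IDENTITY' \
    --role='roles/pubsub.publisher'
```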
You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up or down the number of Spanner nodes depending on traffic. What should you do?
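Because the traffic pattern is predictable, scaling on a schedule is the common pattern: for example, a Cloud Scheduler job that triggers a command like the following before and after the daily peak. The instance name and node counts are placeholders:

```shell
# Scale the Spanner instance up ahead of the daily peak...
gcloud spanner instances update my-instance --nodes=5

# ...and back down afterwards to minimize cost.
gcloud spanner instances update my-instance --nodes=2
```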