Your Google Cloud organization distributes administrative capabilities to each team by provisioning a Google Cloud project with the Owner role (roles/owner). The organization contains thousands of Google Cloud projects. Security Command Center Premium has surfaced multiple open_mysql_port findings. You are enforcing guardrails and need to prevent these types of common misconfigurations.
What should you do?
Answer : D
Challenge:
Prevent common misconfigurations that expose services (e.g., MySQL) to the public internet.
Hierarchical Firewall Policies:
These policies can be applied at the organization level to enforce consistent network security rules across all projects.
Solution:
Create a hierarchical firewall policy that allows connections only from internal IP ranges.
This policy ensures that services like MySQL are not exposed to 0.0.0.0/0 (the entire internet).
Steps:
Step 1: Define the hierarchical firewall policy at the organization level.
Step 2: Set the rule to allow traffic only from internal IP ranges.
Step 3: Apply the policy to all projects under the organization.
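A minimal gcloud sketch of these steps (the policy name, organization ID, and IP ranges below are placeholders, not part of the question):
# Step 1: Create a hierarchical firewall policy at the organization level.
gcloud compute firewall-policies create --short-name=restrict-mysql --organization=123456789012
# Step 2: Allow MySQL (TCP 3306) only from internal RFC 1918 ranges...
gcloud compute firewall-policies rules create 1000 --firewall-policy=restrict-mysql --organization=123456789012 --direction=INGRESS --action=allow --layer4-configs=tcp:3306 --src-ip-ranges=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
# ...and deny it from everywhere else, including 0.0.0.0/0.
gcloud compute firewall-policies rules create 2000 --firewall-policy=restrict-mysql --organization=123456789012 --direction=INGRESS --action=deny --layer4-configs=tcp:3306 --src-ip-ranges=0.0.0.0/0
# Step 3: Associate the policy with the organization so it applies to all projects.
gcloud compute firewall-policies associations create --firewall-policy=restrict-mysql --organization=123456789012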
Benefits:
Centralized management of network security.
Prevents accidental exposure of services to the public internet, enhancing security.
Hierarchical Firewall Policies
Securing MySQL on GCP
You define central security controls in your Google Cloud environment. For one of the folders in your organization, you set an organization policy to deny the assignment of external IP addresses to VMs. Two days later, you receive an alert about a new VM with an external IP address under that folder.
What could have caused this alert?
Answer : C
Understand Organization Policies:
Organization policies allow you to enforce restrictions on Google Cloud resources to adhere to your organization's security and compliance requirements.
Policies can be set at the organization, folder, or project level, with project-level policies able to override higher-level policies unless explicitly prevented.
Identify the Policy Constraint:
The specific constraint in question is likely constraints/compute.vmExternalIpAccess, which controls whether VMs can have external IP addresses.
Check for Policy Overrides:
Navigate to the Organization Policies page in the Google Cloud Console.
Check the policy settings at the project level under the affected folder to see if there is an override in place with an 'allow' value.
This override would permit the creation of VMs with external IP addresses despite the higher-level restriction.
Resolve the Policy Conflict:
If an override is found, remove or modify the project-level policy to align with the organizational policy denying external IP addresses.
Communicate with project administrators to ensure they understand and comply with the overarching security policies.
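A quick gcloud sketch for checking and removing such an override (the project ID is a placeholder):
# Show the effective policy for the constraint on the suspect project.
gcloud resource-manager org-policies describe compute.vmExternalIpAccess --project=suspect-project --effective
# If a project-level override exists, delete it so the folder-level deny applies again.
gcloud resource-manager org-policies delete compute.vmExternalIpAccess --project=suspect-project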
Organization Policy Best Practices
Managing Policy Constraints
You have just created a new log bucket to replace the _Default log bucket. You want to route all log entries that are currently routed to the _Default log bucket to this new log bucket in the most efficient manner. What should you do?
Answer : D
In Google Cloud's Logging service, log entries are automatically routed to the _Default log bucket unless configured otherwise. When you create a new log bucket and intend to redirect all log entries from the _Default bucket to this new bucket, the most efficient approach is to modify the existing _Default sink to point to the new log bucket.
Option A: Creating a new user-defined sink with filters replicated from the _Default sink is redundant and may lead to configuration complexities.
Option B: Implementing exclusion filters on the _Default sink and then creating a new sink introduces unnecessary steps and potential for misconfiguration.
Option C: Disabling the _Default sink would stop all log routing to it, but creating a new sink to replicate its functionality is inefficient.
Option D: Editing the _Default sink to change its destination to the new log bucket ensures a seamless transition of log routing without additional configurations.
Therefore, Option D is the most efficient and straightforward method to achieve the desired log routing.
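As a sketch, the change comes down to a single gcloud command (the project ID, location, and bucket name are placeholders):
# Point the _Default sink at the new log bucket.
gcloud logging sinks update _Default logging.googleapis.com/projects/my-project/locations/global/buckets/my-new-bucket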
Routing and Storage Overview
Configure Default Log Router Settings
Your company runs a website that will store PII on Google Cloud Platform. To comply with data privacy regulations, this data can be stored only for a specific amount of time and must be fully deleted once that period elapses. Data that has not yet reached the retention period must not be deleted. You want to automate the process of complying with this regulation.
What should you do?
Your organization is using Google Cloud to develop and host its applications. Following Google-recommended practices, the team has created dedicated projects for development and production. Your development team is located in Canada and Germany. The operations team works exclusively from Germany to adhere to local laws. You need to ensure that admin access to Google Cloud APIs is restricted to these countries and environments. What should you do?
Answer : C
The problem requires restricting admin access to Google Cloud APIs based on geographic location (Canada and Germany) and environment (development and production projects).
VPC Service Controls (VPC SC): VPC Service Controls is designed to create security perimeters around Google Cloud resources and services. Its primary purpose is to prevent data exfiltration and to control access to Google Cloud APIs based on the context of the request, which includes the source IP address.
Extract Reference: 'VPC Service Controls provides an extra layer of security defense for Google Cloud services that is independent of Identity and Access Management (IAM). While IAM enables granular identity-based access control, VPC Service Controls enables broader context-based perimeter security, including controlling data egress across the perimeter.' (Google Cloud Documentation: 'Overview of VPC Service Controls' - https://cloud.google.com/vpc-service-controls/docs/overview)
Service Perimeters for Environments: Creating dedicated perimeters for development and production projects allows for logical separation of environments, which aligns with the 'dedicated projects for development and production' structure.
Ingress Policies with Geographic Restrictions: VPC Service Controls uses 'ingress rules' to define who may enter a service perimeter, and from where. These ingress rules can be configured to allow access based on various attributes, including the source IP address of the request. By allowing access only from IP ranges corresponding to Canada and Germany, you effectively restrict administrative access to APIs from those countries. You can define 'access levels' (which can include IP subnets or geographic origins) and attach them to ingress policies.
Extract Reference: 'To allow ingress to resources, VPC Service Controls evaluates sources and identityType attributes as an AND condition. You must specify an accessLevel or a resource (Google Cloud project or VPC network), or set the accessLevel attribute to *.' (Google Cloud Documentation: 'Ingress and egress rules | VPC Service Controls' - https://cloud.google.com/vpc-service-controls/docs/ingress-egress-rules)
Extract Reference (for Context-Aware Access, which underpins access levels): 'You can create different types of Context-Aware Access policies for accessing apps: IP, device, geographic origin, and custom access-level attributes.' (Google Workspace Admin Help: 'Protect your business with Context-Aware Access' - https://support.google.com/a/answer/9275380) - While this references Workspace apps, the underlying mechanism of Access Context Manager (used by VPC SC) supports geographic restrictions.
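As an illustrative sketch, a geographic access level that could back such an ingress policy might be created as follows (the level name, access policy ID, and file contents are assumptions, not part of the question):
# conditions.yaml - match requests originating in Canada or Germany.
- regions:
    - CA
    - DE
# Create the access level, then reference it from the perimeter's ingress rules.
gcloud access-context-manager levels create allowed_countries --title="Canada and Germany" --basic-level-spec=conditions.yaml --policy=123456789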
Let's evaluate the other options:
A. Create dedicated firewall policies and restrict access based on geolocation: VPC firewall rules operate at the network level (Layers 3/4) within a VPC. They control traffic between VM instances or to/from the internet for network services. They do not directly control admin access to Google Cloud APIs (e.g., via the console or gcloud CLI calls) originating from outside the VPC.
B. Activate the organization policy on the folders to restrict resource location: The Resource Location Restriction organization policy constraint restricts where new resources can be created or stored (e.g., data residency requirements). It does not restrict where administrators can connect from to manage these resources or access APIs.
D. Create dedicated IAM groups and grant access: IAM (Identity and Access Management) controls who can access which resources and what actions they can perform. It does not natively provide control over where the access originates from (e.g., country-specific IP addresses).
An organization is starting to move its infrastructure from its on-premises environment to Google Cloud Platform (GCP). The first step the organization wants to take is to migrate its current data backup and disaster recovery solutions to GCP for later analysis. The organization's production environment will remain on-premises for an indefinite time. The organization wants a scalable and cost-efficient solution.
Which GCP solution should the organization use?
Answer : B
To migrate the current data backup and disaster recovery solutions to GCP while keeping the production environment on-premises, the most scalable and cost-efficient solution is using Google Cloud Storage with scheduled tasks and the gsutil command.
Setup Cloud Storage: Create a Cloud Storage bucket to store the backups.
Go to the Cloud Console and navigate to Cloud Storage.
Click 'Create bucket' and follow the prompts to configure the storage bucket.
Install gsutil: Ensure gsutil is installed on the on-premises servers.
gsutil is a command-line tool for interacting with Cloud Storage.
Follow the gsutil installation guide in the Google Cloud documentation.
Create Backup Script: Write a script to upload data to Cloud Storage using gsutil.
#!/bin/bash
# Recursively copy the local backup directory to the bucket, in parallel (-m).
gsutil -m cp -r /path/to/local/backup gs://your-bucket-name
Schedule Backup Task: Use a scheduling tool like cron on Linux to run the backup script at regular intervals.
Edit the crontab file with crontab -e and add an entry like:
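# Run the backup script every day at 2:00 AM (the schedule and script path are illustrative).
0 2 * * * /path/to/backup-script.sh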
Cloud Storage Documentation
gsutil Documentation
A patch for a vulnerability has been released, and a DevOps team needs to update their running containers in Google Kubernetes Engine (GKE).
How should the DevOps team accomplish this?
Answer : C
When a vulnerability patch is released for a running container in Google Kubernetes Engine (GKE), the recommended approach is to update the application code or apply the patch directly to the codebase. Then, a new container image should be built incorporating these changes. After building the new image, it should be deployed to replace the running containers. This method ensures that the containers run the updated, secure code.
Steps:
Update Application Code: Modify the application code or dependencies to incorporate the vulnerability patch.
Build New Image: Use a tool like Docker to build a new container image with the updated code.
Push New Image: Push the new container image to the Container Registry.
Update Deployments: Update the Kubernetes deployment to use the new image. This can be done by modifying the image tag in the deployment YAML file.
Redeploy Containers: Apply the updated deployment configuration using kubectl apply -f <deployment-file>.yaml, which will redeploy the containers with the new image.
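A minimal command-line sketch of this flow (the project, image, and deployment names are placeholders):
# Build and push the patched image.
docker build -t gcr.io/my-project/my-app:v1.0.1 .
docker push gcr.io/my-project/my-app:v1.0.1
# Point the deployment at the new image and watch the rollout.
kubectl set image deployment/my-app my-app=gcr.io/my-project/my-app:v1.0.1
kubectl rollout status deployment/my-app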
Google Cloud: Container security
Kubernetes: Updating an application