A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action.
The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code's .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project's build action.
The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1.
Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)
Answer : A, B
A) The CloudFormation template should be modified to include a parameter that indicates the location of the .zip file that contains the Lambda function's code. This allows the CloudFormation deploy action to use the correct artifact for each Region, which is critical because a Lambda function must reference its code artifact from the same Region in which it is deployed. B) A new CloudFormation deploy action for the us-east-1 Region must also be added to the pipeline. This action should be configured to use the CloudFormation template from the output artifact that was created specifically for us-east-1.
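A minimal sketch of how the template might take the artifact location as parameters so each Region's deploy action can pass in its own bucket and key (all resource and parameter names here are illustrative, not from the original question):

```yaml
Parameters:
  LambdaCodeBucket:        # regional artifact bucket, supplied per deploy action
    Type: String
  LambdaCodeKey:           # key of the packaged .zip file
    Type: String
Resources:
  AppFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt AppFunctionRole.Arn   # assumes an execution role defined elsewhere
      Code:
        S3Bucket: !Ref LambdaCodeBucket
        S3Key: !Ref LambdaCodeKey
```

The eu-west-1 and us-east-1 deploy actions would each supply parameter values pointing at the artifact staged in their own Region.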
A company that runs many workloads on AWS has seen its Amazon EBS spend increase over time. The DevOps team notices that there are many unattached
EBS volumes. Although some workloads legitimately detach volumes, any volume that has been unattached for more than 14 days is stale and no longer needed. A DevOps engineer has been tasked with creating automation that deletes EBS volumes that have been unattached for 14 days.
Which solution will accomplish this?
Answer : C
The requirement is to create automation that deletes unattached EBS volumes that have been unattached for 14 days. To do this, the DevOps engineer needs to use the following steps:
Create an Amazon CloudWatch Events rule to execute an AWS Lambda function daily. CloudWatch Events is a service that enables event-driven architectures by delivering events from various sources to targets. Lambda is a service that lets you run code without provisioning or managing servers. By creating a CloudWatch Events rule that executes a Lambda function daily, the DevOps engineer can schedule a recurring task to check and delete unattached EBS volumes.
The Lambda function should find unattached EBS volumes and tag them with the current date, and delete unattached volumes that have tags with dates that are more than 14 days old. The Lambda function can use the EC2 API to list and filter unattached EBS volumes based on their state and tags. The function can then tag each unattached volume with the current date using the create-tags command. The function can also compare the tag value with the current date and delete any unattached volume that has been tagged more than 14 days ago using the delete-volume command.
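The tagging-and-sweeping logic above can be sketched as follows. This is a hypothetical sketch, not the exam's reference implementation: the tag key `UnattachedSince` and function names are assumptions, and the `sweep_unattached_volumes` function requires boto3 with `ec2:DescribeVolumes`, `ec2:CreateTags`, and `ec2:DeleteVolume` permissions when run inside AWS.

```python
from datetime import date, timedelta

STALE_TAG = "UnattachedSince"   # assumed tag key recording first unattached sighting
MAX_AGE_DAYS = 14

def is_stale(tag_value, today, max_age_days=MAX_AGE_DAYS):
    """Return True if the ISO date stored in the tag is more than max_age_days old."""
    return date.fromisoformat(tag_value) < today - timedelta(days=max_age_days)

def sweep_unattached_volumes(today=None):
    """Tag newly seen unattached volumes; delete ones tagged over 14 days ago."""
    import boto3  # imported lazily; needs AWS credentials to actually run
    ec2 = boto3.client("ec2")
    today = today or date.today()
    # "available" status means the volume is not attached to any instance.
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for vol in page["Volumes"]:
            tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
            if STALE_TAG not in tags:
                # First sighting: record today's date on the volume.
                ec2.create_tags(
                    Resources=[vol["VolumeId"]],
                    Tags=[{"Key": STALE_TAG, "Value": today.isoformat()}],
                )
            elif is_stale(tags[STALE_TAG], today):
                # Unattached for more than 14 days: delete it.
                ec2.delete_volume(VolumeId=vol["VolumeId"])
```

A daily EventBridge (CloudWatch Events) rule would invoke `sweep_unattached_volumes` as the Lambda handler's body.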
A company has many AWS accounts. During AWS account creation, the company uses automation to create an Amazon CloudWatch Logs log group in every AWS Region that the company operates in. The automation configures new resources in the accounts to publish logs to the provisioned log groups in their Region.
The company has created a logging account to centralize the logging from all the other accounts. A DevOps engineer needs to aggregate the log groups from all the accounts to an existing Amazon S3 bucket in the logging account.
Which solution will meet these requirements in the MOST operationally efficient manner?
Answer : C
This solution will meet the requirements in the most operationally efficient manner because it uses CloudWatch Logs destinations to aggregate the log groups from all the accounts into a single S3 bucket in the logging account. Unlike option A, this solution creates a CloudWatch Logs destination in each Region, instead of a single destination for all Regions. This improves the performance and reliability of log delivery because it avoids cross-Region data transfer and latency issues. Moreover, this solution uses an Amazon Kinesis data stream and an Amazon Kinesis Data Firehose delivery stream in each Region, instead of a single stream for all Regions. This also improves the scalability and throughput of log delivery by avoiding the bottlenecks and throttling that can occur with a single stream.
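For a cross-account destination like this, the logging account must attach an access policy that lets the workload accounts create subscription filters against it. A hedged sketch of such a destination policy (all account IDs, the Region, and the destination name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWorkloadAccountsToSubscribe",
      "Effect": "Allow",
      "Principal": { "AWS": ["111111111111", "222222222222"] },
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:eu-west-1:999999999999:destination:CentralLogs"
    }
  ]
}
```

A policy like this would be attached to the destination in each Region, and each workload account would then add a subscription filter pointing its log groups at the destination in its own Region.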
A large enterprise is deploying a web application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon RDS for Oracle DB instance and Amazon DynamoDB. There are separate environments for development, testing, and production.
What is the MOST secure and flexible way to obtain password credentials during deployment?
A company has an organization in AWS Organizations. The organization includes workload accounts that contain enterprise applications. The company centrally manages users from an operations account. No users can be created in the workload accounts. The company recently added an operations team and must provide the operations team members with administrator access to each workload account.
Which combination of actions will provide this access? (Choose three.)
A DevOps team is deploying microservices for an application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The cluster uses managed node groups.
The DevOps team wants to enable auto scaling for the microservice Pods based on a specific CPU utilization percentage. The DevOps team has already installed the Kubernetes Metrics Server on the cluster.
Which solution will meet these requirements in the MOST operationally efficient way?
Answer : D
Comprehensive and Detailed Explanation From Exact Extract:
To scale microservice Pods based on CPU utilization, the Kubernetes Horizontal Pod Autoscaler (HPA) uses the Kubernetes Metrics Server to monitor resource usage and automatically adjusts the number of Pods. However, scaling Pods may require additional nodes if the current node capacity is insufficient.
The Cluster Autoscaler works with EKS managed node groups to add or remove worker nodes based on pending Pod requirements and resource usage.
By deploying both HPA and Cluster Autoscaler, the system can automatically scale Pods and add nodes as necessary, ensuring efficient resource utilization and availability.
Configuring the Cluster Autoscaler with auto-discovery allows it to manage node groups without manual intervention, reducing operational effort.
Option A only scales nodes based on node CPU utilization, not Pods.
Option B uses VPA recommender mode, which only suggests resource changes and does not scale automatically.
Option C involves manual updates and is not automated scaling.
Therefore, option D provides the most operationally efficient, fully automated scaling solution.
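A minimal sketch of an HPA manifest targeting a specific CPU utilization percentage, as described above (the Deployment name, replica bounds, and 70% target are illustrative assumptions, not values from the question):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-service-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service          # the microservice Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # target CPU utilization percentage
```

With the Metrics Server already installed, applying a manifest like this gives Pod-level scaling, while the Cluster Autoscaler (with auto-discovery of the managed node groups) handles adding nodes when scaled-out Pods cannot be scheduled.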
Reference from AWS Official Documentation:
Kubernetes Horizontal Pod Autoscaler:
'HPA automatically scales the number of Pods based on observed CPU utilization or other metrics.'
(Kubernetes HPA)
Cluster Autoscaler on Amazon EKS: 'The Cluster Autoscaler automatically adjusts the size of the Kubernetes cluster when there are Pods that fail to run due to insufficient resources or when nodes in the cluster are underutilized.' (AWS EKS Cluster Autoscaler)
A large company recently acquired a small company. The large company invited the small company to join the large company's existing organization in AWS Organizations as a new OU. A DevOps engineer determines that the small company needs to launch t3.small Amazon EC2 instance types for the company's application workloads. The small company needs to deploy the instances only within US-based AWS Regions. The DevOps engineer needs to use an SCP in the small company's new OU to ensure that the small company can launch only the required instance types. Which solution will meet these requirements?
Answer : B
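An SCP enforcing these two restrictions might look like the following sketch. This is an illustrative policy consistent with the question's requirements, not the exam's exact answer text; note that `aws:RequestedRegion` with the pattern `us-*` is a broad approximation of "US-based Regions" (it matches us-east-1, us-west-2, and similar):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonApprovedInstanceTypes",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals": { "ec2:InstanceType": "t3.small" }
      }
    },
    {
      "Sid": "DenyNonUSRegions",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "*",
      "Condition": {
        "StringNotLike": { "aws:RequestedRegion": "us-*" }
      }
    }
  ]
}
```

Attached to the small company's OU, the deny statements combine with the accounts' IAM policies so that only t3.small launches in US Regions are ever allowed.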