Linux Foundation Certified Cloud Native Platform Engineering Associate (CNPA) Exam Practice Test

Question 1

In a cloud native environment, how do policy engines facilitate a unified approach for teams to consume platform services?



Answer : D

Policy engines such as Open Policy Agent (OPA) and Kyverno play a critical role in enforcing governance, security, and compliance consistently across cloud native platforms. Option D is correct because policy engines provide centralized, reusable policies that can be applied across clusters, services, and environments. This ensures that developers consume platform services in a compliant and secure manner without needing to manage these controls manually.

Option A is partially correct but too narrow, since policies extend beyond compliance to include operational, security, and cost-control measures. Option B does not describe the primary function of policy engines, though integration with CI/CD is possible. Option C is incorrect because SLAs are business agreements, not something policy engines enforce directly.

Policy engines enforce guardrails like image signing, RBAC rules, resource quotas, and network policies automatically, reducing cognitive load for developers while giving platform teams confidence in compliance. This supports the platform engineering principle of combining self-service with governance.
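As a concrete illustration, the minimal sketch below uses OPA's standard Data API; the policy path platform/deployments/allow and the manifest fields are hypothetical. It shows the pattern of a platform component asking a centrally managed policy engine for a decision before acting, so every team consumes the same guardrails:

```python
import requests

# Hypothetical OPA endpoint and policy path; adjust to your deployment.
OPA_URL = "http://localhost:8181/v1/data/platform/deployments/allow"

def is_deployment_allowed(manifest: dict) -> bool:
    """Ask a centrally managed OPA policy whether this deployment is allowed."""
    resp = requests.post(OPA_URL, json={"input": manifest}, timeout=5)
    resp.raise_for_status()
    # OPA returns {"result": <decision>}; a missing result means the rule is undefined.
    return resp.json().get("result", False) is True

if __name__ == "__main__":
    manifest = {
        "kind": "Deployment",
        "metadata": {"labels": {"team": "payments"}},
        "spec": {"replicas": 3},
    }
    print("allowed:", is_deployment_allowed(manifest))
```

Because the decision logic lives in the policy engine rather than in each service, platform teams can change a guardrail once and have it apply uniformly everywhere.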


--- CNCF Platforms Whitepaper

--- CNCF Security TAG (OPA, Kyverno)

--- Cloud Native Platform Engineering Study Guide

Question 2

In a GitOps workflow, what is a secure and efficient method for managing secrets within a Git repository?



Answer : B

The secure and efficient way to handle secrets in a GitOps workflow is to use a dedicated secrets management tool (e.g., HashiCorp Vault, Sealed Secrets, or External Secrets Operator) and store only references or encrypted placeholders in the Git repository. Option B is correct because Git should remain the source of truth for configuration, but sensitive values should be abstracted or encrypted to maintain security.

Option A (environment variables) can supplement secret management but lacks versioning and auditability when used alone. Option C (encrypting secrets in Git) can work with tools like Mozilla SOPS, but it still requires external key management, making Option B a more complete and secure approach. Option D (plain text secrets) is highly insecure and should never be used.

By integrating secrets managers into GitOps workflows, teams achieve both security and automation, ensuring secrets are delivered securely during reconciliation without exposing sensitive data in Git.
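For example, a minimal sketch using the hvac Python client for HashiCorp Vault (the KV path and key names here are assumptions, not a prescribed layout) shows the pattern of keeping only a reference in Git and resolving the actual secret at deploy or reconciliation time:

```python
import os
import hvac  # HashiCorp Vault client for Python

# Hypothetical reference that WOULD live in Git: a pointer, never the secret itself.
git_tracked_config = {
    "db_secret_ref": "platform/apps/checkout/db",  # Vault KV v2 path
}

# Vault address and token come from the runtime environment, not from Git.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],
)

# Resolve the Git-tracked reference into the real value during reconciliation.
secret = client.secrets.kv.v2.read_secret_version(path=git_tracked_config["db_secret_ref"])
db_password = secret["data"]["data"]["password"]
```

Tools such as External Secrets Operator or Sealed Secrets apply the same idea declaratively inside the cluster, so the reconciler rather than a script performs the lookup.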


--- CNCF GitOps Principles

--- CNCF Supply Chain Security Whitepaper

--- Cloud Native Platform Engineering Study Guide

Question 3

Which approach is an effective method for securing secrets in CI/CD pipelines?



Answer : B

The most secure and scalable method for handling secrets in CI/CD pipelines is to use a secrets manager with encryption. Option B is correct because solutions like HashiCorp Vault, AWS Secrets Manager, or Kubernetes Secrets (backed by KMS) securely store, encrypt, and control access to sensitive values such as API keys, tokens, or credentials.

Option A (restricted config files) may protect secrets but lacks auditability and rotation capabilities. Option C (plain-text environment variables) exposes secrets to accidental leaks through logs or misconfigurations. Option D (base64 encoding) is insecure because base64 is an encoding, not encryption, and secrets can be trivially decoded.

Using a secrets manager ensures secure retrieval, audit trails, access policies, and secret rotation. This aligns with supply chain security and zero-trust practices, reducing risks of credential leakage in CI/CD pipelines.
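As an illustration, the short sketch below uses boto3 against AWS Secrets Manager; the secret name ci/registry-credentials is hypothetical, and the secret is assumed to be stored as JSON. A pipeline step retrieves credentials at runtime under its IAM role instead of reading them from a config file or a plain-text variable:

```python
import json
import boto3

def fetch_pipeline_secret(secret_id: str) -> dict:
    """Retrieve a secret from AWS Secrets Manager at pipeline runtime.

    Access is governed by the pipeline's IAM role and every read is
    auditable, unlike values baked into config files or job environments.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])  # assumes a JSON-formatted secret

# Hypothetical secret name; the CI job is given only the identifier, never the value.
creds = fetch_pipeline_secret("ci/registry-credentials")
```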


--- CNCF Security TAG Best Practices

--- CNCF Platforms Whitepaper

--- Cloud Native Platform Engineering Study Guide

Question 4

In a cloud native environment, which factor most critically influences the need for customized CI pipeline configurations across different application types?



Answer : B

The biggest driver for customizing CI pipeline configurations across application types is technical differences between programming languages, frameworks, and artifact formats. Option B is correct because applications written in Java, Python, Go, or Node.js require different build tools (e.g., Maven, pip, go build, npm), testing frameworks, and packaging mechanisms. These differences must be reflected in the CI pipeline to ensure successful builds, tests, and artifact generation.

Option A (priority-based pipelines) is more of an organizational practice, not a technical necessity. Option C (team sizes and expertise) may influence usability but does not drive pipeline configuration. Option D (visual distinction) relates to dashboards and observability, not pipeline functionality.

Platform engineers often provide pipeline templates or abstractions that encapsulate these differences while standardizing security and compliance checks. This balances customization with consistency, enabling developers to use pipelines suited to their technology stack without fragmenting governance.
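A simplified sketch of that idea follows; the marker files and build commands are illustrative only, and real platforms usually encode these differences in reusable pipeline templates rather than ad hoc scripts:

```python
from pathlib import Path

# Illustrative mapping from build manifest to build step, per technology stack.
BUILD_COMMANDS = {
    "pom.xml": ["mvn", "-B", "verify"],                                 # Java / Maven
    "go.mod": ["go", "build", "./..."],                                 # Go
    "package.json": ["npm", "ci"],                                      # Node.js
    "requirements.txt": ["pip", "install", "-r", "requirements.txt"],   # Python
}

def select_build_command(repo_root: str) -> list[str]:
    """Pick a build step based on which manifest is present in the repository."""
    for marker, command in BUILD_COMMANDS.items():
        if (Path(repo_root) / marker).exists():
            return command
    raise ValueError("No recognized build manifest; this repo needs a custom pipeline template")

print(select_build_command("."))
```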


--- CNCF Platforms Whitepaper

--- Continuous Delivery Foundation Guidance

--- Cloud Native Platform Engineering Study Guide

Question 5

What is the primary purpose of Kubernetes runtime security?



Answer : B

The main purpose of Kubernetes runtime security is to protect workloads during execution. Option B is correct because runtime security focuses on monitoring active Pods, containers, and processes to detect and prevent malicious activity such as privilege escalation, anomalous network connections, or unauthorized file access.

Option A (etcd encryption) addresses data at rest, not runtime. Option C (image scanning) occurs pre-deployment, not during execution. Option D (API access control) is enforced through RBAC and IAM, not runtime security.

Runtime security tools such as Falco and Cilium Tetragon continuously observe system calls, network traffic, and workload behavior to enforce policies and detect threats in real time. This ensures compliance, strengthens defenses in zero-trust environments, and provides critical protection for cloud native workloads in production.
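The toy sketch below illustrates the kind of rule such tools evaluate against live process events; it is only an analogy, since real engines like Falco express rules over kernel system calls and container metadata rather than Python dictionaries:

```python
# Toy illustration of a runtime security rule, not a real detection engine.
SHELL_BINARIES = {"bash", "sh", "zsh"}

def is_suspicious(event: dict) -> bool:
    """Flag an interactive shell spawned inside a running container."""
    return (
        event.get("container_id") is not None
        and event.get("proc_name") in SHELL_BINARIES
        and event.get("tty", False)
    )

event = {"container_id": "abc123", "proc_name": "bash", "tty": True}
if is_suspicious(event):
    print("ALERT: interactive shell spawned in a running container")
```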


--- CNCF Security TAG Guidance

--- CNCF Platforms Whitepaper

--- Cloud Native Platform Engineering Study Guide

Question 6

In assessing the effectiveness of platform engineering initiatives, which DORA metric most directly correlates to the time it takes for code from its initial commit to be deployed into production?



Answer : A

Lead Time for Changes is a DORA (DevOps Research and Assessment) metric that measures the time from code commit to successful deployment in production. Option A is correct because it directly reflects how quickly the platform enables developers to turn ideas into delivered software. Shorter lead times indicate an efficient delivery pipeline, streamlined workflows, and effective automation.

Option B (Deployment Frequency) measures how often code is deployed, not how long it takes to reach production. Option C (Mean Time to Recovery) measures operational resilience after failures. Option D (Change Failure Rate) indicates stability by measuring the percentage of deployments causing incidents. While all DORA metrics are valuable, only Lead Time for Changes measures end-to-end speed of delivery.

In platform engineering, improving lead time often involves automating CI/CD pipelines, implementing GitOps, and reducing manual approvals. It is a core measurement of developer experience and platform efficiency.
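As a quick illustration of how the metric is computed, the sketch below uses made-up commit and deployment timestamps and takes the median time from commit to production deployment:

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs pulled from Git and CD history.
changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 30)),
    (datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 3, 10, 0)),
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 12, 45)),
]

lead_times_hours = [
    (deployed - committed).total_seconds() / 3600 for committed, deployed in changes
]

# DORA reporting typically uses the median so outliers do not dominate the metric.
print(f"Median lead time for changes: {median(lead_times_hours):.1f} hours")
```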


--- CNCF Platforms Whitepaper

--- Accelerate: State of DevOps Report (DORA Metrics)

--- Cloud Native Platform Engineering Study Guide

Question 7

What is a key consideration during the setup of a Continuous Integration/Continuous Deployment (CI/CD) pipeline to ensure efficient and reliable software delivery?



Answer : B

Automated testing throughout the pipeline is a key enabler of efficient and reliable delivery. Option B is correct because incorporating unit tests, integration tests, and security scans at different pipeline stages ensures that errors are caught early, reducing the risk of faulty code reaching production. This also accelerates delivery by providing fast, consistent feedback to developers.

Option A (single environment) undermines isolation and does not reflect real-world deployment conditions. Option C (skipping packaging) prevents reproducibility and traceability of builds. Option D (manual approvals) adds delays and reintroduces human bottlenecks, which goes against DevOps and GitOps automation principles.

Automated testing, combined with immutable artifacts and GitOps-driven deployments, aligns with platform engineering's focus on automation, reliability, and developer experience. It reduces cognitive load for teams and enforces quality consistently.
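A minimal sketch of a fail-fast stage runner follows; the stage ordering and tools (pytest, Docker, Trivy) are assumptions rather than a prescribed setup, but it shows why early automated tests keep faulty code from progressing:

```python
import subprocess
import sys

# Hypothetical stage ordering: fast feedback first, heavier checks later.
STAGES = [
    ("unit tests", ["pytest", "-q", "tests/unit"]),
    ("build image", ["docker", "build", "-t", "app:ci", "."]),
    ("integration tests", ["pytest", "-q", "tests/integration"]),
    ("security scan", ["trivy", "image", "app:ci"]),
]

for name, command in STAGES:
    print(f"--> {name}")
    result = subprocess.run(command)
    if result.returncode != 0:
        # Fail fast: later stages never run against a known-bad build.
        sys.exit(f"Stage '{name}' failed; stopping the pipeline")
```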


--- CNCF Platforms Whitepaper

--- Continuous Delivery Foundation Best Practices

--- Cloud Native Platform Engineering Study Guide
