Cisco Designing and Implementing Enterprise Network Assurance 300-445 ENNA Exam Questions

Page: 1 / 14
Total 68 questions
Question 1

The network team has deployed Webex RoomOS Endpoint Agents and integrated Webex Control Hub with ThousandEyes. The VoIP team wants to know which metrics they can collect from the Webex Control Hub view. Where does the VoIP team find the network data?



Answer : B

According to the Designing and Implementing Enterprise Network Assurance (300-445 ENNA) curriculum, the integration between ThousandEyes and Webex Control Hub provides a streamlined troubleshooting experience for collaboration services. For the VoIP team to access the specific ThousandEyes network telemetry---such as latency, loss, and jitter---they must navigate to the Network Path (Option B) section within the Troubleshooting tab of the Control Hub.

The Network Path visualization is a direct result of the ThousandEyes Endpoint Agent data being pulled into the Webex interface. When a user or RoomOS device experiences poor audio or video quality during a meeting, the Control Hub's troubleshooting view displays a 'Network Path' line under the participant's details. By clicking on this line, the VoIP team can see a hop-by-hop breakdown of the entire route from the collaboration device to the Webex media node. This view highlights specific hops where performance is 'Poor' (red), 'Fair' (yellow), or 'Good' (green) based on predefined thresholds for latency (>400ms) or loss (>5%).

While 'Devices' (Option A) is where the agents are activated, and 'Users' (Option C) allows for selecting a specific participant, the actual telemetry metrics and the visualization of the network route are strictly located in the Network Path view. This integration eliminates the need for the VoIP team to leave the Webex environment for initial triage, as they can identify if a problem is local to the branch office or deep within a service provider's network directly from the 'Network Path' dashboard.
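The color-coding logic described above can be sketched as a small classifier. Only the 'Poor' thresholds (latency above 400 ms or loss above 5%) are stated in the curriculum; the 'Fair' cutoffs below are illustrative assumptions, not published values.

```javascript
// Hedged sketch: classify a Network Path hop the way the Control Hub
// view colors it. The 'Poor' thresholds come from the text above;
// the 'Fair' thresholds are assumed for illustration only.
function classifyHop(latencyMs, lossPct) {
  if (latencyMs > 400 || lossPct > 5) return 'Poor'; // red (documented)
  if (latencyMs > 200 || lossPct > 2) return 'Fair'; // yellow (assumed)
  return 'Good';                                     // green
}

console.log(classifyHop(450, 0)); // 'Poor'
```

In practice the VoIP team reads these colors directly off the hop-by-hop view rather than computing them, but the logic clarifies why a single congested hop turns red while the rest of the path stays green.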


Question 2

You're analyzing NetFlow data for a network supporting voice and video traffic. The data shows consistent spikes in delay and jitter during peak hours. Which optimization would you recommend?



Answer : C

In the Designing and Implementing Enterprise Network Assurance (300-445 ENNA) curriculum, capacity planning and optimization are driven by telemetry data such as NetFlow. When NetFlow identifies that delay and jitter---metrics highly impactful to real-time traffic---spike during peak hours, it indicates that high-priority packets are competing for resources with bulk data.

The most appropriate recommendation is to tune the existing QoS configuration (Option C). This involves adjusting the Queuing and Scheduling policies on the routers to ensure that voice and video traffic (typically marked with EF and AF41/AF42 DSCP values) is serviced before other traffic classes during periods of congestion. This solution is targeted, cost-effective, and directly addresses the observed jitter issues without the need for massive capital expenditure.

Reviewing other options:

Option A: A complete QoS redesign is often unnecessary and too invasive for solving peak-hour jitter if a basic QoS framework is already in place.

Option B: Increasing bandwidth on 'all' links is a 'brute force' approach that is expensive and fails to address the underlying problem of traffic prioritization.

Option D: Hardware replacement is a last resort and would not resolve delay/jitter if the new hardware still lacks a properly tuned QoS policy.
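As a minimal sketch of what "tuning the existing QoS configuration" might look like in Cisco IOS MQC syntax: the class names, percentages, and interface below are illustrative, not values prescribed by the curriculum.

```
! Match real-time traffic by its DSCP markings (EF for voice,
! AF41/AF42 for video), as described in the explanation above.
class-map match-any VOICE
 match dscp ef
class-map match-any VIDEO
 match dscp af41 af42
!
! Service voice from a strict-priority (low-latency) queue and
! guarantee video a share of the remaining bandwidth.
policy-map WAN-EDGE
 class VOICE
  priority percent 20
 class VIDEO
  bandwidth remaining percent 30
 class class-default
  fair-queue
!
! Apply the policy outbound on the congested WAN link.
interface GigabitEthernet0/0
 service-policy output WAN-EDGE
```

Adjusting an existing policy of this shape (for example, resizing the priority queue) is exactly the kind of targeted tuning Option C describes, as opposed to a full redesign or blanket bandwidth upgrades.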


Question 3

An engineer needs to create a test to execute a user's workflow where the user has to log in to OneDrive and download a file. The test has to implement a retry mechanism. The engineer has limited scripting experience. What are the actions that the engineer needs to take?



Answer : D

In the Designing and Implementing Enterprise Network Assurance (300-445 ENNA) curriculum, transaction monitoring is essential for validating complex, multi-step user workflows. When an engineer with limited scripting experience needs to monitor a OneDrive file download process, the ThousandEyes ecosystem provides several tools to simplify the creation of robust Transaction tests.

The correct approach is to leverage all available resources, making Option D the definitive answer. First, the ThousandEyes Recorder IDE (Option B) is a critical tool for non-scripters. It allows an engineer to perform the actual workflow---navigating to OneDrive, logging in, and initiating the download---in a browser environment on their local machine while the tool records every click and keyboard entry. The Recorder automatically generates the corresponding JavaScript code using the ThousandEyes transaction library. Second, the platform provides pre-built script templates (Option A), such as those for Office 365 and OneDrive, which include baseline logic and best practices for these specific services. Third, the official transaction-scripting-examples repository on GitHub (Option C) is a maintained source of code snippets. This is particularly useful for implementing advanced logic, such as a retry mechanism, which ensures that the test does not report a 'failure' due to a transient network hiccup but instead attempts the action again before triggering an alert.

As shown in the provided exhibit, a typical script uses import statements from @thousandeyes and selenium-webdriver to control the browser. By combining the Recorder for the basic flow, a Template for service-specific nuances, and Sample Scripts for logic enhancements like retries, the engineer can deploy a highly reliable assurance test without deep coding expertise. Therefore, all three actions are highly recommended and valid within the ENNA implementation framework.
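A retry mechanism of the kind described above can be added around any fragile step with a small wrapper. The sketch below is self-contained plain JavaScript, not code from the official examples repository; in a real transaction script the `action` argument would contain the selenium-webdriver calls (for example, clicking the OneDrive download button).

```javascript
// Hedged sketch of a retry wrapper for a transaction step. It retries
// a failed async action a fixed number of times with a pause between
// attempts, so a transient hiccup does not fail the whole test.
async function withRetry(action, attempts = 3, delayMs = 1000) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await action();   // success: return the step's result
    } catch (err) {
      lastError = err;         // transient failure: wait, then retry
      if (i < attempts) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastError;             // all attempts failed: surface the error
}
```

Usage inside a recorded script would look like `await withRetry(() => downloadStep())`, keeping the Recorder-generated flow intact while only the brittle step gains retry logic.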


Question 4

Employees and customers of a retail company are experiencing performance issues with the store website, such as slowness during the login process or failure when adding items to the cart. Which test type is the most useful for identifying the root cause of these problems?



Answer : C

In the Designing and Implementing Enterprise Network Assurance (300-445 ENNA) curriculum, selecting the appropriate test type is essential for isolating performance bottlenecks in complex web applications. When users report issues with specific multi-step workflows---such as logging into a portal or interacting with a shopping cart---the Transaction test type (Option C) is the most effective tool.

A Transaction test is a specialized Web-layer test that utilizes a script (typically written in JavaScript/TypeScript) to mimic real user interactions with a website. Unlike a simple HTTP Server test (Option A) that only checks for a 200 OK response from a single URL, or a Page Load test (Option B) that measures the rendering of a single page, a Transaction test follows a predefined user journey. For this retail scenario, the test can be configured to navigate to the homepage, enter credentials, click the login button, search for a product, and add it to the cart.

The primary advantage of this approach is that it provides granular, step-specific timing. It allows the engineer to identify precisely where the latency or failure occurs---for example, if the backend database takes too long to process the login or if an API call fails during the 'add to cart' action. While Agent-to-server (Option D) and DNS Server (Option E) tests provide valuable network and infrastructure data, they cannot validate the functional logic of a web application. Agent-to-agent (Option F) is used for measuring throughput between two managed points and is irrelevant for public-facing website testing. Therefore, for troubleshooting interactive web processes, the Transaction test type is the definitive choice for pinpointing the source of the issue.
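The "granular, step-specific timing" described above is the core idea of a Transaction test. ThousandEyes exposes it through timing markers in its transaction library; the sketch below is a generic, self-contained illustration of the same idea, not the ThousandEyes API itself.

```javascript
// Generic sketch of per-step timing: run each step of a user journey
// in order and record how long it took, so a slow 'login' or
// 'addToCart' step can be pinpointed rather than just a slow page.
async function timeStep(name, step) {
  const start = Date.now();
  await step();
  return { name, ms: Date.now() - start };
}

async function runJourney(steps) {
  const timings = [];
  for (const [name, fn] of steps) {
    timings.push(await timeStep(name, fn)); // steps run sequentially
  }
  return timings; // inspect to find the slowest step
}
```

Applied to the retail scenario, the journey would be `[['openHomepage', ...], ['login', ...], ['addToCart', ...]]`, and the timings array would show exactly which interaction carries the latency.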


Question 5

An engineer needs to create a test that requires authentication configuration to monitor an API. The test must send a POST request with client credentials parameters to get a token. The token then needs to be sent out on a GET request to be authorized to get the resource.

What must be done to meet the requirements? (Choose two)



Answer : D, E

In the Designing and Implementing Enterprise Network Assurance (300-445 ENNA) curriculum, monitoring modern APIs often requires handling complex authentication flows, such as OAuth 2.0 with specific client credential parameters. The provided exhibit illustrates the HTTP Authentication options available within a standard HTTP Server test: None, Basic, NTLM, Kerberos, and OAuth.

According to the ENNA implementation standards, while the HTTP Server test type supports OAuth (Option C), its native implementation is limited. Specifically, it is designed to use a pre-existing token or a simple token refresh flow; it does not support the injection of custom parameters in the initial POST request required to obtain a token in many enterprise client-credential scenarios. Basic and NTLM (Options A and B) are legacy protocols that rely on simple username/password headers and cannot facilitate the multi-step token exchange process described.

To fulfill the requirement of a two-step flow (POST for token, followed by GET for resource), engineers must use more flexible test types:

Transaction scripts (Option D): These allow the engineer to write custom JavaScript using the ThousandEyes transaction library to programmatically handle the POST request, parse the resulting JSON token, and then pass that token into the subsequent GET request's header.

API tests (Option E): These are purpose-built for API monitoring and natively support the definition of variables and multi-step requests where the output of one call (the token) serves as the input for the next.

By utilizing these advanced test types, the engineer can successfully navigate complex authentication requirements that the standard HTTP Server test cannot accommodate.
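The two-step flow above can be sketched with two small helpers: one building the POST body for the token request and one turning the token response into the Authorization header for the follow-up GET. The parameter names follow the standard OAuth 2.0 client-credentials grant; endpoint URLs and the HTTP transport itself are omitted so the sketch stays self-contained.

```javascript
// Step 1 (POST): build the client-credentials request body.
// grant_type, client_id, and client_secret are the standard
// OAuth 2.0 client-credentials parameters.
function buildTokenRequestBody(clientId, clientSecret) {
  return new URLSearchParams({
    grant_type: 'client_credentials',
    client_id: clientId,
    client_secret: clientSecret,
  }).toString();
}

// Step 2 (GET): parse the token response and build the header
// that authorizes the resource request.
function authHeaderFromTokenResponse(tokenJson) {
  const { access_token, token_type = 'Bearer' } = JSON.parse(tokenJson);
  return { Authorization: `${token_type} ${access_token}` };
}
```

In a Transaction script, the output of step 1's POST would be fed directly into step 2's GET headers; in an API test, the same chaining is configured declaratively with variables instead of code.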


Question 6

Refer to the exhibit.

A network engineer is tasked with configuring an alert that will trigger if the HTTP server responds with a server error. What alert conditions should be configured to meet the specified requirements?



Answer : D


In the Designing and Implementing Enterprise Network Assurance (300-445 ENNA) framework, configuring effective alert rules is critical for distinguishing between standard network noise and actionable application-layer failures. For Web - HTTP Server tests, ThousandEyes allows engineers to monitor both network-level metrics (like Connect time) and application-level indicators (like HTTP response codes).

The requirement is to trigger an alert specifically when the HTTP server responds with a server error. In the HTTP protocol, server errors are categorized as the 5XX series of status codes (e.g., 500 Internal Server Error, 503 Service Unavailable, 504 Gateway Timeout). To meet this requirement, the engineer must configure a location alert condition where the Metric is set to Response Code and the condition value is server error(5XX) (Option D).

Reviewing the other options:

Error type is any (Option A): While this would capture server errors, it would also trigger for 4XX client errors (like 404 Not Found) and network-layer timeouts, making it too broad for a specific 'server error' requirement.

Wait Time is Dynamic (Option B): This monitors the time-to-first-byte using statistical baselining. While high wait times often precede 5XX errors, this condition only alerts on latency, not on the actual error code itself.

Response Time (Option C): Similar to wait time, this monitors performance speed rather than the logical success or failure of the server's response.

By specifically selecting Response Code: server error(5XX), the engineer ensures that the operations team is only notified when the application backend is experiencing a functional failure, rather than just a slow response or a client-side misconfiguration.
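The distinction the alert condition draws can be stated in one line of code: a "server error" is any status code in the 500-599 range, and nothing else should fire the alert. This is a minimal sketch of that logic, not ThousandEyes code.

```javascript
// Minimal sketch of the 5XX alert condition: fire only for server
// errors, never for 4XX client errors or successful 2XX/3XX responses.
function isServerError(statusCode) {
  return statusCode >= 500 && statusCode <= 599;
}

console.log(isServerError(503)); // true  (Service Unavailable)
console.log(isServerError(404)); // false (client error, no alert)
```

This is exactly why Option A ('Error type is any') is too broad: it would also match the 404 case above, which the requirement explicitly excludes.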


Question 7

A CPU utilization alert for Endpoint Agents is triggering too frequently, creating alert noise. Which of the following steps would help reduce the sensitivity of the alert rule? (Select two)



Answer : A, C

Alert fatigue is a major operational challenge in network assurance. To make a CPU utilization alert for Endpoint Agents less sensitive and reduce 'noise,' an engineer must implement statistical filters that ignore transient spikes or isolated events.

The two most effective methods are:

Increase the agent count (Option A): By requiring a higher number or percentage of agents to simultaneously exceed the threshold, the system ensures the alert represents a widespread environmental issue rather than a single user running a CPU-intensive background process.

Require more rounds of data (Option C): Instead of alerting on a single measurement (1 of 1 round), the engineer can configure the rule to require the threshold to be breached for multiple consecutive checks (e.g., 2 of 3 rounds). This filters out brief, non-impactful CPU spikes that occur naturally during OS updates or browser startups.

Other options would have the opposite effect:

Option B: Lowering the percentage threshold (e.g., from 90% to 50%) would cause the alert to trigger much more frequently, increasing noise.

Option D: Enabling the alert on more agents increases the pool of potential triggers, which typically leads to more notifications unless the logic in Option A is also applied.
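The "N of M rounds" logic from Option C can be sketched as a simple sliding-window check, a self-contained illustration of the de-sensitizing behavior rather than the actual ThousandEyes implementation.

```javascript
// Sketch of 'N of M rounds' alert suppression: given the most recent
// breach results (true = threshold exceeded that round), alert only
// when at least n of the last m rounds breached. A single isolated
// spike in an otherwise clean window never triggers.
function shouldAlert(recentBreaches, n, m) {
  const window = recentBreaches.slice(-m);          // last m rounds
  const breachCount = window.filter(Boolean).length;
  return breachCount >= n;
}
```

With `n = 2, m = 3`, the brief CPU spike during a browser startup (one breached round surrounded by clean ones) is ignored, while a sustained problem that breaches two consecutive rounds still raises the alert.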

