What type of data would be protected by using Zscaler Indexed Document Matching (IDM)?
Answer : D
Zscaler Indexed Document Matching (IDM) is a DLP technique used to protect entire documents or large portions of text-based content, rather than discrete data fields. Administrators upload representative samples of "crown jewel" documents (for example, contract templates, medical forms, HR records, or tax documents). Zscaler processes and indexes the textual content, then uses this index to detect when similar or identical document content is uploaded, shared, or exfiltrated through monitored channels.
This approach is ideal for high-value, unstructured documents that contain sensitive information in a repeatable format. It is distinct from Exact Data Match (EDM), which is used for structured field-level data such as credit card numbers or national IDs, and it is not optimized for pure image content or OCR-based detection. While IDM can apply to many file types (Word, PDF, spreadsheets that contain meaningful text, etc.), the core use case is protecting documents where overall content similarity matters.
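As a rough illustration of the indexing idea, the sketch below fingerprints a document with hashed word shingles and flags outbound content whose overlap with the index crosses a threshold. This is a generic document-fingerprinting technique shown for intuition only; it is not Zscaler's actual IDM algorithm, and the sample text and threshold are invented.

```python
import hashlib

def shingle_fingerprints(text: str, k: int = 5) -> set[int]:
    """Hash every k-word window ("shingle") of a document into a fingerprint set."""
    words = text.lower().split()
    windows = [" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))]
    return {int(hashlib.sha1(w.encode()).hexdigest()[:8], 16) for w in windows}

def similarity(candidate: str, indexed: set[int]) -> float:
    """Jaccard overlap between outbound content and an indexed template."""
    fp = shingle_fingerprints(candidate)
    return len(fp & indexed) / len(fp | indexed)

# Index a "crown jewel" template once; score outbound content against it.
template = "employee name ssn date of birth diagnosis code treatment plan physician signature"
outbound = "employee name ssn date of birth diagnosis code treatment plan notes"
index = shingle_fingerprints(template)
score = similarity(outbound, index)
print(f"similarity {score:.2f}" + (" -> block upload" if score > 0.5 else " -> allow"))
```

Because matching is based on overall content overlap rather than individual field values, even a partially edited copy of an indexed document still scores high, which is exactly the behavior IDM targets.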
Therefore, the best description is that IDM protects high-value documents that tend to carry sensitive data, such as medical forms and tax documents.
===========
How does log streaming work in ZIA?
Answer : C
In ZIA, user traffic is first forwarded to a Zscaler Enforcement Node (ZEN), where security and access policies are enforced and transaction logs are generated. Those logs are then sent from the ZEN to the cloud-based Nanolog cluster, which is the highly scalable logging and storage layer used by Zscaler. Nanolog compresses and stores the logs for reporting, analytics, and long-term retention.
To deliver logs to a customer's SIEM, the Nanolog Streaming Service (NSS) is deployed in the customer environment. NSS establishes a secure, outbound tunnel to the Nanolog service in the Zscaler cloud and subscribes to that customer's log stream. Nanolog then continuously streams a copy of relevant logs over this secure connection to NSS. NSS receives the logs, converts them into the required output format (for example, syslog or CEF), and forwards them on to the configured SIEM or log receiver.
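The NSS-side role described above (receive a record from the log stream, render it in the output format, forward it to the SIEM) can be pictured as a small pipeline. The sketch below is a hypothetical illustration of that conversion-and-forwarding step; the record shape, CEF field mapping, and endpoint are assumptions, not the actual NSS implementation.

```python
import socket

def to_cef(record: dict) -> str:
    """Render a transaction log as a CEF line (illustrative field mapping)."""
    ext = f"src={record['src']} dst={record['dst']} act={record['action']}"
    return f"CEF:0|Zscaler|ZIA|1.0|{record['event_id']}|{record['name']}|5|{ext}"

def forward_to_siem(record: dict, siem_host: str = "127.0.0.1", port: int = 514) -> None:
    """Send the formatted record to the SIEM over UDP syslog."""
    msg = "<134>" + to_cef(record)  # syslog priority: facility local0, severity info
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), (siem_host, port))

# A record as it might arrive from the Nanolog stream (hypothetical shape):
forward_to_siem({"event_id": "200", "name": "Allowed", "src": "10.1.1.5",
                 "dst": "93.184.216.34", "action": "allow"})
```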
Option C is the only answer that correctly represents the logical sequence: user traffic through ZEN, ZEN to Nanolog, secure tunnel from NSS, Nanolog streaming to NSS, and finally NSS forwarding to the SIEM.
===========
A customer wants to set up an alert rule in ZDX to monitor the Wi-Fi signal on newly deployed laptops. What type of alert rule should they create?
Answer : B
Zscaler Digital Experience (ZDX) organizes its telemetry and alerting around key domains: Application, Network, and Device. Wi-Fi signal strength is a client-side characteristic of the endpoint itself, measured from the user's device, not from the network path or the application service. In the ZDX training content, Wi-Fi signal, Wi-Fi link speed, CPU, memory, and similar metrics are clearly categorized under Device health.
When creating an alert rule to monitor newly deployed laptops, the administrator should therefore choose a Device-type alert and then select Wi-Fi signal-related metrics and thresholds. This allows ZDX to trigger alerts whenever the Wi-Fi signal on those endpoints falls below an acceptable level, helping operations teams quickly identify poor local wireless conditions that degrade user experience.
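The evaluation logic behind such a rule amounts to a threshold check over device telemetry. The sketch below is purely illustrative; the metric names and data structures are assumptions, not the ZDX API.

```python
from dataclasses import dataclass

@dataclass
class DeviceSample:
    hostname: str
    wifi_signal_pct: int  # client-reported Wi-Fi signal strength, 0-100

def evaluate_device_alert(samples: list[DeviceSample], threshold: int = 40) -> list[str]:
    """Return hostnames whose Wi-Fi signal is below the alert threshold."""
    return [s.hostname for s in samples if s.wifi_signal_pct < threshold]

samples = [DeviceSample("laptop-001", 72), DeviceSample("laptop-002", 31)]
print(evaluate_device_alert(samples))  # ['laptop-002'] -> raise a Device alert
```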
Network alerts are intended for end-to-end path health (latency, packet loss, DNS resolution, gateway reachability, etc.), and Application alerts focus on performance and availability of specific apps or services. "Interface" as a standalone alert type is not how ZDX structures its top-level alert categories; interface-related metrics are surfaced as device-side attributes. Consequently, the correct classification for Wi-Fi signal monitoring in ZDX is a Device alert rule.
===========
For App Connectors, why shouldn't the customer pre-configure memory and CPU resources to accommodate a higher bandwidth capacity, like 1 Gbps or more?
Answer : D
In ZPA, App Connectors are designed to be lightweight, horizontally scalable components. Their effective throughput and concurrent-connection capacity are often constrained more by network stack limitations (such as ephemeral port exhaustion and per-process file descriptor limits) than by raw CPU or memory. As a result, simply over-provisioning vCPUs and RAM to "hit" a target like 1 Gbps on a single connector usually does not provide linear performance gains.
Zscaler design guidance emphasizes deploying multiple App Connectors and allowing ZPA to intelligently load-balance traffic across them. This delivers resiliency and scales capacity while staying within realistic limits of TCP/UDP ports and OS-level descriptors. Over-scaling a single connector can lead to diminishing returns and may even create harder-to-diagnose issues when port ranges or file descriptors are saturated.
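A back-of-the-envelope calculation shows why the port budget, not compute, becomes the ceiling. The sketch below assumes the default Linux ephemeral port range and an invented per-user flow count; the numbers are illustrative assumptions, not Zscaler-published limits.

```python
# Default Linux ephemeral port range (net.ipv4.ip_local_port_range): 32768-60999.
ephemeral_ports = 60999 - 32768 + 1   # ~28k usable source ports per connector
avg_flows_per_user = 20               # assumed concurrent flows per user
max_users_one_connector = ephemeral_ports // avg_flows_per_user

print(f"{ephemeral_ports} ephemeral ports -> roughly {max_users_one_connector} "
      "concurrent users per connector, regardless of CPU or RAM")
# Adding a second connector doubles the port (and file-descriptor) budget;
# adding more vCPUs or RAM to one connector does not.
```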
Storage is not the main factor in App Connector performance, and the platform does not recommend a "just throw more resources at it" approach. For these reasons, the correct answer is that port exhaustion and file descriptors, rather than memory or CPU, are typically the true limiting factors for App Connectors.
===========
A customer requires 2 Gbps of throughput through the GRE tunnels to Zscaler. Which is the ideal architecture?
Answer : B
Zscaler design guidance for GRE connectivity emphasizes three key principles: terminate GRE on border (edge) devices, avoid NAT on GRE source addresses, and scale bandwidth by using multiple tunnels. In Zscaler documentation and engineering training, each GRE tunnel is typically sized for up to about 1 Gbps of throughput. For a 2 Gbps requirement, customers are advised to deploy at least two primary GRE tunnels, with two additional backup tunnels for redundancy and failover.
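The tunnel count follows from simple arithmetic against the per-tunnel ceiling. The sketch below makes the sizing explicit, taking the roughly 1 Gbps figure above as its assumption.

```python
import math

def gre_tunnel_plan(required_gbps: float, per_tunnel_gbps: float = 1.0) -> dict:
    """Primary GRE tunnels needed for the target throughput, plus matching backups."""
    primary = math.ceil(required_gbps / per_tunnel_gbps)
    return {"primary_tunnels": primary, "backup_tunnels": primary}

print(gre_tunnel_plan(2.0))  # {'primary_tunnels': 2, 'backup_tunnels': 2}
```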
These tunnels should terminate on border routers that own public IP addresses, ensuring optimal routing and simplifying troubleshooting. Zscaler specifically recommends that the public source IPs used for GRE must not be translated by NAT, because the Zscaler cloud must see the original, registered public IP to associate tunnels with the correct organization and enforce policy. Enabling NAT on GRE traffic can break tunnel establishment and lead to asymmetric or unpredictable routing.
Using internal routers introduces extra hops and complexity and often requires NAT or policy-based routing, which goes against recommended best practices. Similarly, any architecture with NAT enabled on GRE traffic conflicts with Zscaler's published requirements. Therefore, the ideal and recommended design for 2 Gbps via GRE is two primary and two backup GRE tunnels from border routers with NAT disabled.
===========
What capabilities within Zscaler External Attack Surface Management (EASM) are specifically designed to uncover and assess domains that are intentionally created to resemble your legitimate brand or websites?
Answer : D
Zscaler External Attack Surface Management (EASM) includes a dedicated capability called Lookalike Domains. Zscaler defines lookalike domains as fraudulent or fake domains intentionally created by threat actors to mimic your legitimate domains and brand presence, often for phishing, credential theft, or brand abuse.
Within the EASM portal, the Lookalike Domains pages and widgets present a curated list of suspicious domains that closely resemble your seed or official domains. Analysts can review exposure scores, registrar details, hosting information, and other attributes to determine which of these domains pose the highest risk and warrant takedown or additional monitoring.
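Under the hood, lookalike detection generally relies on string-similarity techniques such as edit distance against the seed domains. The sketch below is a generic illustration of that idea, with invented candidate domains; it is not Zscaler's actual scoring logic.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution or match
        prev = cur
    return prev[-1]

seed = "zscaler.com"
candidates = ["zsca1er.com", "zscaler-support.net", "example.org"]
# Flag near-misses: a small edit distance to the seed label suggests a lookalike.
flagged = [d for d in candidates
           if 0 < edit_distance(d.split(".")[0], seed.split(".")[0]) <= 2]
print(flagged)  # ['zsca1er.com']
```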
This feature is specifically designed for external risk and brand-protection use cases: it highlights where attackers are impersonating your organization on the public internet, which is a core component of digital-risk and external-attack-surface management. While words such as "fake," "mimic," or "spoofing" may be used generically in security discussions, "Lookalike Domains" is the exact term and feature name Zscaler uses in the EASM product and documentation. Options A, B, and C do not correspond to a named EASM capability and therefore are not correct in the ZDTE context.
===========
Logging services exist in which part of the Zscaler architecture?
Answer : D
The Zscaler Digital Transformation study guides describe the Zero Trust Exchange using the conceptual model of "Brains and Engines." Engines are the inline enforcement components (ZIA Public Service Edges, ZPA Service Edges, App Connectors, etc.) that sit in the data path to forward traffic, apply policy, and perform inspection.
The "Brains" side, however, represents the cloud control and intelligence plane. Here Zscaler hosts components such as Central Authority, policy and configuration stores, analytics engines, and, critically, the Logging and Reporting infrastructure (Nanolog clusters, Log Streaming Service, and analytics dashboards). The documentation explicitly associates log collection, compression, forwarding to SIEM/SOAR platforms, and long-term analytics with this centralized cloud layer rather than the enforcement engines themselves.
Engines generate rich telemetry, but they stream it back to the brains layer, where it is normalized, indexed, retained, and made searchable for investigations, compliance, and performance analysis. OneAPI is an access interface, not the location of the logging services, and "Memory" is not a formal architectural construct in the Zscaler model. Therefore, in the official architecture view taught for the exam, logging services clearly reside in the Brains component of the platform.
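As a mental model of this split, the toy sketch below separates an inline engine that enforces policy and emits records from a central "brains" layer that ingests and indexes them for search. It is purely illustrative and implies nothing about Zscaler internals.

```python
from collections import defaultdict

class Brains:
    """Central control/logging plane: receives, indexes, and serves queries."""
    def __init__(self):
        self.index = defaultdict(list)  # user -> transaction records

    def ingest(self, record: dict) -> None:
        self.index[record["user"]].append(record)

    def search(self, user: str) -> list[dict]:
        return self.index[user]

class Engine:
    """Inline enforcement plane: applies policy, streams telemetry to the brains."""
    def __init__(self, brains: Brains):
        self.brains = brains

    def process(self, user: str, url: str) -> str:
        action = "block" if url.endswith(".exe") else "allow"  # toy policy
        self.brains.ingest({"user": user, "url": url, "action": action})
        return action

brains = Brains()
engine = Engine(brains)
engine.process("alice", "https://example.com/report.pdf")
print(brains.search("alice"))  # the record is queryable in the brains layer
```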
===========