Pure Storage Certified FlashArray Storage Professional (FlashArray-Storage-Professional) Exam Questions

Page: 1 / 14
Total 75 questions
Question 1

What are the two types of FA File quota limits?



Answer : C

In Pure Storage FlashArray File Services (Purity//FA), administrators can apply Quota Policies to managed directories to control and monitor capacity consumption. When configuring the rules for these quotas, the limits are categorized into two specific types: Enforced and Unenforced.

Enforced Quotas (Hard Limits): When a quota rule is set with the --enforced flag set to True, it acts as a hard boundary. If the users or applications writing to that managed directory hit the specified capacity limit, the FlashArray will actively block any further write operations, ensuring the directory cannot exceed its allocated space.

Unenforced Quotas (Soft Limits): When a quota rule is unenforced (the flag is set to False), it acts purely as a monitoring and alerting threshold. Users can continue to write data and organically grow the directory past the specified limit without application disruption, but the system will track the overage and trigger administrative notifications.
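The behavioral difference between the two limit types can be sketched as a small model. This is illustrative only (not the Purity quota engine or its API); the function name and event strings are invented for the example:

```python
# Illustrative model of FA File quota behavior: an enforced rule blocks
# the write at the limit, an unenforced rule lets it through but alerts.
def handle_write(used_bytes: int, write_bytes: int, limit_bytes: int,
                 enforced: bool) -> tuple[int, list[str]]:
    """Return the new usage and any events raised by the write."""
    events = []
    if used_bytes + write_bytes > limit_bytes:
        if enforced:
            # Hard limit: the write is rejected, usage does not grow.
            events.append("WRITE_BLOCKED: quota exceeded")
            return used_bytes, events
        # Soft limit: the write succeeds, but an alert is generated.
        events.append("ALERT: usage exceeded unenforced quota")
    return used_bytes + write_bytes, events
```

With a 100-byte limit and 90 bytes used, a 20-byte write is rejected under an enforced rule but completes (with an alert) under an unenforced one.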

Here is why the other options are incorrect:

File and Block (A): This describes the two underlying storage protocols/architectures the unified FlashArray serves, not the types of capacity quota limits for directories.

Limited and Unlimited (B): While you can theoretically leave a file system to grow 'unlimited' up to the size of the array, the specific technical parameters in the Purity quota policy engine are defined as enforced vs. unenforced.


Question 2

What is the best practice for configuring VMFS UNMAP for ESXi 6.7 or later?



Answer : C

What is UNMAP?: UNMAP (SCSI command 0x42) is the mechanism that allows a host (like ESXi) to inform the storage array that specific blocks of data are no longer in use (e.g., after a VM is deleted or moved). This is critical for Pure Storage because it allows the array to reclaim that space and maintain high data reduction ratios.

Evolution in ESXi: In versions prior to 6.5, UNMAP was a manual process executed via the CLI. ESXi 6.5 with VMFS-6 introduced Automatic Space Reclamation, which runs in the background, and ESXi 6.7 added configurable reclamation rates.

The Pure Storage Recommendation: Pure Storage recommends setting the reclamation priority to Auto with Low Priority.

Low Priority: This ensures that the UNMAP commands are sent to the FlashArray at a steady, manageable rate (on the order of 25 MB/s to 100 MB/s, depending on the ESXi version and configuration). Because FlashArrays are built on a high-performance metadata engine, 'Low Priority' is more than sufficient to keep up with even high-churn environments without causing any contention for active application I/O.

Why avoid High Priority (Option B)?: Setting it to high priority or using a fixed high-burst rate can lead to 'bursty' SCSI traffic. While the FlashArray can handle the load, it is considered a best practice to keep background maintenance tasks like space reclamation at a lower priority to ensure the 'Big Three' (latency, bandwidth, IOPS) for production workloads remain optimized.

Verification: You can verify that UNMAP is working by looking at the Data Reduction metrics in the Purity GUI or Pure1. If the 'Thin Provisioning' or 'Reclaimed' numbers are increasing after file deletions, the host is correctly communicating its freed space to the array.
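A quick back-of-the-envelope calculation shows why the low-priority rate is sufficient. The numbers below are assumptions for illustration (a ~25 MB/s background rate and a 500 GB reclamation backlog), not values from VMware or Pure documentation:

```python
# Time to issue UNMAPs for a given amount of dead space at a given rate.
# Illustrates why a "low priority" background rate still keeps up.
def reclaim_hours(dead_space_gb: float, rate_mb_per_s: float) -> float:
    """Hours needed to reclaim dead_space_gb at rate_mb_per_s."""
    return (dead_space_gb * 1024) / rate_mb_per_s / 3600

# Reclaiming a 500 GB backlog at ~25 MB/s takes under 6 hours --
# comfortably a background task, with no burst load on the array.
low_priority = reclaim_hours(500, 25)
```

Even a large one-time backlog drains in hours at the low-priority rate; steady-state churn in most environments is far smaller than that.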


Question 3

How would a FlashArray administrator view external latency for write requests for a specific volume?



Answer : A

The Analysis Tab: In the Pure Storage FlashArray GUI, the Analysis tab is the primary location for deep-dive performance troubleshooting and historical data visualization. While the Storage tab provides a real-time 'at-a-glance' view of a volume, the Analysis tab allows for granular filtering of specific metrics.

Granular Metric Filtering: When troubleshooting latency, it is critical to distinguish between Read and Write operations, as they interact with the Purity operating environment differently (e.g., writes hitting NVRAM vs. reads hitting the Flash modules).

External vs. Internal Latency: Pure Storage differentiates between 'Array Latency' (internal processing) and 'External Latency' (the time seen by the host). By navigating to Analysis > Performance, an administrator can drill down into the Volumes sub-tab.

Selecting the Volume and Operations: Once a specific volume is selected, the chart typically defaults to a combined view. To isolate 'external latency for write requests,' the administrator must use the legend/filters to select 'Write' while deselecting 'Read' and 'Mirrored Write' (which refers to synchronous replication traffic in ActiveCluster environments). This provides a clean graph of the round-trip write latency specifically for that volume's host I/O.

Why other options are incorrect: Option B refers to physical port health and hardware status, not volume-level performance. Option C provides basic volume metadata and real-time total latency, but lacks the granular historical filtering (selecting/deselecting specific I/O types) required for detailed performance analysis.


Question 4

Volume space has increased on a FlashArray and shared space decreased by the same amount.

What does this indicate?



Answer : C

Understanding Space Reporting: To understand this behavior, you have to look at how Purity calculates capacity. Pure Storage uses a data reduction engine where data is deduplicated and compressed.

Volume Space vs. Shared Space:

Volume Space: This represents the unique data belonging to a specific volume that is not shared with any other volume via deduplication or snapshots.

Shared Space: This represents the data that is common across multiple volumes or snapshots. If you have two volumes that are clones of each other, most of that data is 'Shared.'

The 'Shift' Mechanism: When a volume is deleted (and potentially eradicated), the data it once shared with other volumes no longer needs to be 'shared.'

Imagine Volume A and Volume B share 100GB of data. That 100GB is accounted for in Shared Space.

If you delete Volume B, that 100GB of data is now only referenced by Volume A.

Consequently, that 100GB is moved from the Shared Space bucket into Volume A's Volume Space bucket.

Net Result: The total physical space used on the array remains the same initially, but the accounting shifts. You see a decrease in Shared Space and an identical increase in the Volume Space of the remaining volumes that held those deduplication references.
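The accounting shift can be demonstrated with a toy reference-counting model. This is a simplification for illustration, not Purity's internal metadata structure: each block records which volumes reference it, a block with two or more referrers counts as Shared Space, and a block with exactly one referrer counts toward that volume's Volume Space.

```python
from collections import Counter

def account(block_refs: dict[str, set[str]]) -> tuple[Counter, int]:
    """Split blocks into per-volume space and shared space."""
    volume_space = Counter()
    shared = 0
    for block, owners in block_refs.items():
        if len(owners) == 1:
            volume_space[next(iter(owners))] += 1  # unique to one volume
        elif len(owners) > 1:
            shared += 1                            # deduplicated/shared
    return volume_space, shared

blocks = {f"b{i}": {"A", "B"} for i in range(100)}  # 100 dedup'd blocks
before = account(blocks)     # all 100 blocks sit in Shared Space
for owners in blocks.values():
    owners.discard("B")      # delete volume B
after = account(blocks)      # same 100 blocks now charged to volume A
```

Total physical usage never changes in this model; the 100 blocks simply move from the Shared bucket to Volume A's bucket, which is exactly the shift described in the question.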


Question 5

What should an administrator configure when setting up device-level access control in an NVMe/TCP network?



Answer : B

In any NVMe-based storage fabric (including NVMe/TCP, NVMe/FC, and NVMe/RoCE), the standard method for identifying endpoints and enforcing device-level access control is the NQN (NVMe Qualified Name).

The NQN serves the exact same purpose in the NVMe protocol as an IQN (iSCSI Qualified Name) does in an iSCSI environment, or a WWPN (World Wide Port Name) does in a Fibre Channel environment. It is a unique identifier assigned to both the host (initiator) and the storage array (target subsystem). When setting up access control on a Pure Storage FlashArray, the storage administrator must capture the Host NQN from the operating system and configure a Host object on the array with that specific NQN. This ensures that only the authorized host can discover, connect to, and access its provisioned NVMe namespaces (volumes).
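Conceptually, the host-object configuration reduces to an NQN allowlist check. The sketch below is a simplification (the host name and NQN are hypothetical, and this is not the FlashArray API):

```python
# Device-level access control sketch: a host object on the array stores
# the initiator's NQN; a connection is admitted only on an exact match.
host_objects = {
    # hypothetical host name -> Host NQN captured from the host OS
    "esx-01": "nqn.2014-08.org.nvmexpress:uuid:1111-aaaa",
}

def admit(initiator_nqn: str) -> bool:
    """Allow the NVMe/TCP connection only for a configured host NQN."""
    return initiator_nqn in host_objects.values()
```

A connection presenting the configured NQN is admitted and sees only its provisioned namespaces; any other initiator is refused, regardless of whether it can reach the array over the network.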

Here is why the other options are incorrect:

VLANs (A): Virtual LANs are used for network-level isolation and segmentation at Layer 2 of the OSI model. While you might use a VLAN to separate your storage traffic from your management traffic, it is a network security measure, not a device-level access control mechanism for the storage protocol itself.

LACP (C): Link Aggregation Control Protocol (LACP) is a network protocol used to bundle multiple physical network links into a single logical link for redundancy and increased bandwidth. It has nothing to do with storage access control or mapping volumes to hosts.


Question 6

A FlashArray//XL is used for NVMe-RoCE services. The array has been lightly loaded and has performed as expected. A new workload has been added to the array, which is within the array's performance envelope. The change has resulted in extreme latency and service outages for all workloads utilizing NVMe-RoCE.

Which misconfiguration is this a symptom of?



Answer : B

Requirement for Lossless Ethernet: NVMe over RoCE (RDMA over Converged Ethernet) requires a lossless fabric to function correctly. Unlike standard iSCSI which uses TCP for error recovery, RoCE assumes the network will not drop packets. If the network is 'lossy,' performance degrades significantly.

The Role of PFC: Priority Flow Control (PFC) (IEEE 802.1Qbb) is the specific mechanism used in Data Center Bridging (DCB) to provide flow control on a per-priority basis. It allows the switch to send a 'pause' frame to the sender when buffers are full, preventing packet drops.

Symptom Analysis: In the scenario provided, the array itself is not overloaded ('within the performance envelope'). However, the addition of a new workload increased traffic to the point where buffer congestion occurred. Because PFC was likely misconfigured (either on the FlashArray ports, the network switches, or the host NICs), the network dropped packets instead of pausing traffic. This leads to 'go-back-N' retransmissions and massive latency spikes that affect all workloads sharing that fabric.

Pure Storage Best Practices: Pure Storage documentation for NVMe-RoCE emphasizes that PFC must be enabled and consistent across the entire path. If there is a mismatch in PFC configuration, the resulting packet loss will cause the symptoms described: extreme latency and potential service outages.
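The failure mode can be captured in a minimal congestion model. The numbers and behavior here are illustrative assumptions, not real switch mechanics: when a burst exceeds the port buffer, a PFC-enabled port pauses the sender (traffic is delayed, never lost), while a misconfigured port silently drops the overflow, and each drop forces RoCE into expensive retransmission:

```python
# Lossless vs. lossy behavior under one congestion burst.
def offer_traffic(frames: int, buffer_slots: int, pfc_enabled: bool):
    """Return (delivered, dropped, paused) for a burst of frames."""
    if frames <= buffer_slots:
        return frames, 0, 0          # no congestion, nothing to manage
    overflow = frames - buffer_slots
    if pfc_enabled:
        # PFC PAUSE holds back the overflow; it is delivered later.
        return buffer_slots, 0, overflow
    return buffer_slots, overflow, 0  # lossy fabric: overflow is dropped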
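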


Question 7

A storage administrator is troubleshooting multipathing issues.

What is the CLI command that allows the administrator to sample the I/O balance information at a consistent interval?



Answer : C

Command Purpose: The purehost monitor command is the primary tool in the Pure Storage CLI for observing real-time performance and connectivity health from the perspective of the hosts connected to the FlashArray.

The --balance Flag: When the --balance flag is added, the output shifts from general performance (IOPS, bandwidth, latency) to showing how I/O is distributed across the available paths (controllers and ports). This is critical for identifying 'unbalanced' loads, which usually point to misconfigured MPIO (Multi-Path I/O) on the host side (e.g., a host only using one controller's ports).

Interval vs. Repeat:

The --interval flag specifies the time in seconds between each sample. In option C, --interval 15 tells the array to refresh the data every 15 seconds.

The --repeat flag (seen in option A) is used to limit the total number of samples taken before the command exits. However, in standard troubleshooting, the administrator typically wants a consistent stream of data until manually stopped (Ctrl+C).

--resample (seen in option B) is not a valid flag for the purehost monitor command in Purity.

Best Practice: When troubleshooting multipathing, Pure Storage recommends monitoring the balance to ensure that the 'Relative I/O' percentage is roughly equal across all active paths. Large discrepancies often indicate that the host's MPIO policy is set to 'Failover Only' instead of the recommended 'Round Robin' or 'Least Queue Depth.'
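The check an administrator performs on the balance output can be expressed as a small helper. The parsing is omitted and the 20-point tolerance is an assumption for illustration; the idea is simply that each path's 'Relative I/O' share should sit near 100/N percent for N paths:

```python
# Flag a host whose I/O is not spread roughly evenly across its paths,
# as seen in per-path "Relative I/O" percentages from a balance report.
def is_balanced(path_io_pct: list[float], tolerance: float = 20.0) -> bool:
    """True if every path's share is within `tolerance` points of even."""
    if not path_io_pct:
        return False
    expected = 100.0 / len(path_io_pct)
    return all(abs(p - expected) <= tolerance for p in path_io_pct)

is_balanced([26, 24, 25, 25])   # healthy round-robin distribution
is_balanced([100, 0, 0, 0])     # symptom of a failover-only MPIO policy
```

A host pinning all I/O to one of four paths fails the check immediately, which matches the 'Failover Only' misconfiguration described above.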

