Pure Storage Certified FlashArray Storage Professional (FlashArray-Storage-Professional) Exam Questions

Page: 1 / 14
Total 75 questions
Question 1

Pure Protect //DRaaS is configured with a Business Policy to back up data to AWS. An administrator, with DRaaS Global Admin access, is trying to delete the policy but is unable to do so.

What is restricting the administrator from deleting the policy?



Answer : A

In policy-driven data protection and disaster recovery architectures like Pure Protect //DRaaS, a 'Business Policy' dictates the critical Service Level Agreements (SLAs) for your environment, such as your Recovery Point Objective (RPO), replication frequency, and retention schedules. These policies are then assigned to 'Application Groups,' which act as logical containers for the specific virtual machines being protected and replicated to AWS.

As a fundamental safety mechanism built into the platform to prevent accidental loss of protection and SLA breaches, the system places a hard dependency lock on actively used policies. An administrator cannot delete a Business Policy while Application Groups still rely on it for their DR scheduling. To delete the policy, the administrator must first reassign all associated Application Groups to a different Business Policy, or remove protection from those groups entirely.
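The dependency-lock behavior described above can be sketched in a few lines. This is a minimal illustrative model only; `DRConfig`, `PolicyInUseError`, and all other names here are hypothetical and not taken from the Pure Protect //DRaaS API.

```python
# Minimal sketch of a dependency lock on Business Policy deletion.
# All class and method names are illustrative, not the DRaaS API.

class PolicyInUseError(Exception):
    """Raised when a policy still has Application Groups assigned."""

class DRConfig:
    def __init__(self):
        self.policies = {}     # policy name -> settings (e.g. RPO)
        self.assignments = {}  # application group -> policy name

    def add_policy(self, name, rpo_minutes):
        self.policies[name] = {"rpo_minutes": rpo_minutes}

    def assign_group(self, group, policy):
        self.assignments[group] = policy

    def delete_policy(self, name):
        # Hard dependency lock: refuse deletion while any group uses it.
        users = [g for g, p in self.assignments.items() if p == name]
        if users:
            raise PolicyInUseError(
                f"policy {name!r} still assigned to: {', '.join(users)}")
        del self.policies[name]

cfg = DRConfig()
cfg.add_policy("gold", rpo_minutes=15)
cfg.add_policy("silver", rpo_minutes=60)
cfg.assign_group("erp-vms", "gold")

try:
    cfg.delete_policy("gold")          # blocked: group still assigned
except PolicyInUseError as err:
    print(err)

cfg.assign_group("erp-vms", "silver")  # reassign the group first...
cfg.delete_policy("gold")              # ...then deletion succeeds
```

The key design point is that the delete path checks for live references before mutating state, which is exactly why reassigning or unprotecting the Application Groups is the prerequisite to deletion.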

Here is why the other options are incorrect:

The administrator also needs DRaaS Cloud Admin access (C): The scenario explicitly states the user already has 'DRaaS Global Admin access.' In the Pure Protect //DRaaS Role-Based Access Control (RBAC) model, Global Admin is the highest tier of privilege and has full rights to manage and delete policies. A lack of permissions is not the issue here.

The Business Policy is marked as the Primary Policy (B): While a policy might be a default or primary template, the actual hard restriction that prevents deletion in the software is active resource assignment (the Application Groups), not just a 'Primary' label.


Question 2

What is the recommended Maximum Transmission Unit (MTU) size for the replication ports on a FlashArray?



Answer : C

Pure Storage strongly recommends an MTU size of 9000 (Jumbo Frames) for replication networks (such as those used for Asynchronous Replication, ActiveCluster, and ActiveDR) as well as for iSCSI and NVMe/TCP data networks.

A 9000-byte MTU significantly reduces protocol overhead and CPU processing load on the storage controllers by allowing a much larger payload of data to be transmitted inside a single network packet. During heavy replication, this drastically increases throughput and maximizes bandwidth efficiency.
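The overhead reduction is easy to quantify. The sketch below assumes a 40-byte TCP/IPv4 header per packet and ignores Ethernet framing; the numbers are illustrative arithmetic, not measured throughput figures.

```python
# Rough payload efficiency of standard vs jumbo frames, assuming a
# 40-byte TCP/IPv4 header per packet (Ethernet framing ignored).

IP_TCP_OVERHEAD = 40  # 20-byte IPv4 header + 20-byte TCP header

def payload_efficiency(mtu):
    """Fraction of each packet that carries actual data."""
    return (mtu - IP_TCP_OVERHEAD) / mtu

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
# MTU 1500 carries ~97.3% data per packet; MTU 9000 carries ~99.6%,
# and a transfer needs roughly 6x fewer packets (and interrupts).
```

The per-packet efficiency gain looks modest, but the roughly sixfold drop in packet count is what reduces controller CPU load during heavy replication.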

Here is why the other options are incorrect:

1500 (B): While 1500 bytes is the standard default MTU for Ethernet and is exactly what Pure Storage recommends for the management ports (vir0), it is not the recommended optimization for high-throughput replication traffic. (Note: If your network cannot support 9000 end-to-end, 1500 must be used to prevent packet fragmentation, but 9000 remains the best-practice recommendation).

4200 (A): This is an arbitrary number and is not a standard network MTU size used in Pure Storage environments.


Question 3

An administrator is running commands to verify NVMe/TCP connectivity from the hosts to the FlashArray. They run the command ping -M do -s 8972 from the initiator, and it fails.

What should the administrator do to resolve the issue?



Answer : B

When configuring NVMe/TCP (or iSCSI) for optimal performance on a Pure Storage FlashArray, configuring Jumbo Frames (an MTU of 9000) end-to-end is a standard best practice.

The command ping -M do -s 8972 <ip_addr> is specifically used to verify Jumbo Frame configuration across the network.

The -M do flag sets the 'Do Not Fragment' (DF) bit, meaning the network is not allowed to break the packet into smaller pieces.

The -s 8972 flag sets the ICMP data payload to 8972 bytes. When you add the standard 8-byte ICMP header and the 20-byte IP header, the total packet size equals exactly 9000 bytes.

If this ping command fails, it indicates that somewhere along the network path between the host (initiator) and the FlashArray (target), a switch port, router, or network interface is not configured to support an MTU of 9000. The packet is being dropped because it is too large and cannot be fragmented. The administrator must verify the MTU settings on every network hop (switches, routers, and host NICs) to resolve the issue.

Here is why the other options are incorrect:

Engage support to enable NVMe/TCP services (A): The failure of a Jumbo Frame ping test is a Layer 2/Layer 3 network configuration issue, not an indicator that the NVMe/TCP storage protocol service is disabled on the array.

Run the command from the target (C): While pinging from the FlashArray back to the host is a valid secondary troubleshooting step, it will likely also fail if the network path doesn't support Jumbo Frames. The actual resolution is to fix the MTU on the network hops.


Question 4

What are the two types of FA File quota limits?



Answer : C

In Pure Storage FlashArray File Services (Purity//FA), administrators can apply Quota Policies to managed directories to control and monitor capacity consumption. When configuring the rules for these quotas, the limits are categorized into two specific types: Enforced and Unenforced.

Enforced Quotas (Hard Limits): When a quota rule is set with the --enforced flag set to True, it acts as a hard boundary. If the users or applications writing to that managed directory hit the specified capacity limit, the FlashArray will actively block any further write operations, ensuring the directory cannot exceed its allocated space.

Unenforced Quotas (Soft Limits): When a quota rule is unenforced (the flag is set to False), it acts purely as a monitoring and alerting threshold. Users can continue to write data and organically grow the directory past the specified limit without application disruption, but the system will track the overage and trigger administrative notifications.
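The difference between the two quota types can be modeled as follows. This is a toy behavioral sketch; `ManagedDirectory`, `QuotaExceededError`, and the write method are hypothetical names, not the Purity//FA quota API.

```python
# Toy model of enforced (hard) vs unenforced (soft) quota behavior.
# Class and method names are illustrative, not the Purity API.

class QuotaExceededError(Exception):
    pass

class ManagedDirectory:
    def __init__(self, limit_bytes, enforced):
        self.limit = limit_bytes
        self.enforced = enforced
        self.used = 0
        self.alerts = []

    def write(self, nbytes):
        if self.used + nbytes > self.limit:
            if self.enforced:
                # Hard limit: block the write outright.
                raise QuotaExceededError("write denied: quota exceeded")
            # Soft limit: allow the write but record an alert.
            overage = self.used + nbytes - self.limit
            self.alerts.append(f"over quota by {overage} bytes")
        self.used += nbytes

hard = ManagedDirectory(limit_bytes=100, enforced=True)
soft = ManagedDirectory(limit_bytes=100, enforced=False)

soft.write(150)              # succeeds, but triggers an alert
print(soft.used, soft.alerts)

try:
    hard.write(150)          # blocked by the hard limit
except QuotaExceededError as err:
    print(err)
```

The contrast mirrors the exam answer: the enforced rule rejects the write at the limit, while the unenforced rule lets data grow past the limit and only raises an administrative notification.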

Here is why the other options are incorrect:

File and Block (A): This describes the two underlying storage protocols/architectures the unified FlashArray serves, not the types of capacity quota limits for directories.

Limited and Unlimited (B): While you can theoretically leave a file system to grow 'unlimited' up to the size of the array, the specific technical parameters in the Purity quota policy engine are defined as enforced vs. unenforced.


Question 5

An administrator wants to upgrade an Edge Services agent and sees the Gateway Update Status in the GUI showing "Eligible (updates disallowed)".

What should the administrator do?



Answer : C

Edge Services and Gateways: Pure Storage FlashArray uses Edge Services (often associated with FA File or cloud integrations) to manage communication between the array and external services. The Gateway is the component that facilitates this secure connection.

Update Policy Control: To prevent unplanned outages or changes to the environment, Purity includes a safety toggle for Gateway updates. When the status shows 'Eligible (updates disallowed)', it means a newer version of the agent is available on the Pure Storage back-end, but the array's local policy is currently set to prevent automatic or manual 'one-click' updates.

GUI Authorization: This is a security and administrative control. An administrator with Array Admin privileges must navigate to the Edge Services/Gateway configuration section in the Purity GUI and explicitly change the setting to 'Allow Updates'. Once this toggle is enabled, the status will change to 'Eligible,' and the update can be initiated.

Why Option A is incorrect: While the CLI is used for many advanced support functions, the puresupport namespace is generally reserved for Pure Storage Support technicians and requires a challenge-response session key. Standard agent updates are handled via the administrative GUI.

Why Option B is incorrect: Removing and re-installing the agent is an unnecessary and disruptive process. The 'disallowed' status is simply a policy setting, not a corruption of the agent itself.


Question 6

Which FA File Directory Services statement is correct?



Answer : A

Interface Separation: Pure Storage FlashArrays distinguish between Management traffic (used for the GUI, CLI, and Pure1 connectivity) and Data traffic (used for host connectivity). Within the data path, File Service Interfaces are the logical interfaces assigned to handle NFS and SMB traffic.

Directory Service Communication: For FA File to function, it must communicate with external identity providers (like Microsoft Active Directory or OpenLDAP) to authenticate users and resolve permissions.

The Default Path: By design, Purity prioritizes the File Service Interfaces for these lookups. This is because the directory servers (Domain Controllers) are typically located on the same production/data network as the clients accessing the files. Using the File Service Interface ensures that authentication traffic follows the same network path and security rules as the data traffic itself.

Why Option B is incorrect: While the array management plane can use the management interfaces for its own administrative LDAP/SAML logins, the File Services component defaults to the data-path (File) interfaces to avoid 'crossing the streams' between management and production data networks.

Why Option C is incorrect: Purity is very specific about its routing tables. It does not blindly use 'all' interfaces for DNS/Directory services. If a File Service interface is configured and active, it becomes the primary egress point for file-related metadata and authentication requests.


Question 7

Volume space has increased on a FlashArray and shared space decreased by the same amount.

What does this indicate?



Answer : C

Understanding Space Reporting: To understand this behavior, you have to look at how Purity calculates capacity. Pure Storage uses a data reduction engine where data is deduplicated and compressed.

Volume Space vs. Shared Space:

Volume Space: This represents the unique data belonging to a specific volume that is not shared with any other volume via deduplication or snapshots.

Shared Space: This represents the data that is common across multiple volumes or snapshots. If you have two volumes that are clones of each other, most of that data is 'Shared.'

The 'Shift' Mechanism: When a volume is deleted (and potentially eradicated), the data it once shared with other volumes no longer needs to be 'shared.'

Imagine Volume A and Volume B share 100GB of data. That 100GB is accounted for in Shared Space.

If you delete Volume B, that 100GB of data is now only referenced by Volume A.

Consequently, that 100GB is moved from the Shared Space bucket into Volume A's Volume Space bucket.

Net Result: The total physical space used on the array remains the same initially, but the accounting shifts. You see a decrease in Shared Space and an identical increase in the Volume Space of the remaining volumes that held those deduplication references.
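The accounting shift above can be reproduced with a small reference-counting model. This is an illustrative simplification (one space unit per block, names invented), not how Purity internally tracks capacity.

```python
# Sketch of dedupe space accounting: a block referenced by more than
# one volume counts as Shared Space; a block referenced by exactly one
# volume counts as that volume's Volume Space. Illustrative model only.

from collections import defaultdict

def account_space(refs):
    """refs: dict block_id -> set of volume names referencing it.
    Returns (volume_space per volume, shared_space), 1 unit per block."""
    volume_space = defaultdict(int)
    shared = 0
    for block, owners in refs.items():
        if len(owners) > 1:
            shared += 1
        else:
            volume_space[next(iter(owners))] += 1
    return dict(volume_space), shared

# Volumes A and B share blocks 1-3; A alone owns block 4.
refs = {1: {"A", "B"}, 2: {"A", "B"}, 3: {"A", "B"}, 4: {"A"}}
print(account_space(refs))   # ({'A': 1}, 3): three blocks are Shared

# Delete volume B: blocks 1-3 become unique to A.
for owners in refs.values():
    owners.discard("B")
print(account_space(refs))   # ({'A': 4}, 0): Shared shifts to Volume
```

Note that the total (volume space plus shared space) stays at 4 units before and after the deletion, matching the observation in the question: no physical space is freed immediately, only the accounting buckets change.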

