NetApp Certified Support Engineer - ONTAP Specialist (NS0-593) Exam Questions

Page: 1 / 14
Total 60 questions
Question 1

Your customer wants to access a LUN on a FAS8300 system from a VMware ESXi server through the FC protocol. They have already created a new SVM, volume, LUN, and igroup for this purpose. The customer reports that the server's FC HBA port is online, but the LUN does not show up.

Referring to the exhibit, what is the reason for this problem?



Answer : A

To access a LUN on a FAS8300 system from a VMware ESXi server through the FC protocol, the customer must configure the FC service on the SVM that owns the LUN. The FC service enables the SVM to act as an FC target and communicate with the FC initiators on the host. Without the FC service, the LUN will not be visible to the host, even if the LUN is mapped to an igroup and the FC LIFs are up. The exhibit shows that the FC service is not configured on the SVM, as the output of the command vserver fcp initiator show -vserver SVM1 is empty. Therefore, the reason for the problem is that the FC service has not been configured on the SVM. Reference: Configure an SVM for FC; Create an FC protocol service; Single IQN iSCSI session with ESXi on ONTAP when igroup has two IQNs
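Assuming the SVM is named SVM1 as in the command above, the fix can be sketched from the ONTAP CLI (output lines are illustrative, not taken from the exhibit):

```
# Check whether the FC protocol service exists on the SVM
cluster1::> vserver fcp show -vserver SVM1
There are no entries matching your query.

# Create and start the FC service so the SVM can act as an FC target
cluster1::> vserver fcp create -vserver SVM1 -status-admin up

# Verify that the SVM's FC LIFs now present target WWPNs
cluster1::> network interface show -vserver SVM1 -data-protocol fcp
```

After the ESXi host rescans its HBAs, the mapped LUN should become visible.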


Question 2

An administrator receives the following error message:

What are two causes for this error? (Choose two.)



Answer : A, D

The error message "wafl.cp.toolong:error" indicates that a WAFL consistency point (CP) took longer than 30 seconds to complete. A CP is the process that flushes data from the NVRAM buffer to disk. A long CP can cause latency and performance issues for the system [1].

One possible cause for a long CP is excessive SSD load causing wear leveling to become unbalanced. Wear leveling is a technique that distributes write operations evenly across SSD cells to extend the lifespan of the SSD. If some SSD cells are written more frequently than others, wear leveling becomes unbalanced and SSD performance degrades [2].

Another possible cause for a long CP is an SSD disk performing garbage collection to create a dense data layout. Garbage collection is a process that reclaims the space occupied by invalid or deleted data on the SSD. Garbage collection can improve the write performance and storage efficiency of the SSD, but it can also consume CPU and disk resources and cause long CPs [3].

A disk failing or being failed is not a likely cause for a long CP, because the system will automatically mark the disk as failed and remove it from the aggregate. The system will also initiate a disk reconstruction or a RAID scrub to restore data protection and redundancy [4].

There is no evidence that the system has SATA HDDs, so there is no reason to assume excessive SATA HDD load. Moreover, SATA HDDs are usually used for secondary or backup storage, not for primary or performance-sensitive workloads [5].
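To confirm that a node is actually logging this event, the EMS log can be queried from the clustershell; a minimal sketch (the message name is taken from the error above):

```
# List recent occurrences of the long-CP event on all nodes
cluster1::> event log show -message-name wafl.cp.toolong
```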


1: Are long Consistency Points (wafl.cp.toolong) normal? - NetApp Knowledge Base

2: How to troubleshoot SSD performance issues - NetApp Knowledge Base

3: How to troubleshoot SSD garbage collection issues - NetApp Knowledge Base

4: How to troubleshoot disk failures and replacements - NetApp Knowledge Base

5: ONTAP 9 - Hardware Universe - The Open Group

Question 3

Your customer noticed in NetApp Active IQ that their NetApp Cloud Volumes ONTAP for Azure HA solution is no longer sending AutoSupport messages over HTTPS. A support ticket has been opened to find out why. No changes have been made to the Cloud Volumes ONTAP for Azure HA environment.

In this scenario, which two autosupport command parameters should be used to validate that AutoSupport is working properly? (Choose two.)



Answer : B, C

The -transport parameter specifies the protocol used to send AutoSupport messages, which should be HTTPS by default. The -proxy-url parameter specifies the proxy server used to send AutoSupport messages, which should be the Connector's IP address and port if the Cloud Volumes ONTAP nodes do not have outbound internet access. These two parameters can be used to check the AutoSupport configuration and connectivity status. Reference: Verify AutoSupport setup; Troubleshoot your AutoSupport configuration; High-availability pairs in Azure
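A hedged sketch of how these two parameters can be inspected from the ONTAP CLI (field names as in ONTAP 9; the node query is a placeholder):

```
# Show the configured transport and proxy for AutoSupport on all nodes
cluster1::> system node autosupport show -fields transport,proxy-url

# Run the built-in connectivity check against the configured destinations
cluster1::> system node autosupport check show-details -node *
```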


Question 4

A customer is calling you to troubleshoot why users are unable to connect to their CIFS SVM.

Referring to the information shown in the exhibit, what is the source of the problem?



Answer : D

The broken disk in Node03 is causing the cluster ring to be offline, which prevents the CIFS SVM from being accessible. The cluster ring is a distributed database that stores cluster configuration information and enables communication between cluster nodes. If the cluster ring is offline, the cluster cannot function properly and the CIFS SVM cannot serve data to clients. The other options are not relevant to the CIFS SVM connectivity issue. Reference: https://www.netapp.com/support-and-training/netapp-learning-services/certifications/support-engineer/
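Cluster ring health can be verified at the advanced privilege level; a minimal sketch:

```
# Cluster ring status requires advanced privilege
cluster1::> set -privilege advanced
cluster1::*> cluster ring show
# A healthy ring shows one master per unit name (mgmt, vldb, vifmgr, ...)
# and matching epoch / DB epoch values on every node
```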

https://mysupport.netapp.com/site/docs-and-kb


Question 5

When you review performance data for a NetApp ONTAP cluster node, back-to-back (B2B) type consistency points (CPs) are found occurring on the root aggregate.

In this scenario, how will performance of the client operations on the data aggregates be affected?



Answer : B

A back-to-back (B2B) consistency point (CP) occurs when a new CP is triggered before the previous CP has completed, because the second memory buffer reaches a watermark. This can cause write latency to increase, as user write operations are not acknowledged until a write buffer frees up. However, this only affects the aggregate that is undergoing the B2B processing, not the other aggregates on the same node. Therefore, the performance of the client operations on the data aggregates will not be affected by B2B processing on the root aggregate. Reference: What is the Back-to-Back (B2B) Consistency Point Scenario?; What are the different Consistency Point types and how are they measured in ONTAP 9?; What are the different Consistency Point types and how are they measured?
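CP types can be observed per node from the nodeshell sysstat output; a hedged sketch (the node name local is a placeholder):

```
# The CP ty column of sysstat -x identifies the CP type;
# "B" entries indicate back-to-back consistency points
cluster1::> system node run -node local -command "sysstat -x 1"
```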


Question 6

A customer enabled NFSv4.0 on an SVM and changed the client mount from NFSv3 to NFSv4. Afterwards, the customer found that the directory owner was changed from root to nobody.

In this scenario, which statement is true?



Answer : D

NFSv4 is a network file system protocol that supports security, performance, and scalability features. NFSv4 uses ID mapping to ensure that the permissions of files and directories are consistent across different NFSv4 servers and clients [1].

ID mapping is the process of translating the user and group identifiers (UIDs and GIDs) of the local system to the user and group names (user@domain and group@domain) of the remote system, and vice versa. ID mapping is done by the idmapd service, which uses the /etc/idmapd.conf file to determine the domain name of the system [2].

ID mapping requires that the NFSv4 server and client have the same domain name configured in the /etc/idmapd.conf file. If the domain names do not match, the idmapd service cannot map the UIDs and GIDs to the user and group names, and the permissions of the files and directories will be shown as nobody:nobody, which is the default anonymous user [3].

Therefore, if a customer enabled NFSv4.0 on an SVM and changed the client mount from NFSv3 to NFSv4, and found that the directory owner was changed from root to nobody, the most likely cause is that the ID mapping domains do not match between the client and server. The customer should check and correct the /etc/idmapd.conf file on both systems, then restart the idmapd service and remount the NFSv4 share [4].
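The two settings that must agree can be sketched as follows; the domain name example.com and the SVM name svm1 are illustrative placeholders:

```
# Client side: /etc/idmapd.conf on the NFSv4 client
[General]
Domain = example.com

# ONTAP side: the SVM's NFSv4 ID domain must match the client's
cluster1::> vserver nfs show -vserver svm1 -fields v4-id-domain
cluster1::> vserver nfs modify -vserver svm1 -v4-id-domain example.com
```

After correcting the domain on either side, restart the client's idmapd service and remount the export.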


1: ONTAP 9 - Network File System (NFS) - The Open Group

2: ONTAP 9 - NFSv4 and NFSv4.1 Enhancements - The Open Group

3: NFSv4 mount incorrectly shows all files with ownership as nobody:nobody - Red Hat Customer Portal

4: NFSv4 mountpoint shows incorrect ownerships as nobody:nobody in CentOS/RHEL - The Geek Diary

Question 7

You have a 4-node NetApp ONTAP 9.8 cluster with an AFF A400 HA pair and a FAS8300 HA pair with 16 TB NL-SAS drives. You are asked to automatically tier 150 TB of Snapshot copy data from the AFF A400 aggregates to the FAS8300.

In this scenario, which ONTAP license must be added to the cluster to accomplish this task?



Answer : D

FabricPool is an ONTAP feature that enables tiering of cold data from SSD aggregates to low-cost object storage, either on-premises or in the cloud [1]. FabricPool requires a license to be installed on the cluster, and the license type depends on the cloud tier being used [2]. In this scenario, the cloud tier is another ONTAP cluster (FAS8300), which is not supported by the new Cloud Tiering license used for most FabricPool configurations [3]. Therefore, the older FabricPool license, retained for dark sites or MetroCluster systems using FabricPool Mirror, must be used [3]. The FabricPool license defines the amount of capacity that can be tiered to the cloud tier, and it can be increased by add-on orders [4]. Reference:

1: FabricPool overview

2: FabricPool requirements

3: Install a FabricPool license

4: ONTAP FabricPool (FP) Licensing Overview
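Once the FabricPool license is installed, attaching the FAS8300-backed object store and tiering Snapshot data can be sketched as follows; the license key, the object-store name my-s3-store, the aggregate aggr1_A400, and the volume names are placeholders:

```
# Install the FabricPool capacity license on the cluster
cluster1::> system license add -license-code <license-key>

# Attach the object store (e.g., ONTAP S3 on the FAS8300) to an AFF aggregate
cluster1::> storage aggregate object-store attach -aggregate aggr1_A400 -object-store-name my-s3-store

# Tier only cold Snapshot copy blocks from a volume
cluster1::> volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only
```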

