Refer to the Exhibit.
A customer has the following XtremIO environment:
If an application generates 100,000 IOPS of traffic, how many RPAs are needed to replicate the traffic from one XtremIO array to another XtremIO array over IP?
Answer : C
Required bandwidth = 100,000 IOPS * 8 KB per I/O = 100,000 * 8 * 1024 bytes/s
Provided bandwidth between XtremIO arrays with compression over Fibre Channel: 300 MB/s = 300 * 1024 * 1024 bytes/s
Required number of RPAs: (100,000 * 8 * 1024) / (300 * 1024 * 1024) = 800,000 / (300 * 1024) ≈ 2.6
Therefore, three RPAs are sufficient.
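The sizing arithmetic above can be sketched as a short calculation. The 8 KB I/O size and the 300 MB/s of compressed bandwidth per RPA are taken from the explanation (values that would normally come from the exhibit):

```python
import math

iops = 100_000
io_size_bytes = 8 * 1024            # 8 KB per I/O (per the explanation)
required_bw = iops * io_size_bytes  # bytes/s generated by the application

per_rpa_bw = 300 * 1024 * 1024      # 300 MB/s per RPA with compression

ratio = required_bw / per_rpa_bw
rpas = math.ceil(ratio)             # round up: partial RPAs don't exist

print(round(ratio, 1), rpas)        # prints: 2.6 3
```

Rounding up with `math.ceil` captures the reasoning in the answer: 2.6 RPAs of load requires three physical RPAs.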
A Linux administrator is attaching a new RHEL server to their XtremIO storage array. Which configuration setting should be changed?
Answer : B
The block size for both Oracle Cluster Registry (OCR) and Cluster Synchronization Services (CSS) voting files is 512 bytes, so I/O operations to these file objects are sized as multiples of 512 bytes.
This is of no consequence, since the best practice with XtremIO is to create volumes with 512e formatting.
Reference: https://www.emc.com/collateral/white-papers/h13497-oracle-best-practices-xtremio-wp.pdf, page 22
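The point above can be illustrated with a minimal alignment check: because OCR and CSS voting-file I/O is sized in multiples of 512 bytes, it lands naturally on a 512e-formatted volume. The sector size and the sample I/O sizes here are illustrative assumptions, not values from the exam:

```python
SECTOR_SIZE = 512  # logical sector size presented by a 512e-formatted volume

def is_aligned(io_bytes: int, sector: int = SECTOR_SIZE) -> bool:
    """Return True if the I/O size is an exact multiple of the sector size."""
    return io_bytes % sector == 0

for size in (512, 4096, 1000):
    print(size, is_aligned(size))
# prints:
# 512 True
# 4096 True
# 1000 False
```

Any I/O sized as a multiple of 512 bytes (including the common 4 KB database block) aligns cleanly; only an odd-sized I/O such as 1000 bytes would not.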
A customer is interested in purchasing XtremIO for their mission critical database applications that require the lowest possible response times. They have two data centers in which they want to introduce All-Flash arrays. However, they need a way to maintain true active/active access to all databases and application LUNs across both sites.
What is the recommended solution to address the requirements?
Answer : C
The EMC VPLEX family is the next-generation solution for data mobility and access within, across and between data centers.
VPLEX supports two configurations: Local and Metro. With VPLEX Metro, the optional VPLEX Witness, and a cross-connected configuration, applications continue to operate at the surviving site with no interruption or downtime. Storage resources virtualized by VPLEX cooperate through the stack, with the ability to dynamically move applications and data across geographies and service providers.
A customer has a requirement to replicate their VDI to a newly purchased data center located 5 miles away. They require 10-day retention at each site and a continuous replication RPO. However, they want to have the same storage platform at each site. They have a limited budget but need to meet their requirements.
Which solution should be recommended to the customer?
Answer : C
The EMC RecoverPoint family provides cost-effective, local continuous data protection (CDP), continuous remote replication (CRR), and continuous local and remote replication (CLR) that allows for any-point-in-time data recovery and a new 'snap and replicate' mechanism for local and remote replication (XRP).
Native replication support for XtremIO
Native replication support for XtremIO is designed for high-performance, low-latency applications and provides a low Recovery Point Objective (RPO) of one minute or less and an immediate RTO.
What is considered typical performance for an XtremIO single X-Brick cluster?
Answer : C
Choose an EMC XtremIO system and scale out linearly by adding more XtremIO X-Bricks.
At which point is data compressed when a host sends data to the XtremIO storage system?
Answer : A
XtremIO data deduplication and data compression services are inline, all the time: data is deduplicated and compressed in memory before it is written to the SSDs.
Reference: https://www.emc.com/collateral/faq/faq-million-dollar-guarantee-rp-2016.pdf
Which multipathing software is supported by XtremIO?
Answer : A
Noting the inefficiencies in VMware's NMP driver, EMC developed a set of drivers specifically designed to overcome these limitations and improve the performance and reliability of the data passing between an array and a server. EMC developed the PowerPath family of products, optimized specifically for Linux, Microsoft Windows, and UNIX operating systems, as well as PowerPath/VE for VMware vSphere and Microsoft Hyper-V hypervisors.
PowerPath is installed on hosts to provide path failover, load balancing, and performance optimization to VPLEX engines (or directly to the XtremIO array if VPLEX is not used).
Note: VMware, with the cooperation of its storage partners, developed a Native Multipathing Plug-in (NMP). VMware NMP was designed to distribute the load over all available paths and provide failover protection in the case of path, port, or HBA failure, but it has not been fully optimized to work with the controllers in a storage system. VMware's NMP Round Robin policy lacks the intelligence of PowerPath, which uses testing and diagnostics to continually monitor the environment, determine the optimal path for queuing requests, and adapt to current conditions.
Reference: https://www.emc.com/collateral/analyst-reports/emc-taneja-group-powerpath-tb.pdf