HPE2-N69 Using HPE AI and Machine Learning Exam Practice Test

Page: 1 / 14
Total 40 questions
Question 1

A customer mentions that the ML team wants to avoid overfitting models. What does this mean?



Answer : C

Overfitting occurs when a model learns the training data too closely, capturing noise along with the underlying patterns. Such a model performs very well on the training data but poorly on new data, because it cannot generalize the patterns it has learned. To avoid overfitting, the ML team needs to ensure that their models are not over-trained on the training data and retain enough generalization capacity to perform well on new data.
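The contrast can be sketched in plain Python. This is an illustrative toy, not exam material: the "overfit" model memorizes the noisy training points (a 1-nearest-neighbour lookup), while the "generalizing" model fits only the simple linear trend.

```python
import random

random.seed(0)

def truth(x):
    return 2 * x  # the underlying pattern the model should learn

# noisy training set: the true pattern plus random noise
train = [(x, truth(x) + random.gauss(0, 1.0)) for x in range(5)]

def overfit_predict(x):
    # memorizes the training set (1-nearest neighbour): zero training error
    return min(train, key=lambda p: abs(p[0] - x))[1]

# least-squares slope through the origin: captures only the linear trend
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

def general_predict(x):
    return slope * x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# fresh, unseen points with fresh noise
new = [(x + 0.5, truth(x + 0.5) + random.gauss(0, 1.0)) for x in range(4)]

print(mse(overfit_predict, train))       # → 0.0  (perfect on training data)
print(mse(overfit_predict, new) > 0.0)   # → True (errors appear on new data)
```

The memorizing model scores a perfect zero error on the data it has seen, which is exactly the symptom described above: training performance alone says nothing about how well the model will generalize.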


Question 2

What are the mechanics of how a model trains?



Answer : B

Training is done by running the model through a training loop: the model is fed data, and its parameter weights are adjusted based on the results of the model's performance on that data. For example, if the model is a neural network, the weights of the connections between the neurons are adjusted based on the model's errors. This process is repeated until the model performs well enough on the data, at which point the model is considered trained.
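The loop described above can be sketched for the simplest possible model, a single weight `w` in `y = w * x`, trained by gradient descent on squared error (an illustrative sketch, not the exam's code):

```python
# Minimal training loop: feed data forward, measure the error,
# adjust the parameter weight, and repeat.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true weight is 2
w = 0.0                                       # initial parameter weight
lr = 0.05                                     # learning rate
for epoch in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                            # adjust the weight
print(round(w, 3))  # → 2.0
```

Each pass nudges the weight in the direction that reduces the error, which is exactly what a framework does (at much larger scale) for every weight in a neural network.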


Question 3

What distinguishes deep learning (DL) from other forms of machine learning (ML)?



Answer : A

Models based on neural networks with interconnected layers of nodes, including multiple hidden layers. Deep learning (DL) is a type of machine learning (ML) that uses such models, and this is what distinguishes it from other forms of ML, which typically use simpler models with fewer layers. The multiple layers enable DL models to learn complex patterns and features from the data, allowing for more accurate and powerful predictions.
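The "multiple hidden layers" structure can be shown in a few lines of plain Python: one fully connected layer is a weighted sum per node plus an activation, and a deep network simply stacks several of them (the weights below are arbitrary illustrative values):

```python
import math

def dense(inputs, weights, biases):
    # one fully connected layer: weighted sum per node + tanh activation
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# a tiny deep network: 2 inputs -> hidden (3 nodes) -> hidden (3 nodes) -> 1 output
w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]; b1 = [0.0, 0.1, -0.1]
w2 = [[0.2, 0.7, -0.5], [0.6, -0.1, 0.3], [0.05, 0.9, 0.2]]; b2 = [0.0, 0.0, 0.0]
w3 = [[1.0, -1.0, 0.5]]; b3 = [0.0]

h1 = dense([1.0, 2.0], w1, b1)   # first hidden layer
h2 = dense(h1, w2, b2)           # second hidden layer (the "deep" part)
out = dense(h2, w3, b3)          # output layer
print(len(h1), len(h2), len(out))  # → 3 3 1
```

With only one (or zero) hidden layers this would be a shallow model; it is the stacking of hidden layers that lets each layer build on the features extracted by the previous one.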


Question 4

A company has recently expanded its ML engineering resources from 5 CPUs to 12 GPUs.

What challenge is likely to continue to stand in the way of accelerating deep learning (DL) training?



Answer : B

The complexity of adjusting model code to distribute the training process across multiple GPUs. Deep learning (DL) training requires a large amount of computing power and can be accelerated by using multiple GPUs. However, this requires adjusting the model code to distribute the training process across the GPUs, which can be a complex and time-consuming process. Thus, the complexity of adjusting the model code is likely to continue to be a challenge in accelerating DL training.
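The pattern that makes this complex is data parallelism: each GPU computes gradients on its own shard of the batch, and those gradients must be averaged (an all-reduce) before the shared weights are updated. A simulated sketch in plain Python (the "workers" here are data shards standing in for GPUs, not real devices):

```python
# Simulated data-parallel training: each "worker" (a stand-in for a GPU)
# computes gradients on its own data shard; the gradients are then averaged
# (an all-reduce) before the shared weight is updated.
def shard_grad(w, shard):
    # gradient of mean squared error for the model y_hat = w * x
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

data = [(float(x), 2.0 * x) for x in range(1, 9)]   # true weight is 2
shards = [data[i::4] for i in range(4)]             # 4 simulated workers
w = 0.0
for step in range(100):
    g = sum(shard_grad(w, s) for s in shards) / len(shards)  # all-reduce average
    w -= 0.01 * g                                   # shared weight update
print(round(w, 3))  # → 2.0
```

Even in this toy, the model code had to change (sharding, gradient averaging); in a real framework the engineer must also handle device placement, communication, and synchronization, which is the complexity the answer refers to.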


Question 5

ML engineers are defining a convolutional neural network (CNN) model, but they are not sure how many filters to use in each convolutional layer. What can help them address this concern?



Answer : A

Hyperparameter optimization (HPO) is the process of tuning the hyperparameters of a machine learning model, such as the number of filters in a convolutional neural network (CNN), to find the combination that yields the best model performance. HPO techniques automatically search for the optimal hyperparameter values, which can greatly increase the accuracy and performance of the model.
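The simplest HPO technique, grid search, can be sketched in a few lines. The scoring function below is a hypothetical stand-in: a real run would train the CNN with each filter count and measure validation accuracy (which tools like HPE Machine Learning Development Environment automate).

```python
# Minimal grid-search HPO sketch over a hypothetical hyperparameter
# ("number of filters"). The objective is a placeholder: a real search
# would train the model and return validation accuracy.
def validation_score(n_filters):
    # placeholder objective; pretend 48 filters happens to work best
    return -(n_filters - 48) ** 2

candidates = [16, 32, 48, 64, 128]            # filter counts to try
best = max(candidates, key=validation_score)  # pick the best-scoring value
print(best)  # → 48
```

More sophisticated HPO methods (random search, adaptive/early-stopping searches) follow the same shape: propose hyperparameter values, score them, and keep the best.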


Question 6

An HPE Machine Learning Development Environment resource pool uses priority scheduling with preemption disabled. Currently, Experiment 1's Trial 1 is using 32 of the pool's 40 total slots; it has priority 42. Users then run two more experiments:

* Experiment 2: 1 trial (Trial 2) that needs 24 slots; priority 50

* Experiment 3: 1 trial (Trial 3) that needs 24 slots; priority 1

What happens?



Answer : D

Trial 3 is scheduled on 8 of the slots. Then, after Trial 1 has finished, it receives 16 more slots. The pool uses priority scheduling, and in HPE Machine Learning Development Environment a lower priority number means higher priority, so Trial 3 (priority 1) is scheduled ahead of Trial 2 (priority 50). Because preemption is disabled, Trial 1 keeps its 32 slots, leaving only 8 free slots for Trial 3 to start on; once Trial 1 finishes and releases its slots, Trial 3 receives the remaining 16 slots it needs.
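The scheduling decision can be traced with a toy sketch, assuming (as in HPE MLDE) that a lower priority number means higher priority and that disabled preemption lets a running trial keep its slots until it finishes:

```python
# Toy trace of the scenario above; not the actual scheduler implementation.
TOTAL_SLOTS = 40
running = {"Trial 1": 32}                    # priority 42, keeps its slots
pending = [("Trial 2", 24, 50), ("Trial 3", 24, 1)]
pending.sort(key=lambda t: t[2])             # lower number = higher priority

name, need, prio = pending[0]                # highest-priority pending: Trial 3
free = TOTAL_SLOTS - sum(running.values())   # 8 slots are currently free
first_alloc = min(need, free)                # Trial 3 starts on 8 slots

released = running.pop("Trial 1")            # Trial 1 finishes, freeing 32 slots
final_alloc = min(need, first_alloc + released)  # Trial 3 grows to its full 24
print(name, first_alloc, final_alloc)  # → Trial 3 8 24
```

Trial 2 stays queued throughout: even after Trial 1 finishes, Trial 3's full allocation of 24 leaves only 16 free slots, fewer than the 24 Trial 2 needs.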


Question 7

What common challenge do ML teams face in implementing hyperparameter optimization (HPO)?



Answer : C

Implementing hyperparameter optimization (HPO) manually can be time-consuming and demand a great deal of expertise. HPO is not a joint ML and IT Ops effort and it can be implemented on TensorFlow models, so these are not the primary challenges faced by ML teams. Additionally, ML teams often have access to large enough data sets to make HPO feasible and worthwhile.

