Splunk Enterprise Certified Architect SPLK-2002 Exam Questions

Page: 1 / 14
Total 205 questions
Question 1

When should a dedicated deployment server be used?



Answer : C

A dedicated deployment server is a Splunk instance whose only role is to distribute configuration updates and apps to a set of deployment clients, such as forwarders, indexers, or search heads. A dedicated deployment server should be used when there are more than 50 deployment clients, because this number exceeds the recommended limit for a non-dedicated deployment server, that is, an instance that also performs other roles, such as indexing or searching. Using a dedicated deployment server improves the performance, scalability, and reliability of the deployment process.

Option C is the correct answer. Option A is incorrect because the number of search peers does not affect the need for a dedicated deployment server; search peers are indexers that participate in a distributed search. Option B is incorrect because the number of apps to deploy does not affect it; apps are packages of configurations and assets that provide specific functionality or views in Splunk. Option D is incorrect because the number of server classes does not affect it; server classes are logical groups of deployment clients that share the same configuration updates and apps [1][2].
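As a sketch of how server classes group deployment clients, a minimal serverclass.conf on the deployment server might look like the following. The class, app, and client names here are hypothetical:

```
# serverclass.conf on the deployment server (names are illustrative)
[serverClass:linux_forwarders]
# Match clients by hostname pattern
whitelist.0 = linux-fwd-*

[serverClass:linux_forwarders:app:outputs_app]
# Restart the client's splunkd after deploying this app
restartSplunkd = true
stateOnClient = enabled
```

Each matching client receives the apps assigned to its server classes on its next phone-home to the deployment server.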

1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Aboutdeploymentserver
2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Whentousedeploymentserver


Question 2

If .delta replication fails during knowledge bundle replication, what is the fall-back method for Splunk?



Answer : C

This is the fall-back method for Splunk if .delta replication fails during knowledge bundle replication. Knowledge bundle replication is the process of distributing knowledge objects, such as lookups, macros, and field extractions, from the search head cluster to the indexer cluster [1]. Splunk uses two methods of knowledge bundle replication: .delta replication and .bundle replication [1]. .Delta replication is the default and preferred method, as it replicates only the changes to the knowledge objects, which reduces network traffic and disk space usage [1]. However, if .delta replication fails for some reason, such as corrupted files or network errors, Splunk automatically falls back to .bundle replication, which replicates the entire knowledge bundle regardless of what changed [1]. This ensures that the knowledge objects stay synchronized between the search head cluster and the indexer cluster, at the cost of more network bandwidth and disk space [1].

The other options are not valid fall-back methods. Option A, restarting splunkd, is not a method of knowledge bundle replication but a way to restart the Splunk daemon on a node [2]; it may or may not fix the .delta replication failure, and it does not guarantee synchronization of the knowledge objects. Option B, .delta replication, is not a fall-back method but the primary method, which the question assumes has already failed [1]. Option D, restarting mongod, restarts the MongoDB daemon on a node [3]; this relates to KV store replication, which is a separate process [3]. Therefore, option C is the correct answer, and options A, B, and D are incorrect.
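Bundle replication behavior is governed by distsearch.conf on the search head. The following is a minimal sketch; the setting names should be verified against the distsearch.conf specification for your Splunk version:

```
# distsearch.conf on the search head (verify settings for your version)
[replicationSettings]
# Prefer delta replication; Splunk falls back to full .bundle
# replication automatically when a delta cannot be applied.
allowDeltaUpload = true
# Upper bound (in MB) on the size of the replicated bundle
maxBundleSize = 2048
```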

1: How knowledge bundle replication works
2: Start and stop Splunk Enterprise
3: Restart the KV store


Question 3

Which of the following items are important sizing parameters when architecting a Splunk environment? (select all that apply)



Answer : A, B, C

Number of concurrent users: This is an important factor because it affects the search performance and resource utilization of the Splunk environment. More users mean more concurrent searches, which require more CPU, memory, and disk I/O. The number of concurrent users also determines the search head capacity and the search head clustering configuration [1][2].

Volume of incoming data: This is another crucial factor because it affects the indexing performance and storage requirements of the Splunk environment. More data means more indexing throughput, which requires more CPU, memory, and disk I/O. The volume of incoming data also determines the indexer capacity and the indexer clustering configuration [1][3].

Existence of premium apps: This is a relevant factor because some premium apps, such as Splunk Enterprise Security and Splunk IT Service Intelligence, have additional requirements and recommendations for the Splunk environment. For example, Splunk Enterprise Security requires a dedicated search head cluster and a minimum of 12 CPU cores per search head. Splunk IT Service Intelligence requires a minimum of 16 CPU cores and 64 GB of RAM per search head [4][5].
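The three sizing factors above can be combined into a rough back-of-the-envelope estimate. The reference capacities below (about 100 GB/day of ingest per indexer and roughly a dozen concurrent users per search head) are illustrative assumptions for the sketch, not official Splunk guidance; real sizing should follow the Splunk Validated Architectures:

```python
import math

# Illustrative reference capacities -- tune to your own hardware
# benchmarks and Splunk Validated Architectures guidance.
GB_PER_DAY_PER_INDEXER = 100   # assumed ingest capacity per indexer
USERS_PER_SEARCH_HEAD = 12     # assumed concurrent users per search head

def estimate_nodes(daily_ingest_gb, concurrent_users, premium_apps=False):
    """Rough node-count estimate from the three sizing parameters."""
    indexers = math.ceil(daily_ingest_gb / GB_PER_DAY_PER_INDEXER)
    search_heads = math.ceil(concurrent_users / USERS_PER_SEARCH_HEAD)
    if premium_apps:
        # Premium apps such as Enterprise Security typically need a
        # dedicated search head (or search head cluster).
        search_heads += 1
    return {"indexers": indexers, "search_heads": search_heads}

# Example: 500 GB/day, 30 concurrent users, with a premium app
# -> 5 indexers, 4 search heads
plan = estimate_nodes(500, 30, premium_apps=True)
```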


1: Splunk Validated Architectures
2: Search head capacity planning
3: Indexer capacity planning
4: Splunk Enterprise Security Hardware and Software Requirements
5: [Splunk IT Service Intelligence Hardware and Software Requirements]

Question 4

Which of the following statements describe a Search Head Cluster (SHC) captain? (Select all that apply.)



Answer : A, D

The following statements describe a search head cluster captain:

Is the job scheduler for the entire search head cluster. The captain is responsible for scheduling and dispatching the searches that run on the search head cluster, as well as coordinating the search results from the search peers. The captain also ensures that the scheduled searches are balanced across the search head cluster members and that the search concurrency limits are enforced.

Replicates the search head cluster's knowledge bundle to the search peers. The captain is responsible for creating and distributing the knowledge bundle, which contains the knowledge objects required for searches, to the search peers. The captain also ensures that the knowledge bundle is consistent and up to date across the search head cluster and the search peers.

The following statements do not describe a search head cluster captain:

Manages alert action suppressions (throttling). Alert action suppressions are the settings that prevent an alert from triggering too frequently or too many times. These settings are managed by the search head that runs the alert, not by the captain. The captain does not have any special role in managing alert action suppressions.

Synchronizes the member list with the KV store primary. The member list is the list of search head cluster members that are active and available. The KV store primary is the search head cluster member responsible for replicating KV store data to the other members. Neither role is managed by the captain, and the captain does not synchronize them; the member list and the KV store primary are determined by the Raft consensus algorithm, which is independent of the captain election. For more information, see [About the captain and the captain election] and [About KV store and search head clusters] in the Splunk documentation.
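To see which member currently holds the captain role, the search head clustering status command can be run from the CLI on any cluster member. This is a sketch; the credentials are placeholders:

```
# Run on any search head cluster member; admin credentials are placeholders
splunk show shcluster-status -auth admin:changeme
```

The "Captain" section of the output identifies the current captain and whether it is a dynamic or static captain.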


Question 5

(When planning user management for a new Splunk deployment, which task can be disregarded?)



Answer : C

According to the Splunk Enterprise User Authentication and Authorization Guide, effective user management during deployment planning involves identifying how users will authenticate (native, LDAP, or SAML) and defining what roles and capabilities they will need to perform their tasks.

However, counting or analyzing the number of users who appear in Splunk log events (Option C) is not part of user management planning. This metric relates to audit and monitoring, not access provisioning or role assignment.

A proper user management plan should address:

Authentication method selection (native, LDAP, or SAML).

User mapping and provisioning workflows from existing identity stores.

Role-based access control (RBAC): assigning users appropriate permissions via Splunk roles and capabilities.

Administrative governance: ensuring access policies align with compliance requirements.

Determining the number of users visible in log events provides no operational value when planning Splunk authentication or authorization architecture. Therefore, this task can be safely disregarded during initial planning.

Reference (Splunk Enterprise Documentation):

* User Authentication and Authorization in Splunk Enterprise

* Configuring LDAP and SAML Authentication

* Managing Users, Roles, and Capabilities

* Splunk Deployment Planning Manual -- Security and Access Control Planning


Question 6

Which component in the splunkd.log will log information related to bad event breaking?



Answer : D

The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about the audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information, see About Splunk Enterprise logging and [Configure event line breaking] in the Splunk documentation.
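A quick way to surface event-breaking problems is to filter splunkd.log for warnings and errors from the AggregatorMiningProcessor component. Below is a minimal sketch; the sample log line is fabricated for illustration:

```python
def find_event_breaking_issues(log_lines):
    """Return splunkd.log lines where the event-breaking component
    (AggregatorMiningProcessor) reported a warning or error."""
    return [
        line for line in log_lines
        if "AggregatorMiningProcessor" in line
        and ("WARN" in line or "ERROR" in line)
    ]

# Fabricated sample lines shaped like splunkd.log entries
sample = [
    "01-02-2024 10:00:01 INFO  Metrics - group=pipeline",
    "01-02-2024 10:00:02 WARN  AggregatorMiningProcessor - "
    "Breaking event because limit of 256 lines has been exceeded",
]
issues = find_event_breaking_issues(sample)
```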


Question 7

The frequency in which a deployment client contacts the deployment server is controlled by what?



Answer : D

The frequency in which a deployment client contacts the deployment server is controlled by the phoneHomeIntervalInSecs attribute in deploymentclient.conf. This attribute specifies how often the deployment client checks in with the deployment server to get updates on the apps and configurations that it should receive. The polling_interval attribute in outputs.conf controls how often the forwarder sends data to the indexer or another forwarder. The polling_interval attribute in deploymentclient.conf and the phoneHomeIntervalInSecs attribute in outputs.conf are not valid Splunk attributes. For more information, see Configure deployment clients and Configure forwarders with outputs.conf in the Splunk documentation.
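For example, a deployment client that should phone home every ten minutes instead of the default 60 seconds could set the following in deploymentclient.conf (the server hostname is a placeholder):

```
# deploymentclient.conf on the deployment client
[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089

[deployment-client]
# Check in with the deployment server every 600 seconds (default: 60)
phoneHomeIntervalInSecs = 600
```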

