Assume a file is being monitored and the data was incorrectly indexed to an exclusive index. The index is
cleaned and now the data must be reindexed. What other index must be cleaned to reset the input checkpoint
information for that file?
When running a real-time search, search results are pulled from which Splunk component?
Answer : D
Using the Splunk reference URL https://docs.splunk.com/Splexicon:Searchpeer
'A search peer is a Splunk platform instance that responds to search requests from a search head. The term "search peer" is usually synonymous with the indexer role in a distributed search topology. However, other instance types also have access to indexed data, particularly internal diagnostic data, and thus function as search peers when they respond to search requests for that data.'
Which of the following monitor inputs stanza headers would match all of the following files?
/var/log/www1/secure.log
/var/log/www/secure.l
/var/log/www/logs/secure.logs
/var/log/www2/secure.log
Answer : C
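The text of answer choice C is not reproduced above, but a stanza that would match all four paths (a sketch, assuming the recursive ... wildcard and a trailing * on the extension) looks like this:

```ini
# inputs.conf -- the ... wildcard recurses through any number of
# subdirectory levels, and secure.l* matches secure.l, secure.log,
# and secure.logs
[monitor:///var/log/.../secure.l*]
```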
Which of the methods listed below supports multi-factor authentication?
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
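The quoted passage can be illustrated with a minimal user-seed.conf; the username and password values here are placeholders:

```ini
# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Applied only if $SPLUNK_HOME/etc/passwd does not already exist;
# a restart is required for the seed to take effect.
[user_info]
USERNAME = admin
PASSWORD = changeme-initial-password
```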
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL from @giubal:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
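As a sketch of the masking configuration (the source path and the card-number pattern are hypothetical), a SEDCMD in props.conf is one common way to mask raw text before it is written to disk:

```ini
# props.conf -- placed on the Heavy Forwarder for source A,
# and on the indexer for source B
[source::/var/log/app/payments.log]
# mask all but the last four digits of a 16-digit number
SEDCMD-mask_card = s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g
```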
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
When indexing a data source, which fields are considered metadata?
Answer : D
Which of the following are required when defining an index in indexes. conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
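Putting the three required path settings together, a complete minimal stanza for the hatchdb index would look like this:

```ini
# indexes.conf -- homePath, coldPath, and thawedPath are required
[hatchdb]
homePath   = $SPLUNK_DB/hatchdb/db
coldPath   = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
```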
Which setting allows the configuration of Splunk to allow events to span over more than one line?
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A. There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C. There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D. The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
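A minimal network input stanza in $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf might look like the following (the port, sourcetype, and index are illustrative):

```ini
# inputs.conf -- listen for syslog traffic on UDP port 514
[udp://514]
connection_host = ip
sourcetype = syslog
index = network
```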
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and its tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
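Both failure points can be checked against a configuration sketch like this one (host names and the port are illustrative):

```ini
# On each receiving indexer: inputs.conf
[splunktcp://9997]

# On the Universal Forwarder: outputs.conf
[tcpout:primary_indexers]
# if a DNS name is used, it must resolve to every indexer IP
server = indexers.example.com:9997
```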
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
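Following that documentation section, a sketch of the outputs.conf on the search head (the group name and indexer host names are illustrative) would be:

```ini
# outputs.conf on the search head
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
```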
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
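A sketch of the relevant props.conf settings (the sourcetype name and the regex are illustrative) shows where event boundaries are defined during parsing:

```ini
# props.conf -- event boundaries are finalized in the parsing phase
[my_sourcetype]
SHOULD_LINEMERGE = false
# break events before each ISO-style date at the start of a line
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```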
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
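Per the phase list above, INDEXED_EXTRACTIONS belongs to the structured parsing phase; a sketch (the sourcetype name and timestamp field are illustrative):

```ini
# props.conf -- structured parsing phase
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = timestamp
```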
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, //var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
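The contrast between the two wildcards can be sketched with two monitor stanzas:

```ini
# Recursive: matches .log files at any depth under /var/log
[monitor:///var/log/.../*.log]

# Non-recursive: matches .log files only directly inside /var/log
[monitor:///var/log/*.log]
```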
Which valid bucket types are searchable? (select all that apply)
Answer : A, B, C
Hot, warm, cold, and thawed bucket types are searchable. Frozen is not searchable because data in that state is either deleted or archived.
Which of the following applies only to Splunk index data integrity check?
Answer : C
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled How the cluster handles concurrent search quotas: 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
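For single-line events, the efficient configuration pairs SHOULD_LINEMERGE = false with an explicit LINE_BREAKER (the sourcetype name is illustrative):

```ini
# props.conf -- efficient handling of single-line events
[my_single_line_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```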
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in $SPLUNK_HOME/etc/deployment-apps on the deployment server.
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not specified, the forwarder sends data to the groups present in defaultGroup in the [tcpout] stanza of outputs.conf.
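A sketch of selective routing with _TCP_ROUTING (the group and host names are illustrative):

```ini
# inputs.conf on the forwarder
[monitor:///var/log/secure]
_TCP_ROUTING = security_indexers

# outputs.conf on the forwarder
[tcpout:security_indexers]
server = sec-idx1.example.com:9997,sec-idx2.example.com:9997
```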
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, the value D = 30 will pick up the whole timestamp correctly.
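A sketch of the source definition with the missing setting filled in (the sourcetype name is illustrative, since the event example is not reproduced above):

```ini
# props.conf -- timestamp recognition
[my_sourcetype]
TIME_PREFIX = ^
# look at most 30 characters past TIME_PREFIX for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 30
```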
Which Splunk component requires a Forwarder license?
Answer : B
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer. The indexer then converts the event time to UTC and stores it in the _time field.
The other options are incorrect because:
A. Coordinated Universal Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone.
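When the forwarder-provided time zone needs to be overridden, the TZ attribute in props.conf applies (the host pattern and zone here are illustrative):

```ini
# props.conf on the indexer (or heavy forwarder)
[host::eu-web*]
TZ = Europe/Berlin
```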
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic source type assignment
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which Splunk component does a search head primarily communicate with?
Answer : A
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
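A sketch of authentication.conf for scripted authentication; the script path points at the sample scripts Splunk ships and is illustrative:

```ini
# authentication.conf
[authentication]
authType = Scripted
authSettings = script

[script]
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
```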
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk System Admin course PDF, when adding native users, a username and password are required.
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
Which file will be matched for the following monitor stanza in inputs.conf?
[monitor:///var/log/*/bar/*.txt]
Answer : C
The correct answer is C. /var/log/host_460352847/bar/file/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new data. The monitor stanza has the following syntax:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards (*) and regular expressions. The wildcards match any number of characters, including none, while the regular expressions match patterns of characters. The input path is case-sensitive and must be enclosed in double quotes if it contains spaces.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor files with the .txt extension located under a directory named bar beneath the /var/log directory; the * wildcards match any characters in those positions.
Therefore, the file /var/log/host_460352847/bar/file/foo.txt will be matched by the monitor stanza, as it meets the criteria. The other files will not be matched, because:
A. /var/log/host_460352847/temp/bar/file/csv/foo.txt has a .csv extension, not a .txt extension.
B. /var/log/host_460352847/bar/foo.txt is not located in a subdirectory under the bar directory, but directly in the bar directory.
D. /var/log/host_460352847/temp/bar/file/foo.txt is located in a subdirectory named file under the bar directory, not directly in the bar directory.
Which of the following is a valid distributed search group?
A)
B)
C)
D)
Answer : D
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
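Both reasons can be sketched in indexes.conf; the index names and retention periods are illustrative:

```ini
# indexes.conf -- separate indexes allow separate retention policies
[security]
homePath   = $SPLUNK_DB/security/db
coldPath   = $SPLUNK_DB/security/colddb
thawedPath = $SPLUNK_DB/security/thaweddb
frozenTimePeriodInSecs = 31536000   # roughly 1 year

[web_performance]
homePath   = $SPLUNK_DB/web_performance/db
coldPath   = $SPLUNK_DB/web_performance/colddb
thawedPath = $SPLUNK_DB/web_performance/thaweddb
frozenTimePeriodInSecs = 2592000    # roughly 30 days
```

Per-index access is then restricted separately, by limiting which indexes each role is allowed to search.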
Event processing occurs at which phase of the data pipeline?
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex1.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings2.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings2.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
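The explanation above can be sketched as a minimal network input stanza. This is a hypothetical example (the port, sourcetype, and index names are placeholders), placed in $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf as recommended:

```ini
# Hypothetical TCP network input in $SPLUNK_HOME/etc/apps/myapp/local/inputs.conf
[tcp://9514]
connection_host = dns
sourcetype = syslog
index = network
```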
To set up a network input in Splunk, what needs to be specified?
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
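The group-to-role mapping described above lives in authentication.conf. A hypothetical sketch (the strategy name and LDAP group names are placeholders):

```ini
# Hypothetical authentication.conf sketch mapping LDAP groups to Splunk roles
# [roleMap_<ldap_strategy_name>] pairs each Splunk role with one or more LDAP groups
[roleMap_CorpLDAP]
admin = Splunk Admins
power = Splunk Power Users
user = Splunk Users
```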
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled How to configure forwarder inputs, subsection Here are the main ways that you can configure data inputs on a forwarder: install the app or add-on that contains the inputs you want.
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
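A fuller outputs.conf sketch, using a named tcpout group (the group name and addresses are hypothetical; automatic load balancing is the default behavior for a tcpout group with multiple servers):

```ini
# Hypothetical outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
# autoLB is on by default; shown here for clarity
autoLB = true
```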
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
What is the default value of LINE_BREAKER?
Answer : B
Line breaking uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
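A props.conf sketch of overriding the default, assuming a hypothetical sourcetype whose events each begin with an ISO date:

```ini
# Hypothetical props.conf: break events before a YYYY-MM-DD timestamp
# instead of at every newline (default LINE_BREAKER is ([\r\n]+))
[my_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
```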
Immediately after installation, what will a Universal Forwarder do first?
In inputs.conf, which stanza would mean Splunk was only reading one local file?
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
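The three required paths above fit together in a single stanza. A sketch using the same example index name and paths quoted above:

```ini
# Minimal index definition: homePath, coldPath, and thawedPath are required
[hatchdb]
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
```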
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user.'
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk System Admin course PDF, when adding native users, a username and password are required.
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30 days period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30 days period, you are in violation of your license.'
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled Configuration file directories states: 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
The CLI command splunk add forward-server indexer:<receiving-port> updates
which configuration file?
Answer : C
The CLI command 'splunk add forward-server indexer:<receiving-port>' is used to define the indexer and the listening port on forwarders. The command creates an entry of the form [tcpout-server://<ip address>:<port>] in outputs.conf.
https://docs.splunk.com/Documentation/Forwarder/8.2.2/Forwarder/Configureforwardingwithoutputs.conf
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
The universal forwarder has which capabilities when sending data? (select all that apply)
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
Which Splunk component requires a Forwarder license?
Answer : B
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
When would the following command be used?
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
When running a real-time search, search results are pulled from which Splunk component?
Answer : D
Using the Splunk reference URL https://docs.splunk.com/Splexicon:Searchpeer
'A search peer is a Splunk platform instance that responds to search requests from a search head. The term 'search peer' is usually synonymous with the indexer role in a distributed search topology. However, other instance types also have access to indexed data, particularly internal diagnostic data, and thus function as search peers when they respond to search requests for that data.'
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
Which of the following are methods for adding inputs in Splunk? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Configureyourinputs
Add your data to Splunk Enterprise. With Splunk Enterprise, you can add data using Splunk Web or Splunk Apps. In addition to these methods, you can also use the following methods: the Splunk Command Line Interface (CLI), and the inputs.conf configuration file. When you specify your inputs with Splunk Web or the CLI, the details are saved in a configuration file on Splunk Enterprise indexer and heavy forwarder instances.
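Whichever method is used, the result is an inputs.conf stanza. A hypothetical sketch of what a file monitor added via Splunk Web or the CLI ends up looking like (the path, index, and sourcetype are placeholders):

```ini
# Hypothetical inputs.conf equivalent of adding a file monitor
[monitor:///var/log/secure.log]
index = main
sourcetype = linux_secure
disabled = false
```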
To set up a network input in Splunk, what needs to be specified?
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
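The retention argument can be sketched in indexes.conf. This is a hypothetical example (index names, paths, and values are placeholders) showing two indexes with different frozenTimePeriodInSecs settings:

```ini
# Hypothetical indexes.conf: different retention per index
[security]
homePath = $SPLUNK_DB/security/db
coldPath = $SPLUNK_DB/security/colddb
thawedPath = $SPLUNK_DB/security/thaweddb
# roughly one year before data freezes
frozenTimePeriodInSecs = 31536000

[performance]
homePath = $SPLUNK_DB/performance/db
coldPath = $SPLUNK_DB/performance/colddb
thawedPath = $SPLUNK_DB/performance/thaweddb
# roughly 30 days before data freezes
frozenTimePeriodInSecs = 2592000
```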
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
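The "further conditions" and the cheaper-storage point above map to indexes.conf settings. A hypothetical sketch (the count and cold path are placeholders):

```ini
# Hypothetical indexes.conf: roll warm buckets to cold once the warm
# bucket count is exceeded; cold buckets live on cheaper storage
[main]
homePath = $SPLUNK_DB/main/db
coldPath = /cheap_storage/splunk/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
maxWarmDBCount = 300
```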
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. in props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,\s]+)/password=####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
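The same sed-style substitution can be demonstrated with Python's re module. This is a sketch, not Splunk's implementation; the event string and field values are hypothetical:

```python
import re

# Hypothetical raw event containing a plain-text password
event = "user=alice password=hunter2, action=login"

# Mirror of the SEDCMD expression: match "password=" followed by
# characters up to a comma or whitespace, replace the value globally
redacted = re.sub(r"password=([^,\s]+)", "password=####REDACTED####", event)
print(redacted)  # user=alice password=####REDACTED####, action=login
```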
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations (such as lookups, routing, or masking) that are referenced from props.conf; a SEDCMD replacement belongs in props.conf itself.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic sourcetype assignment
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
Which of the following are supported options when configuring optional network inputs?
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
Which optional configuration setting in inputs .conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not present, the forwarder uses the groups listed in defaultGroup in the [tcpout] stanza of outputs.conf.
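The pairing of the two files can be sketched as follows (paths, group name, and addresses are hypothetical):

```ini
# Hypothetical inputs.conf: route this input to one tcpout group only
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# Hypothetical outputs.conf: the group referenced above
[tcpout:security_indexers]
server = 10.1.1.10:9997,10.1.1.11:9997
```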
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, //var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
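The contrast can be sketched with two hypothetical monitor stanzas:

```ini
# ... recurses through any number of subdirectory levels,
# e.g. /var/log/www/secure.log and /var/log/www/logs/secure.log
[monitor:///var/log/.../secure.log]

# * matches a single path segment only,
# e.g. /var/log/www/secure.log but not files in deeper subfolders
[monitor:///var/log/*/secure.log]
```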
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory, $SPLUNK_HOME/etc/system/local
Which of the following types of data count against the license daily quota?
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
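The default described above can be illustrated with a hypothetical scripted input stanza (the script path and interval are placeholders):

```ini
# Hypothetical scripted input; since source is not set explicitly,
# it defaults to the input file path, i.e. the script path below
[script://$SPLUNK_HOME/etc/apps/myapp/bin/collect.sh]
interval = 60
sourcetype = my_script_output
```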
Where are license files stored?
Answer : C
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which of the following are supported options when configuring optional network inputs?
How can native authentication be disabled in Splunk?
Answer : B
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
To set up a network input in Splunk, what needs to be specified?
In inputs.conf, which stanza would mean Splunk was only reading one local file?
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. in props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,\s]+)/password=####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations (such as lookups, routing, or masking) that are referenced from props.conf; a SEDCMD replacement belongs in props.conf itself.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
Which setting allows the configuration of Splunk to allow events to span over more than one line?
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL from @giubal:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
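The layering itself is just "later (higher-precedence) layers override earlier ones," which can be sketched in a few lines of Python. The setting names and values are hypothetical:

```python
# Sketch of layered config precedence: apply layers from lowest to
# highest priority; a later layer's value wins for any repeated key.
layers = [
    {"TRUNCATE": "10000"},                       # system default (lowest)
    {"TRUNCATE": "20000", "CHARSET": "UTF-8"},   # app default
    {"TRUNCATE": "99999"},                       # local directory (highest)
]

merged = {}
for layer in layers:
    merged.update(layer)

print(merged)  # {'TRUNCATE': '99999', 'CHARSET': 'UTF-8'}
```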
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations First line says it all: 'The deployment server distributes deployment apps to clients.'
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
Which valid bucket types are searchable? (select all that apply)
Answer : A, B, C
Hot, warm, cold, and thawed bucket types are searchable. Frozen isn't searchable because at that stage the data is either deleted or archived.
Which Splunk forwarder has a built-in license?
Answer : C
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user.'
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex1.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings2.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings2.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
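A minimal sketch of such a network input (the port, sourcetype, and index are made up for illustration):

```ini
# $SPLUNK_HOME/etc/apps/<appName>/local/inputs.conf -- hypothetical app
[tcp://9514]
connection_host = ip
sourcetype = syslog
index = network
```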
The other options are incorrect because:
A. There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C. There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D. The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
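A sketch of enabling indexer acknowledgment on a HEC token (the token stanza and GUID are hypothetical); with useACK enabled, clients send a channel ID with each request and poll the collector's ack endpoint to confirm that events were indexed:

```ini
# inputs.conf on the HEC receiver -- hypothetical token stanza
[http://my_hec_token]
token = 11111111-2222-3333-4444-555555555555
useACK = 1
```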
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, a value of 30 lets Splunk pick up the whole timestamp correctly.
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is absent, the forwarder uses the groups listed in defaultGroup in the [tcpout] stanza of outputs.conf.
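A sketch of how the two files fit together (the group name, monitored path, and hosts are hypothetical):

```ini
# inputs.conf on the forwarder
[monitor:///var/log/secure]
_TCP_ROUTING = security_indexers

# outputs.conf on the same forwarder
[tcpout:security_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
```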
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
What is the default value of LINE_BREAKER?
Answer : B
Line breaking uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
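A sketch of overriding the default in props.conf (the sourcetype name is hypothetical); the first capture group is consumed as the line terminator, so here events break only where a newline is followed by a date:

```ini
# props.conf -- hypothetical multi-line sourcetype
[my_custom_log]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
```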
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. If native authentication fails for any reason other than the local account not existing, there is no attempt to use LDAP to log in. This follows from the precedence of the Splunk authentication schemes.
To set up a Network input in Splunk, what needs to be specified?
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
Where are license files stored?
Answer : C
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
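A minimal sketch of enabling the feature (the index name is hypothetical):

```ini
# indexes.conf -- hash files are then written in each bucket's rawdata directory
[audit_logs]
enableDataIntegrityControl = true
```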
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward The Upload option lets you upload a file or archive of files for indexing. When you choose Upload option, Splunk Web opens the upload process page. Monitor. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
How can native authentication be disabled in Splunk?
Answer : B
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
Which setting allows the configuration of Splunk to allow events to span over more than one line?
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor://var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
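A sketch of rule 2 in the precedence list above, forcing a time zone for events from matching hosts (the host pattern and zone are hypothetical):

```ini
# props.conf on the parsing instance (heavy forwarder or indexer)
[host::web-eu-*]
TZ = Europe/Berlin
```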
The other options are incorrect because:
A. Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
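To illustrate the structured parsing phase named above, a hypothetical sourcetype using INDEXED_EXTRACTIONS might look like:

```ini
# props.conf -- hypothetical CSV source; header fields become indexed fields
[my_csv_feed]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
```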
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields splunk_server
The splunk_server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: restrict a search to the main index on a server named remote: splunk_server=remote index=main 404
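For example, the even-distribution check described in the question could be sketched as the following search (the index scope is illustrative):

```spl
index=* earliest=-24h
| stats count by splunk_server
```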
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
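A sketch of such a SEDCMD stanza (the sourcetype, class name, and sed expression are hypothetical):

```ini
# props.conf -- mask a session token the downstream system cannot process
[vendor:feed]
SEDCMD-strip_token = s/session=\S+/session=****/g
```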
When using license pools, volume allocations apply to which Splunk components?
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, when adding native users, a username and password are required.
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The sed expression replaces the value following password= with ####REDACTED####. The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file is used to define transformations that can be applied to fields or events, such as lookups, evaluations, or replacements, but its stanzas only take effect when referenced by a TRANSFORMS clause in props.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl "https://mysplunkserver.example.com:8088/services/collector" \ -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" \ -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'. This is the correct curl command to send multiple events through HTTP Event Collector (HEC), which is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
What is the default character encoding used by Splunk during the input phase?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Configurecharactersetencoding
'Configure character set encoding. Splunk software attempts to apply UTF-8 encoding to your sources by default. If a source doesn't use UTF-8 encoding or is a non-ASCII file, Splunk software tries to convert data from the source to UTF-8 encoding unless you specify a character set to use by setting the CHARSET key in the props.conf file.'
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients. But it resides in the $SPLUNK_HOME/etc/deployment-apps location on the deployment server.
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations First line says it all: 'The deployment server distributes deployment apps to clients.'
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
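Putting the three attributes together, a hypothetical stanza that removes matching events by routing them to the nullQueue:

```ini
# transforms.conf -- referenced from props.conf via TRANSFORMS-null = drop_debug
[drop_debug]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```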
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the [tcpout] stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
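As a sketch (the group name and interval value are illustrative), the switching interval lives alongside the server list in outputs.conf:

```ini
# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
autoLBFrequency = 40   # seconds between switching indexers (default 30)
```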
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following 6 similar steps
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in.
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
- Mask or delete raw data as it is being indexed
- Override sourcetype or host based upon event values
- Route events to specific indexes based on event content
- Prevent unwanted events from being indexed
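A sketch of the masking pairing the first bullet describes (the stanza names and pattern are hypothetical):

```ini
# props.conf
[hypothetical:payments]
TRANSFORMS-mask_ccn = mask_ccn

# transforms.conf -- rewrite _raw, keeping text around the card number
[mask_ccn]
REGEX = (.*CardNumber=)\d+(.*)
FORMAT = $1XXXXXXXX$2
DEST_KEY = _raw
```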
What are the values for host and index for [stanza1] used by Splunk during index time, given the following configuration files?
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation1, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
Which Splunk component requires a Forwarder license?
Answer : B
Which of the following types of data count against the license daily quota?
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs', subsection 'Here are the main ways that you can configure data inputs on a forwarder': install the app or add-on that contains the inputs you want.
What are the minimum required settings when creating a network input in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Inputsconf
[tcp://<remote server>:<port>]
* Configures the input to listen on a specific TCP network port.
* If a <remote server> makes a connection to this instance, the input uses this stanza to configure itself.
* If you do not specify <remote server>, this stanza matches all connections on the specified port.
* Generates events with source set to 'tcp:<port>'.
* If you do not specify a sourcetype, generates events with sourcetype set to 'tcp-raw'.
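Putting the settings above together, a minimal network input needs only the stanza with a port; anything else is optional. A sketch with hypothetical values:

```ini
# inputs.conf -- listen for any sender on TCP 9999 (port and values are examples)
[tcp://:9999]
sourcetype = my_custom_type   # omit this and events get sourcetype tcp-raw
connection_host = dns         # set the host field via reverse DNS of the sender
```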
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called "deployment clients". A deployment client can be a universal forwarder, a non-clustered indexer, or a search head.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface.
The other options are incorrect because:
A. On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed.
B. On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored.
The CLI command splunk add forward-server indexer:<receiving-port> modifies which configuration file?
Answer : C
The CLI command 'splunk add forward-server indexer:<receiving-port>' is used to define the indexer and the listening port on forwarders. The command creates an entry of the form '[tcpout-server://<ip address>:<port>]' in outputs.conf.
https://docs.splunk.com/Documentation/Forwarder/8.2.2/Forwarder/Configureforwardingwithoutputs.conf
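The resulting outputs.conf entry looks roughly like the following. A sketch, assuming a hypothetical indexer hostname and the default receiving port:

```ini
# outputs.conf after running: splunk add forward-server idx1.example.com:9997
# (hostname and port are examples)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = idx1.example.com:9997

[tcpout-server://idx1.example.com:9997]
```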
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
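Client filters are expressed as whitelist/blacklist entries in serverclass.conf on the deployment server. A minimal sketch, with hypothetical class, app, and host names:

```ini
# serverclass.conf -- names and patterns are examples
[serverClass:linux_web]
whitelist.0 = web-*.example.com
blacklist.0 = web-test.example.com   # blacklist entries take precedence over whitelist

[serverClass:linux_web:app:nix_inputs]
restartSplunkd = true                # restart the client after the app is deployed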
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information.
LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information.
Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information.
Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
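For single-line data, the typical props.conf pattern disables line merging entirely. A sketch, assuming a hypothetical sourcetype name:

```ini
# props.conf -- single-line events: skip the line-merging step for speed
[my_singleline_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)   # each newline-delimited line becomes its own event
```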
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl "https://mysplunkserver.example.com:8088/services/collector" \ -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" \ -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'. This is the correct curl command to send multiple events through HTTP Event Collector (HEC), which is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
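The batched payload described above can be built programmatically before handing it to an HTTP client. A minimal sketch (HEC also accepts the event objects simply concatenated, one JSON object per event, which is what this helper produces):

```python
import json

def hec_batch_payload(events):
    """Build an HEC batch body: one JSON object per event, concatenated."""
    return "".join(json.dumps({"event": e}) for e in events)

payload = hec_batch_payload(["Hello World", "Hola Mundo", "Hallo Welt"])
print(payload)
# {"event": "Hello World"}{"event": "Hola Mundo"}{"event": "Hallo Welt"}
```

The payload string would then be sent as the request body with the Authorization: Splunk <token> header, as in the curl example.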
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: From Splunk forwarders, Using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on a Windows machine to monitor remote Windows data.'
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
Which of the following is a valid distributed search group?
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
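A scripted input is then declared in inputs.conf with a [script://...] stanza pointing at one of those locations. A sketch, with a hypothetical app, script, and interval:

```ini
# inputs.conf -- script path, interval, and index are examples
[script://$SPLUNK_HOME/etc/apps/my_app/bin/cpu_stats.sh]
interval = 60           # run the script every 60 seconds
sourcetype = cpu_stats
index = main
```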
How can native authentication be disabled in Splunk?
Answer : B
Which setting allows the configuration of Splunk to allow events to span over more than one line?
When using license pools, volume allocations apply to which Splunk components?
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
Which valid bucket types are searchable? (select all that apply)
Answer : A, B, C
Hot/warm/cold/thawed bucket types are searchable. Frozen isn't searchable because it's either deleted at that state or archived.
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user.'
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not specified, the forwarder uses the groups present in defaultGroup in the [tcpout] stanza of outputs.conf.
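The _TCP_ROUTING setting ties an input stanza to a tcpout group. A sketch, with hypothetical group names, paths, and indexer hostnames:

```ini
# inputs.conf -- route only this input to the 'security_indexers' group
[monitor:///var/log/secure]
_TCP_ROUTING = security_indexers

# outputs.conf -- the group the input routes to
[tcpout:security_indexers]
server = idx3.example.com:9997, idx4.example.com:9997
```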
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
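The switching interval is the autoLBFrequency attribute in outputs.conf. A sketch, with hypothetical group and server names (30 seconds is also the default):

```ini
# outputs.conf -- switch between indexers every 30 seconds
[tcpout:my_indexers]
autoLB = true
autoLBFrequency = 30
server = idx1.example.com:9997, idx2.example.com:9997
```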
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward.
The Upload option lets you upload a file or archive of files for indexing. When you choose the Upload option, Splunk Web opens the upload process page.
Monitor: For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
What is the default character encoding used by Splunk during the input phase?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Configurecharactersetencoding
'Configure character set encoding. Splunk software attempts to apply UTF-8 encoding to your sources by default. If a source doesn't use UTF-8 encoding or is a non-ASCII file, Splunk software tries to convert data from the source to UTF-8 encoding unless you specify a character set to use by setting the CHARSET key in the props.conf file.'
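Overriding the default encoding for one source is a single props.conf key. A sketch, with a hypothetical source path and character set:

```ini
# props.conf -- force a specific encoding for one legacy source
[source::/var/log/legacy_app.log]
CHARSET = ISO-8859-1    # CHARSET = AUTO would ask Splunk to guess per source
```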
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
Which of the following are methods for adding inputs in Splunk? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Configureyourinputs
Add your data to Splunk Enterprise. With Splunk Enterprise, you can add data using Splunk Web or Splunk Apps. In addition to these methods, you also can use the following methods: the Splunk Command Line Interface (CLI) and the inputs.conf configuration file. When you specify your inputs with Splunk Web or the CLI, the details are saved in a configuration file on Splunk Enterprise indexer and heavy forwarder instances.
Which of the following accurately describes HTTP Event Collector indexer acknowledgement?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/AboutHECIDXAck
- Section: About channels and sending data
Sending events to HEC with indexer acknowledgment active is similar to sending them with the setting off. There is one crucial difference: when you have indexer acknowledgment turned on, you must specify a channel when you send events. The concept of a channel was introduced in HEC primarily to prevent a fast client from impeding the performance of a slow client. When you assign one channel per client, because channels are treated equally on Splunk Enterprise, one client can't affect another. You must include a matching channel identifier both when sending data to HEC in an HTTP request and when requesting acknowledgment that events contained in the request have been indexed. If you don't, you will receive the error message, 'Data channel is missing.' Each request that includes a token for which indexer acknowledgment has been enabled must include a channel identifier, as shown in the following example cURL statement, where <data> represents the event data portion of the request
The following stanzas in inputs.conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, When adding native users, Username and Password ARE REQUIRED
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
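The difference shows up directly in monitor stanzas. A sketch, reusing the secure.log paths from the earlier question as illustration:

```ini
# inputs.conf -- ... recurses through subdirectories, * matches one path segment
[monitor:///var/log/.../secure.log]
# matches /var/log/www1/secure.log AND /var/log/www/logs/secure.log

[monitor:///var/log/*/secure.log]
# matches /var/log/www1/secure.log but NOT /var/log/www/logs/secure.log
```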
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
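Data integrity control is enabled per index in indexes.conf. A sketch, with a hypothetical index name; verification afterward uses the splunk check-integrity CLI command:

```ini
# indexes.conf -- compute and store per-bucket hashes in each bucket's rawdata directory
[secure_audit]
enableDataIntegrityControl = true
```

Verification would then be run as, for example, splunk check-integrity -index secure_audit.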
Which Splunk forwarder has a built-in license?
Answer : C
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).

Input phase
  inputs.conf
  props.conf
    CHARSET
    NO_BINARY_CHECK
    CHECK_METHOD
    CHECK_FOR_HEADER (deprecated)
    PREFIX_SOURCETYPE
    sourcetype
  wmi.conf
  regmon-filters.conf

Structured parsing phase
  props.conf
    INDEXED_EXTRACTIONS, and all other structured data header extractions

Parsing phase
  props.conf
    LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
    TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
    TRANSFORMS, which includes per-event queue filtering, per-event index assignment, per-event routing
    SEDCMD
    MORE_THAN, LESS_THAN
  transforms.conf
    stanzas referenced by a TRANSFORMS clause in props.conf
    LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH

Reference: Configuration parameters and the data pipeline
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. If native authentication fails and a matching local account exists, Splunk does not attempt an LDAP login; LDAP is tried only when there is no matching local account. This is adapted from the precedence of the Splunk authentication schema.
Within props.conf, which stanzas are valid for data modification? (select all that apply)
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30 days period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30 days period, you are in violation of your license.'
Which of the following is a valid distributed search group?
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
Event processing occurs at which phase of the data pipeline?
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields splunk_server
The splunk server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote. splunk_server=remote index=main 404
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
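Assembled into a complete stanza, the three required path settings shown above look like this (the hatchdb index name comes from the docs example):

```ini
# indexes.conf -- homePath, coldPath, and thawedPath are the required per-index paths
[hatchdb]
homePath   = $SPLUNK_DB/hatchdb/db
coldPath   = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
```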
Within props.conf, which stanzas are valid for data modification? (select all that apply)
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called ''deployment clients''. A deployment client can be a universal forwarder, a non-clustered indexer, or a search head1.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files2.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface2.
The other options are incorrect because:
A . On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed2.
B . On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored2.
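As an illustrative sketch (server class and app names are hypothetical), a serverclass.conf on the deployment server might map clients to an app stored under the default repositoryLocation:

```ini
[global]
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

# Hypothetical server class matching Linux web forwarders by hostname
[serverClass:linux_forwarders]
whitelist.0 = web-*

# Deploys the app in $SPLUNK_HOME/etc/deployment-apps/send_to_indexer
[serverClass:linux_forwarders:app:send_to_indexer]
restartSplunkd = true
```

The subdirectory name under repositoryLocation (here send_to_indexer) is what appears as the app name in the forwarder management interface.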
Which Splunk forwarder has a built-in license?
Answer : C
Using SEDCMD in props.conf allows raw data to be modified. With the given event below, which option will mask the first three digits of the AcctID field, resulting in the output: [22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Event:
[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata
Scrolling down to the section titled 'Define the sed script in props.conf' shows an example with the correct syntax, which confirms that the capture-group reference \1 immediately precedes the /g flag.
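The effect of such a SEDCMD can be sketched outside Splunk with an equivalent regex substitution in Python; the original AcctID value (1235309) and the exact sed pattern are assumptions for illustration:

```python
import re

# Hypothetical raw event before masking; assumes AcctID was 1235309
event = "[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=1235309"

# Equivalent of a SEDCMD like s/AcctID=\d{3}(\d{4})/AcctID=xxx\1/g:
# match three digits, capture the last four, and replace the three with xxx
masked = re.sub(r"AcctID=\d{3}(\d{4})", r"AcctID=xxx\1", event)
print(masked)  # [22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
```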
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
The following stanzas in inputs.conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment is associated with a unique GUID (Globally Unique Identifier).
GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
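For instance, event boundaries for a hypothetical sourcetype whose events begin with an ISO date could be fixed at parse time in props.conf (stanza name and regex are illustrative):

```ini
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```

Because these settings are applied during parsing, this is the last phase where they can take effect; after indexing, event boundaries are fixed.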
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation1, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward. The Upload option lets you upload a file or archive of files for indexing. When you choose the Upload option, Splunk Web opens the upload process page. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
How can native authentication be disabled in Splunk?
Answer : B
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
Which of the following enables compression for universal forwarders in outputs.conf?
A)
B)
C)
D)
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, //var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
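The difference can be illustrated with two hypothetical monitor stanzas in inputs.conf:

```ini
# ... recurses: matches secure.log under /var/log at any subfolder depth
[monitor:///var/log/.../secure.log]

# * stays within one path segment: matches /var/log/www1/secure.log and
# /var/log/www2/secure.log, but not files in deeper subfolders
[monitor:///var/log/www*/secure.log]
```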
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
This sed expression replaces the entire password=<value> token with ####REDACTED####. The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file is used to define transformations that can be applied to fields or events, such as lookups, evaluations, or replacements. However, these transformations are applied after indexing, not before.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
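The behavior of that sed expression can be sketched with an equivalent Python substitution (the sample log line is hypothetical; the character class is simplified to [^,\s]):

```python
import re

# Hypothetical raw event containing a plain-text password
raw = "Oct 22 15:50:21 host app: user=alice password=hunter2, action=login"

# Equivalent of SEDCMD s/password=([^,\s]+)/####REDACTED####/g:
# the entire "password=<value>" token is replaced, not just the value
redacted = re.sub(r"password=([^,\s]+)", "####REDACTED####", raw)
print(redacted)  # Oct 22 15:50:21 host app: user=alice ####REDACTED####, action=login
```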
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
The other options are incorrect because:
A . Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B . The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C . The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
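For reference, a props.conf TZ override (rule 2 above) for a hypothetical host pattern would look like:

```ini
[host::nyc-web-*]
TZ = America/New_York
```

In the scenario in this question no such stanza exists, so the forwarder-provided time zone applies.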
When using a directory monitor input, a specific source type can be selectively overridden using which configuration file?
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following six steps:
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
Which setting allows the configuration of Splunk to allow events to span over more than one line?
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
In inputs.conf, which stanza would mean Splunk was only reading one local file?
Which of the following is a valid distributed search group?
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities form the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
When using license pools, volume allocations apply to which Splunk components?
To set up a network input in Splunk, what needs to be specified?
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
Which Splunk component does a search head primarily communicate with?
Answer : A
Which optional configuration setting in inputs .conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. The groups present in defaultGroup in [tcpout] stanza in the outputs.conf file.
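A sketch with hypothetical group and server addresses — the input names the tcpout group, and outputs.conf defines it:

```ini
# inputs.conf on the forwarder
[monitor:///var/log/secure]
_TCP_ROUTING = security_indexers

# outputs.conf on the forwarder
[tcpout:security_indexers]
server = 10.0.0.5:9997,10.0.0.6:9997
```

Data from this monitor input is sent only to the indexers listed in the security_indexers group.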
What are the minimum required settings when creating a network input in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Inputsconf
[tcp://<remote server>:<port>]
* Configures the input to listen on a specific TCP network port.
* If a <remote server> makes a connection to this instance, the input uses this stanza to configure itself.
* If you do not specify <remote server>, this stanza matches all connections on the specified port.
* Generates events with source set to 'tcp:<port>'.
* If you do not specify a sourcetype, generates events with sourcetype set to 'tcp-raw'.
Which layers are involved in Splunk configuration file layering? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user: Global. Activities like indexing take place in a global context. They are independent of any app or user. For example, configuration files that determine monitoring or indexing behavior occur outside of the app and user context and are global in nature. App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital to search-time processing, where certain knowledge objects or actions might be valid only for specific users in specific apps.
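The layering can be sketched as a simple merge in which higher-precedence directories override lower ones setting by setting (directory order shown is for the global context; the setting values are illustrative, not real defaults):

```python
# Lowest to highest precedence for the global context:
# system/default -> app default directories -> app local directories -> system/local
layers = [
    ("system/default", {"maxHotBuckets": "3", "compressRawdata": "true"}),
    ("app default",    {"maxHotBuckets": "10"}),
    ("system/local",   {"compressRawdata": "false"}),
]

merged = {}
for _directory, settings in layers:
    merged.update(settings)  # later (higher-precedence) layers win per setting

print(merged)  # {'maxHotBuckets': '10', 'compressRawdata': 'false'}
```

This per-setting merge is the same behavior that btool reports when it lists the effective configuration.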
Immediately after installation, what will a Universal Forwarder do first?
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the [tcpout] stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled 'How the cluster handles concurrent search quotas': 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
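A hypothetical limits.conf fragment with the settings named above:

```ini
[search]
max_searches_per_cpu = 1
base_max_searches = 6
```

The concurrent-search ceiling is derived roughly as max_searches_per_cpu x number_of_cpus + base_max_searches, which is why adding CPU cores to the search head raises the number of simultaneous searches.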
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic source type assignment
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL:
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Which of the following accurately describes HTTP Event Collector indexer acknowledgement?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/AboutHECIDXAck
- Section: About channels and sending data
Sending events to HEC with indexer acknowledgment active is similar to sending them with the setting off. There is one crucial difference: when you have indexer acknowledgment turned on, you must specify a channel when you send events. The concept of a channel was introduced in HEC primarily to prevent a fast client from impeding the performance of a slow client. When you assign one channel per client, because channels are treated equally on Splunk Enterprise, one client can't affect another. You must include a matching channel identifier both when sending data to HEC in an HTTP request and when requesting acknowledgment that events contained in the request have been indexed. If you don't, you will receive the error message, 'Data channel is missing.' Each request that includes a token for which indexer acknowledgment has been enabled must include a channel identifier, as shown in the following example cURL statement, where <data> represents the event data portion of the request
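A sketch of building such a request in Python (the endpoint host and token are placeholders, not real values); the channel identifier goes in the X-Splunk-Request-Channel header:

```python
import json
import urllib.request
import uuid

channel = str(uuid.uuid4())  # any GUID works as the channel identifier

req = urllib.request.Request(
    "https://splunk.example.com:8088/services/collector/event",  # placeholder host
    data=json.dumps({"event": "hello, HEC"}).encode("utf-8"),
    headers={
        "Authorization": "Splunk 00000000-0000-0000-0000-000000000000",  # placeholder token
        "X-Splunk-Request-Channel": channel,
    },
)
# The same channel value is later used when polling the ack endpoint
# to confirm that the events in this request were indexed.
```

Omitting the channel header on a token with acknowledgment enabled produces the 'Data channel is missing' error described above.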
Where are license files stored?
Answer : C
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
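A minimal props.conf sketch for this scenario (the sourcetype name is hypothetical):

```ini
# props.conf -- sketch for a single-line sourcetype (name is an assumption)
[my_single_line_sourcetype]
# Disabling line merging avoids unnecessary merge processing
# when every event is exactly one line.
SHOULD_LINEMERGE = false
```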
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
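A manually created deploymentclient.conf might look like the following sketch (the host name and management port are hypothetical); note that the forwarder must be restarted before the setting takes effect:

```ini
# deploymentclient.conf -- sketch; targetUri host and port are assumptions
[deployment-client]

[target-broker:deploymentServer]
# Management (not data) port of the deployment server
targetUri = deployserver.example.com:8089
```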
To set up a Network input in Splunk, what needs to be specified?
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in the $SPLUNK_HOME/etc/deployment-apps location on the deployment server.
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native (local) authentication first. If a local account with that username exists, there is no attempt to use LDAP to log in, even if local authentication fails; LDAP is only tried when no matching local account exists. This is adapted from the precedence of the Splunk authentication schemes.
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
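To illustrate the default described above, a scripted input stanza might look like this sketch (the script path, interval, and sourcetype are hypothetical):

```ini
# inputs.conf -- hypothetical scripted input; path and interval are assumptions
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect.sh]
interval = 60
sourcetype = my_script_output
# With no explicit 'source' setting, events default to the input path,
# i.e. source = $SPLUNK_HOME/etc/apps/my_app/bin/collect.sh
```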
What are the values for host and index for [stanza1] used by Splunk during index time, given the following configuration files?
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Which layers are involved in Splunk configuration file layering? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user: Global. Activities like indexing take place in a global context. They are independent of any app or user. For example, configuration files that determine monitoring or indexing behavior occur outside of the app and user context and are global in nature. App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital to search-time processing, where certain knowledge objects or actions might be valid only for specific users in specific apps.
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, when adding native users, a username and password are required.
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
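A hedged sketch of a SEDCMD masking rule in props.conf (the sourcetype, class name, and pattern are hypothetical):

```ini
# props.conf -- hypothetical masking example
[my_sourcetype]
# Replace anything shaped like an SSN with masked digits before the
# event is indexed or forwarded; sed-style substitution syntax.
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/xxx-xx-xxxx/g
```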
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
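A sketch of such an outputs.conf on the search head, following the documented procedure (group name and indexer addresses are hypothetical):

```ini
# outputs.conf on the search head -- names and addresses are assumptions
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997

# Keep the search head from also indexing the data locally
[indexAndForward]
index = false
```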
Which valid bucket types are searchable? (select all that apply)
Answer : A, B, C
Hot, warm, cold, and thawed bucket types are searchable. Frozen is not searchable because at that state the data is either deleted or archived.
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
In inputs.conf, which stanza would mean Splunk was only reading one local file?
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
How often does Splunk recheck the LDAP server?
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
The universal forwarder has which capabilities when sending data? (select all that apply)
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
Which setting allows the configuration of Splunk to allow events to span over more than one line?
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
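A fuller outputs.conf sketch for the same setup (the target group name is hypothetical); listing multiple servers in one group enables automatic load balancing by default:

```ini
# outputs.conf -- sketch; group name is an assumption
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
# With multiple servers listed, the forwarder switches targets
# periodically, distributing data across all indexers in the list.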
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
When would the following command be used?
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
Which of the following accurately describes HTTP Event Collector indexer acknowledgement?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/AboutHECIDXAck
- Section: About channels and sending data
Sending events to HEC with indexer acknowledgment active is similar to sending them with the setting off. There is one crucial difference: when you have indexer acknowledgment turned on, you must specify a channel when you send events. The concept of a channel was introduced in HEC primarily to prevent a fast client from impeding the performance of a slow client. When you assign one channel per client, because channels are treated equally on Splunk Enterprise, one client can't affect another. You must include a matching channel identifier both when sending data to HEC in an HTTP request and when requesting acknowledgment that events contained in the request have been indexed. If you don't, you will receive the error message, 'Data channel is missing.' Each request that includes a token for which indexer acknowledgment has been enabled must include a channel identifier, as shown in the following example cURL statement, where <data> represents the event data portion of the request
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
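Per the Splunk documentation, when useACK is enabled the forwarder's wait queue is sized at three times maxQueueSize. A sketch of the settings the question refers to (the group name and server address are hypothetical):

```ini
# outputs.conf -- sketch; group name and address are assumptions
[tcpout:my_indexers]
server = 10.1.1.1:9997
useACK = true
maxQueueSize = 7MB
# With useACK enabled, the wait queue is 3x maxQueueSize,
# i.e. 21MB in this example.
```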
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to section titled, How the cluster handles concurrent search quotas, 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
Which of the following types of data count against the license daily quota?
Immediately after installation, what will a Universal Forwarder do first?
Which of the following are supported options when configuring optional network inputs?
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
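As a sketch of the last use case above, a props.conf/transforms.conf pair that routes unwanted events to the nullQueue (the sourcetype, transform name, and pattern are hypothetical):

```ini
# props.conf -- hypothetical sourcetype
[my_sourcetype]
TRANSFORMS-null_debug = drop_debug_events

# transforms.conf -- send matching events to the nullQueue so they
# are never indexed
[drop_debug_events]
REGEX = ^\s*DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```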
What are the minimum required settings when creating a network input in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Inputsconf
[tcp://<remote server>:<port>]
* Configures the input to listen on a specific TCP network port.
* If a <remote server> makes a connection to this instance, the input uses this stanza to configure itself.
* If you do not specify <remote server>, this stanza matches all connections on the specified port.
* Generates events with source set to 'tcp:<port>'.
* If you do not specify a sourcetype, generates events with sourcetype set to 'tcp-raw'.
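A minimal network input following the stanza spec above (the port and sourcetype are hypothetical; only the stanza header itself is strictly required):

```ini
# inputs.conf -- hypothetical TCP input listening on port 514
[tcp://514]
sourcetype = syslog
connection_host = dns
```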
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
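A hedged outputs.conf sketch of the relevant attribute (the group name and addresses are hypothetical; 30 seconds is the documented default interval):

```ini
# outputs.conf -- sketch; group name and addresses are assumptions
[tcpout:my_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
# Switch to a different indexer every 30 seconds
autoLBFrequency = 30
```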
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
Which of the following applies only to Splunk index data integrity check?
Answer : C
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
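A props.conf sketch overriding the default described above (the sourcetype and alternate delimiter are hypothetical; note the setting requires a capture group):

```ini
# props.conf -- hypothetical override of the default line breaker
[my_sourcetype]
# Default is ([\r\n]+); here events may also be separated by a
# literal '~~~' record delimiter
LINE_BREAKER = ([\r\n]+|~~~)
```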
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
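The parameter in question is coldPath, which controls where cold buckets reside. A sketch of the relevant indexes.conf stanza (the index name and NAS mount point are hypothetical):

```ini
# indexes.conf -- sketch; index name and mount point are assumptions
[my_index]
homePath = $SPLUNK_DB/my_index/db            ; hot/warm buckets on SSD
coldPath = /mnt/nas/splunk/my_index/colddb   ; older buckets on NAS
thawedPath = $SPLUNK_DB/my_index/thaweddb
```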
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward The Upload option lets you upload a file or archive of files for indexing. When you choose Upload option, Splunk Web opens the upload process page. Monitor. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Event processing occurs at which phase of the data pipeline?
Which Splunk forwarder has a built-in license?
Answer : C
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports:
-- The search head dispatches searches to the peers
-- Peers run searches in parallel and return their portion of results
-- The search head consolidates the individual results and prepares reports
Which of the following monitor inputs stanza headers would match all of the following files?
/var/log/www1/secure.log
/var/log/www/secure.l
/var/log/www/logs/secure.logs
/var/log/www2/secure.log
Answer : C
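The exam's actual option text is not shown here, but one illustrative monitor stanza that would match all four paths uses the '...' wildcard, which recurses through any number of directory levels, together with '*', which matches within a single path segment:

```ini
# inputs.conf -- illustrative stanza only; '...' recurses through
# subdirectories, '*' matches within one path segment
[monitor:///var/log/.../secure.*]
```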
Which artifact is required in the request header when creating an HTTP event?
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
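Putting the three required path settings listed above into a complete stanza, following the docs' example (the index name hatchdb comes from that example):

```ini
# indexes.conf -- example stanza from the docs; index name is illustrative
[hatchdb]
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
```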
The following stanzas in inputs.conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': install the app or add-on that contains the inputs you want.
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory, $SPLUNK_HOME/etc/system/local.'
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
The other options are incorrect because:
A . Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B . The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C . The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
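The four-rule precedence above can be sketched as a small function. The argument names are hypothetical stand-ins for the four sources of timezone information; this is an illustration of the rule order, not Splunk's implementation.

```python
# Sketch of the timezone-assignment precedence described above; arguments
# are hypothetical stand-ins for the four sources of timezone information.
def resolve_timezone(raw_tz=None, props_tz=None, forwarder_tz=None, indexer_tz="UTC"):
    if raw_tz:         # 1. timezone in the raw event data (e.g. PST, -0800)
        return raw_tz
    if props_tz:       # 2. TZ attribute from a matching props.conf stanza
        return props_tz
    if forwarder_tz:   # 3. timezone reported by a 6.0+ forwarder
        return forwarder_tz
    return indexer_tz  # 4. timezone of the indexing host, as a last resort

# The scenario in the question: no raw TZ, no props.conf TZ, so the
# forwarder-provided timezone wins.
print(resolve_timezone(forwarder_tz="US/Eastern"))  # US/Eastern
```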
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
Reference: Splunk Docs, 'About securing your Splunk configuration with SSL'
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
Which file will be matched for the following monitor stanza in inputs.conf?
[monitor:///var/log/*/bar/*.txt]
Answer : C
The correct answer is C. /var/log/host_460352847/bar/file/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new data. The monitor stanza has the following syntax1:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards (*) and regular expressions. The wildcards match any number of characters, including none, while the regular expressions match patterns of characters. The input path is case-sensitive and must be enclosed in double quotes if it contains spaces1.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor any file with the .txt extension that is located in a subdirectory named bar under the /var/log directory. The subdirectory bar can be at any level under the /var/log directory, and the * wildcard will match any characters before or after the bar and .txt parts1.
Therefore, the file /var/log/host_460352847/bar/file/foo.txt will be matched by the monitor stanza, as it meets the criteria. The other files will not be matched, because:
A. /var/log/host_460352847/temp/bar/file/csv/foo.csv has a .csv extension, not a .txt extension.
B. /var/log/host_460352847/bar/foo.txt sits directly in the bar directory, not in a subdirectory beneath it.
D. /var/log/host_460352847/temp/bar/file/foo.txt is located in a subdirectory named file under the bar directory, not directly in the bar directory.
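The matching logic can be sketched by translating the monitor path into a regular expression. This simplified sketch treats * as matching any characters, as the explanation above does; note that Splunk itself also offers ... as the explicitly recursive wildcard.

```python
import re

# Simplified sketch: translate a monitor path into a regex, treating "*"
# as "match any characters" as the explanation above does.
def monitor_matches(pattern, path):
    regex = "^" + re.escape(pattern).replace(r"\*", ".*") + "$"
    return re.match(regex, path) is not None

pattern = "/var/log/*/bar/*.txt"
print(monitor_matches(pattern, "/var/log/host_460352847/bar/file/foo.txt"))       # True
print(monitor_matches(pattern, "/var/log/host_460352847/temp/bar/file/csv/foo.csv"))  # False: wrong extension
```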
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
When running a real-time search, search results are pulled from which Splunk component?
Answer : D
Using the Splunk reference URL https://docs.splunk.com/Splexicon:Searchpeer
'search peer is a splunk platform instance that responds to search requests from a search head. The term 'search peer' is usually synonymous with the indexer role in a distributed search topology. However, other instance types also have access to indexed data, particularly internal diagnostic data, and thus function as search peers when they respond to search requests for that data.'
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING - Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
Which Splunk component requires a Forwarder license?
Answer : B
Which of the following is valid distribute search group?
A)
B)
C)
D)
Answer : D
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
Which Splunk configuration file is used to enable data integrity checking?
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields splunk_server
The splunk server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote. splunk_server=remote index=main 404
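Checking distribution by splunk_server amounts to counting events per indexer, which can be sketched with a counter. The event dicts and server names below are hypothetical.

```python
from collections import Counter

# Sketch of checking event distribution across indexers, assuming each
# event carries a splunk_server default field as described above.
events = [
    {"splunk_server": "idx1"}, {"splunk_server": "idx2"},
    {"splunk_server": "idx1"}, {"splunk_server": "idx3"},
    {"splunk_server": "idx1"},
]
distribution = Counter(e["splunk_server"] for e in events)
print(distribution)  # an uneven spread, like this one, flags a balancing problem
```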
Event processing occurs at which phase of the data pipeline?
Which of the following statements apply to directory inputs? {select all that apply)
Answer : A, C
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file names for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
The universal forwarder has which capabilities when sending data? (select all that apply)
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
What are the required stanza attributes when configuring the transforms. conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
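How REGEX, FORMAT, and DEST_KEY interact can be sketched in a few lines. The stanza values below are hypothetical; real transforms live in transforms.conf and are applied by Splunk at index time.

```python
import re

# Sketch of an index-time transform, assuming hypothetical REGEX/FORMAT/
# DEST_KEY values; real transforms are configured in transforms.conf.
REGEX = r"user=(\w+)"
FORMAT = r"account::\1"
DEST_KEY = "_meta"

def apply_transform(event):
    keys = {}
    m = re.search(REGEX, event)
    if m:
        # Store the expanded FORMAT result under DEST_KEY, per the REGEX match
        keys[DEST_KEY] = m.expand(FORMAT)
    return keys

print(apply_transform("login ok user=alice src=10.0.0.1"))
# {'_meta': 'account::alice'}
```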
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: the search head dispatches searches to the peers; peers run searches in parallel and return their portion of results; the search head consolidates the individual results and prepares reports.
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, D = 30 will pick up the whole timestamp correctly.
Within props. conf, which stanzas are valid for data modification? (select all that apply)
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
For single line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
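To illustrate the alternative to line merging, here is a sketch that breaks a raw stream into events on a hypothetical date-anchored LINE_BREAKER regex, as one would configure alongside SHOULD_LINEMERGE = false.

```python
import re

# Sketch of event breaking: with SHOULD_LINEMERGE = false, each
# LINE_BREAKER match starts a new event. The regex is illustrative.
LINE_BREAKER = r"([\r\n]+)(?=\d{4}-\d{2}-\d{2})"  # break only before a date

raw = ("2023-09-15 10:55:01 event one\n"
       "2023-09-15 10:55:02 event two\n"
       "    continuation of event two")

events = re.split(LINE_BREAKER, raw)
events = [e for e in events if e.strip()]  # drop the captured newlines
print(len(events))  # 2: the indented line stays attached to event two
```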
Which of the following accurately describes HTTP Event Collector indexer acknowledgement?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/AboutHECIDXAck
- Section: About channels and sending data
Sending events to HEC with indexer acknowledgment active is similar to sending them with the setting off. There is one crucial difference: when you have indexer acknowledgment turned on, you must specify a channel when you send events. The concept of a channel was introduced in HEC primarily to prevent a fast client from impeding the performance of a slow client. When you assign one channel per client, because channels are treated equally on Splunk Enterprise, one client can't affect another. You must include a matching channel identifier both when sending data to HEC in an HTTP request and when requesting acknowledgment that events contained in the request have been indexed. If you don't, you will receive the error message, 'Data channel is missing.' Each request that includes a token for which indexer acknowledgment has been enabled must include a channel identifier, as shown in the following example cURL statement, where <data> represents the event data portion of the request
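The channel requirement described above can be sketched by building the request headers by hand. The token value and event payload are placeholders, and no network call is made here; the channel identifier travels in the X-Splunk-Request-Channel header.

```python
import json
import uuid

# Sketch of a HEC request with indexer acknowledgment enabled: the channel
# identifier is sent in the X-Splunk-Request-Channel header. The token is
# a placeholder and nothing is actually sent over the network.
channel = str(uuid.uuid4())
headers = {
    "Authorization": "Splunk 00000000-0000-0000-0000-000000000000",
    "X-Splunk-Request-Channel": channel,  # required when acks are on
}
body = json.dumps({"event": "hello from HEC", "sourcetype": "demo"})
print("X-Splunk-Request-Channel" in headers)  # True
```

Requesting acknowledgment for the indexed events must reuse the same channel identifier, otherwise HEC responds with the 'Data channel is missing' error quoted above.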
Which of the following applies only to Splunk index data integrity check?
Answer : C
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
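The input-phase behavior quoted above can be sketched as chunking a byte stream into 64K blocks and tagging each block with metadata. The metadata keys shown are illustrative.

```python
# Sketch of the input phase described above: the raw stream is cut into
# 64K blocks and each block is annotated with metadata keys. The keys
# shown are illustrative.
BLOCK_SIZE = 64 * 1024

def to_blocks(stream: bytes, host, source, sourcetype):
    blocks = []
    for i in range(0, len(stream), BLOCK_SIZE):
        blocks.append({
            "data": stream[i:i + BLOCK_SIZE],
            "host": host, "source": source, "sourcetype": sourcetype,
        })
    return blocks

blocks = to_blocks(b"x" * (150 * 1024), "web01", "/var/log/app.log", "syslog")
print(len(blocks))  # 150 KB -> 3 blocks (64 + 64 + 22 KB)
```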
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
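The masking itself can be sketched as a regex substitution, similar in spirit to a SEDCMD-style replacement configured in props.conf. The card-number pattern below is hypothetical.

```python
import re

# Sketch of masking raw text before indexing, similar in spirit to a
# SEDCMD-style replacement in props.conf. The pattern is hypothetical.
def mask_card_numbers(event):
    # Replace 16-digit card numbers, keeping only the last four digits
    return re.sub(r"\b\d{12}(\d{4})\b", r"XXXX-XXXX-XXXX-\1", event)

print(mask_card_numbers("payment card=4111111111111111 ok"))
# payment card=XXXX-XXXX-XXXX-1111 ok
```

Whichever component parses the data (the heavy forwarder for source A, the indexer for source B) is where this substitution must run, since the universal forwarder does not parse.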
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
See the section titled 'Bypass automatic source type assignment'.
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for indexing based on their age. It does not prevent indexing of old data already in the file.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
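The followTail behavior can be sketched as seeking to the end of the file when monitoring starts and reading only what is appended afterwards. The in-memory file below stands in for a real log file.

```python
import io

# Sketch of followTail behavior: seek to the end of the file when
# monitoring starts, then return only content appended afterwards.
def start_tail(f):
    f.seek(0, io.SEEK_END)
    pos = f.tell()            # remember where monitoring started
    def read_new():
        nonlocal pos
        f.seek(pos)
        data = f.read()       # only data written after the last read
        pos = f.tell()
        return data
    return read_new

f = io.StringIO("old line 1\nold line 2\n")  # pre-existing data is skipped
read_new = start_tail(f)
f.write("new line\n")                        # appended after monitoring started
print(read_new())  # only the new line is returned
```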
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on Windows machines to monitor remote Windows data.'
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file names for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, configure the tcpout stanza in outputs.conf with a server attribute equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
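In practice the server list usually lives in a named target group; a minimal outputs.conf sketch (the group name and addresses are illustrative):

```ini
[tcpout]
defaultGroup = my_indexers

# Automatic load balancing across the listed indexers
[tcpout:my_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
```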
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
Immediately after installation, what will a Universal Forwarder do first?
Which artifact is required in the request header when creating an HTTP event?
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
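A request to HEC is just an HTTP POST with a token header and a JSON body; a minimal Python sketch of how such a request could be assembled (the token value and event fields are invented placeholders):

```python
import json

# Hypothetical HEC token -- replace with a real token generated in Splunk.
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(event, index="main", sourcetype="_json"):
    """Return the headers and JSON body for a single HEC event POST."""
    headers = {
        "Authorization": "Splunk " + HEC_TOKEN,  # token-based authentication
        "Content-Type": "application/json",
    }
    body = json.dumps({"event": event, "index": index, "sourcetype": sourcetype})
    return headers, body

headers, body = build_hec_request({"action": "login", "user": "alice"})
```

The body would then be POSTed to the collector endpoint; no forwarder is involved.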
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward. The Upload option lets you upload a file or archive of files for indexing. When you choose the Upload option, Splunk Web opens the upload process page. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled "How the cluster handles concurrent search quotas": 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
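As a sketch, that setting could be raised in limits.conf on the search head (the value shown is illustrative, not a recommendation):

```ini
[search]
# Multiplier applied per CPU core when deriving the historical search concurrency limit
max_searches_per_cpu = 2
```

This is why adding CPU cores to the search head raises the number of simultaneous searches it allows.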
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/password=####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations such as lookups, field extractions, and index-time rewrites, but SEDCMD is a props.conf setting and cannot be placed in transforms.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
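The sed-style expression can be sanity-checked outside Splunk with an equivalent Python substitution (the sample event line is invented for illustration):

```python
import re

# Equivalent of the props.conf SEDCMD above, checked with Python's re module.
raw = "2023-01-01 user=bob password=hunter2, action=login"
redacted = re.sub(r"password=([^,\s]+)", "password=####REDACTED####", raw)
print(redacted)  # the password value is replaced, the rest of the event is untouched
```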
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
When using license pools, volume allocations apply to which Splunk components?
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, configure the tcpout stanza in outputs.conf with a server attribute equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled "Configuration file directories" states: 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
Which of the following accurately describes HTTP Event Collector indexer acknowledgement?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/AboutHECIDXAck
- Section: About channels and sending data
Sending events to HEC with indexer acknowledgment active is similar to sending them with the setting off. There is one crucial difference: when you have indexer acknowledgment turned on, you must specify a channel when you send events. The concept of a channel was introduced in HEC primarily to prevent a fast client from impeding the performance of a slow client. When you assign one channel per client, because channels are treated equally on Splunk Enterprise, one client can't affect another. You must include a matching channel identifier both when sending data to HEC in an HTTP request and when requesting acknowledgment that events contained in the request have been indexed. If you don't, you will receive the error message, 'Data channel is missing.' Each request that includes a token for which indexer acknowledgment has been enabled must include a channel identifier, as shown in the following example cURL statement, where <data> represents the event data portion of the request
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
In inputs.conf, which stanza would mean Splunk was only reading one local file?
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
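Putting the required attributes together, a minimal indexes.conf stanza might look like this (the hatchdb name comes from the example paths above):

```ini
[hatchdb]
homePath   = $SPLUNK_DB/hatchdb/db
coldPath   = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
```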
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
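As a sketch, the two wildcards would be used in inputs.conf monitor stanzas like this (paths are illustrative):

```ini
# Recursive: matches secure.log at any depth under /var/log
[monitor:///var/log/.../secure.log]

# Single path segment: matches /var/log/www1/secure.log,
# but not files in deeper subfolders
[monitor:///var/log/*/secure.log]
```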
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
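For example, overriding the host field from event content pairs a props.conf stanza with a transforms.conf stanza; a minimal sketch (the stanza names and regex are illustrative):

```ini
# props.conf -- attach the transform to a sourcetype
[my_sourcetype]
TRANSFORMS-hostoverride = override_host

# transforms.conf -- rewrite the host metadata from a value inside the event
[override_host]
REGEX = host=(\w+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```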
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields
The splunk_server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote: splunk_server=remote index=main 404
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
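A common use of these three attributes is routing unwanted events to the null queue so they are never indexed; a minimal sketch (the sourcetype name and regex are illustrative):

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-null = drop_debug

# transforms.conf -- discard events containing [DEBUG]
[drop_debug]
REGEX = \[DEBUG\]
DEST_KEY = queue
FORMAT = nullQueue
```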
Which setting allows the configuration of Splunk to allow events to span over more than one line?
What are the values for host and index for [stanza1] used by Splunk during index time, given the following configuration files?
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment channel is identified by a unique GUID (Globally Unique Identifier).
The GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
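Any GUID works as a channel value; a short Python sketch of generating one per client (the token is a placeholder, and the X-Splunk-Request-Channel header name follows the HEC docs):

```python
import uuid

# One channel GUID per client keeps a fast client from affecting a slow one.
channel = str(uuid.uuid4())
headers = {
    "Authorization": "Splunk 00000000-0000-0000-0000-000000000000",  # placeholder token
    "X-Splunk-Request-Channel": channel,
}
```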
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and its tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
When would the following command be used?
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Which layers are involved in Splunk configuration file layering? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user: Global. Activities like indexing take place in a global context. They are independent of any app or user. For example, configuration files that determine monitoring or indexing behavior occur outside of the app and user context and are global in nature. App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital to search-time processing, where certain knowledge objects or actions might be valid only for specific users in specific apps.
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
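As a sketch, the warm-to-cold threshold and the cold location are set per index in indexes.conf (the values and path are illustrative):

```ini
[main]
# The oldest warm bucket rolls to cold once this many warm buckets exist
maxWarmDBCount = 300
# Cold buckets can live on cheaper storage
coldPath = /cheap_storage/defaultdb/colddb
```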
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
Which of the following statements apply to directory inputs? {select all that apply)
Answer : A, C
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/password=####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations such as lookups, field extractions, and index-time rewrites, but SEDCMD is a props.conf setting and cannot be placed in transforms.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
Which Splunk component requires a Forwarder license?
Answer : B
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL from @giubal:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
Which of the following applies only to Splunk index data integrity check?
Answer : C
How often does Splunk recheck the LDAP server?
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
The universal forwarder has which capabilities when sending data? (select all that apply)
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
Which of the methods listed below supports multi-factor authentication?
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled "How to configure forwarder inputs" and the subsection "Here are the main ways that you can configure data inputs on a forwarder": Install the app or add-on that contains the inputs you want.
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
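A typical single-line sourcetype therefore disables line merging and breaks events on newlines; a minimal props.conf sketch (the sourcetype name is illustrative):

```ini
[my_single_line_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```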
Which Splunk configuration file is used to enable data integrity checking?
Which Splunk component requires a Forwarder license?
Answer : B
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
Within props.conf, which stanzas are valid for data modification? (select all that apply)
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
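The interval is the autoLBFrequency attribute in the tcpout stanza of outputs.conf; a minimal sketch (the group name, addresses, and interval are illustrative):

```ini
[tcpout:my_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
# Seconds the forwarder stays on one indexer before switching to the next
autoLBFrequency = 30
```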
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in the $SPLUNK_HOME/etc/deployment-apps location on the deployment server.
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports:
-- The search head dispatches searches to the peers
-- Peers run searches in parallel and return their portion of results
-- The search head consolidates the individual results and prepares reports
User role inheritance allows what to be inherited from the parent role? (select all that apply)
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
1. Use the time zone specified in raw event data (for example, PST, -0800), if present.
2. Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
3. If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
4. Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
The other options are incorrect because:
A. Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
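A minimal inputs.conf sketch contrasting the two wildcards (the paths are hypothetical):

```ini
# Matches secure.log in /var/log/www, /var/log/www1, /var/log/www2, ...
# but does not descend into deeper subdirectories
[monitor:///var/log/www*/secure.log]

# Matches secure.log in subdirectories of /var/log at any depth
[monitor:///var/log/.../secure.log]
```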
Which Splunk component does a search head primarily communicate with?
Answer : A
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for monitoring based on their modification time. It does not prevent indexing of old data already in the file.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
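A minimal sketch, assuming a hypothetical log path and sourcetype:

```ini
# inputs.conf -- start reading at the end of the file, skipping the
# pre-existing data; only lines written after Splunk starts are indexed
[monitor:///var/log/huge_legacy.log]
followTail = 1
sourcetype = legacy_app
```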
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
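The substitution can be illustrated with Python's re.sub, which applies the same regex semantics (the sample event below is invented for illustration):

```python
import re

# Hypothetical raw event containing a plain-text password
event = "[22/Oct/2018:15:50:21] user=alice password=s3cr3t, action=login"

# Python equivalent of the sed-style rule:
#   SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
redacted = re.sub(r"password=([^,|\s]+)", "####REDACTED####", event)
print(redacted)
# -> [22/Oct/2018:15:50:21] user=alice ####REDACTED####, action=login
```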
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it places the sed expression in the transforms.conf file instead of the props.conf file. The transforms.conf file defines named transformations (such as lookups or regex-based rewrites) that must be referenced from props.conf; a SEDCMD rule belongs directly in props.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
[hatchdb]
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server=10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
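Cold buckets are typically placed via the coldPath setting in indexes.conf. A sketch, assuming a hypothetical index name and NAS mount point:

```ini
# indexes.conf -- hot/warm buckets stay on local SSD storage;
# cold buckets roll to the slower NAS mount
[myindex]
homePath   = $SPLUNK_DB/myindex/db
coldPath   = /mnt/nas/splunk/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb
```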
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
When indexing a data source, which fields are considered metadata?
Answer : D
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components needs access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
Using SEDCMD in props.conf allows raw data to be modified. With the given event below, which option will mask the first three digits of the AcctID field, resulting in the output: [22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Event:
[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata
Scrolling down to the section titled 'Define the sed script in props.conf' shows the correct syntax of an example, which confirms that the backreference \1 immediately precedes the /g flag.
Which of the following enables compression for universal forwarders in outputs.conf?
A)
B)
C)
D)
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
Immediately after installation, what will a Universal Forwarder do first?
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': Install the app or add-on that contains the inputs you want.
Where are license files stored?
Answer : C
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward.
The Upload option lets you upload a file or archive of files for indexing. When you choose the Upload option, Splunk Web opens the upload process page.
The Monitor option, for Splunk Enterprise installations, lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Which Splunk forwarder has a built-in license?
Answer : C
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
Which of the following types of data count against the license daily quota?
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
The CLI command splunk add forward-server indexer:<receiving-port> modifies which configuration file?
Answer : C
The CLI command 'splunk add forward-server indexer:<receiving-port>' is used to define the indexer and the listening port on forwarders. The command creates this kind of entry: '[tcpout-server://<ip address>:
https://docs.splunk.com/Documentation/Forwarder/8.2.2/Forwarder/Configureforwardingwithoutputs.conf
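As a sketch (the indexer hostname is hypothetical), running the command on a forwarder writes a stanza like the following to outputs.conf:

```ini
# Result of: splunk add forward-server idx1.example.com:9997
[tcpout-server://idx1.example.com:9997]
```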
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment is associated with a unique GUID (Globally Unique Identifier).
GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
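A GUID can be generated with any standard UUID library. The sketch below (the token and header dict are placeholders, not a working HEC client) shows the shape of a valid channel value:

```python
import uuid

# Any GUID is an acceptable channel value; uuid4() produces one
channel = str(uuid.uuid4())

# The client supplies the channel in the X-Splunk-Request-Channel header
# (the token below is a placeholder, not a real HEC token)
headers = {
    "Authorization": "Splunk <your-HEC-token>",
    "X-Splunk-Request-Channel": channel,
}
print(channel)  # e.g. 7f2c9a1e-... (36 characters, 4 hyphens)
```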
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
Which artifact is required in the request header when creating an HTTP event?
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called ''deployment clients''. A deployment client can be a universal forwarder, a non-clustered indexer, or a search head1.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files2.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface2.
The other options are incorrect because:
A . On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed2.
B . On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored2.
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward.
The Upload option lets you upload a file or archive of files for indexing. When you choose the Upload option, Splunk Web opens the upload process page.
The Monitor option, for Splunk Enterprise installations, lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information.
LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information.
Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information.
Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
The following stanzas in inputs.conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
See the Splunk documentation topic 'About securing your Splunk configuration with SSL'.
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields
The splunk_server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote: splunk_server=remote index=main 404
Which Splunk forwarder has a built-in license?
Answer : C
Event processing occurs at which phase of the data pipeline?
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on a Windows machine to monitor remote Windows data.'
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and its tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
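A sketch of how the two files work together to mask raw data at index time (the sourcetype and transform names are hypothetical):

```ini
# props.conf -- bind the transform to a sourcetype
[my_sourcetype]
TRANSFORMS-mask = mask_password

# transforms.conf -- rewrite _raw before it is written to disk;
# the capture groups preserve the text around the masked value
[mask_password]
REGEX = ^(.*)password=\S+(.*)$
FORMAT = $1password=####REDACTED####$2
DEST_KEY = _raw
```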
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
When using a directory monitor input, a specific source type can be selectively overridden using which configuration file?
Which Splunk component does a search head primarily communicate with?
Answer : A
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
To set up a Network input in Splunk, what needs to be specified'?
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
The correct curl command to send multiple events through HTTP Event Collector (HEC) is:
curl "https://mysplunkserver.example.com:8088/services/collector" \
-H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" \
-d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'
HEC is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
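A sketch for a single-line sourcetype (the name is hypothetical); disabling line merging lets each line become its own event with no merge pass:

```ini
# props.conf -- each line is one event; no line-merging overhead
[my_single_line_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```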
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not specified, the forwarder uses the groups listed in defaultGroup in the [tcpout] stanza of outputs.conf.
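A sketch pairing the input-side routing with the output group it references (the path, group name, and addresses are hypothetical):

```ini
# inputs.conf -- route only this input to the security indexers
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# outputs.conf -- define the group named above
[tcpout:security_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
```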
When using a directory monitor input, a specific source type can be selectively overridden using which configuration file?
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
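A sketch of parse-time boundary settings for a multi-line sourcetype (the name is hypothetical):

```ini
# props.conf -- merge input lines into one event, starting a new
# event only when a line begins with a recognizable timestamp
[my_multiline_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
```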
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields splunk_server
The splunk server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote. splunk_server=remote index=main 404
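A sketch of the kind of search the administrator could run over the last 24 hours to compare event counts per indexer:

```
index=* earliest=-24h
| stats count by splunk_server
| sort - count
```

A roughly even count across splunk_server values indicates the forwarders are load-balancing well.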
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled 'How the cluster handles concurrent search quotas': 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (ie LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
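To make this concrete, here is a hedged sketch of a transform that drops matching events by routing them to the null queue (the stanza name, sourcetype, and pattern are invented for illustration):

```ini
# transforms.conf
[discard_debug_events]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- reference the transform for a sourcetype
[my:custom:log]
TRANSFORMS-drop_debug = discard_debug_events
```

Events matching the REGEX never reach the index, so they do not count against the license.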
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports:
-- The search head dispatches searches to the peers
-- Peers run searches in parallel and return their portion of results
-- The search head consolidates the individual results and prepares reports
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients. But it resided in the $SPLUNK_HOME/etc/deployment-apps location in the deployment server.
What are the values for host and index for [stanza1] used by Splunk during index time, given the following configuration files?
Which of the following enables compression for universal forwarders in outputs.conf?
A)
B)
C)
D)
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
Which of the following monitor inputs stanza headers would match all of the following files?
/var/log/www1/secure.log
/var/log/www/secure.l
/var/log/www/logs/secure.logs
/var/log/www2/secure.log
Answer : C
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for a Free license, if you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
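To make the distinction concrete, a sketch of two monitor stanzas (the paths are examples):

```ini
# ... recurses through any number of subdirectory levels, so this matches
# /var/log/www1/secure.log as well as /var/log/www/logs/secure.logs
[monitor:///var/log/.../secure.*]

# * matches within a single path segment only, so this matches
# /var/log/www1/access.log but not /var/log/www1/logs/access.log
[monitor:///var/log/*/access.log]
```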
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
Which of the following is a valid distributed search group?
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As with any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
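One way to reset the checkpoint for a single monitored file on the forwarder uses the btprobe tool; the path below is the default fishbucket location and the log file is a placeholder, so verify both on your version before running:

```shell
# Stop the forwarder first
$SPLUNK_HOME/bin/splunk stop

# Reset the fishbucket record for one file so it is re-read from the beginning
$SPLUNK_HOME/bin/splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db \
    --file /var/log/app/secure.log --reset

$SPLUNK_HOME/bin/splunk start
```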
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
Where are license files stored?
Answer : C
User role inheritance allows what to be inherited from the parent role? (select all that apply)
To set up a network input in Splunk, what needs to be specified?
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation1, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it places the SEDCMD in the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations that are referenced from props.conf, such as lookups, field extractions, or event routing; the SEDCMD attribute itself is only valid in props.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
Which of the following are methods for adding inputs in Splunk? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Configureyourinputs
Add your data to Splunk Enterprise. With Splunk Enterprise, you can add data using Splunk Web or Splunk Apps. In addition to these methods, you can also use the Splunk Command Line Interface (CLI) or the inputs.conf configuration file. When you specify your inputs with Splunk Web or the CLI, the details are saved in a configuration file on Splunk Enterprise indexer and heavy forwarder instances.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
Note that for search head cluster members specifically, it is the deployer, not the deployment server, that distributes apps and certain other configuration updates.
Which of the following is valid distribute search group?
A)
B)
C)
D)
Answer : D
The universal forwarder has which capabilities when sending data? (select all that apply)
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which of the following applies only to Splunk index data integrity check?
Answer : C
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
Which of the following is a valid distributed search group?
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
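As one concrete sketch of the routing case (the stanza name, sourcetype, pattern, and target index are all invented for illustration):

```ini
# transforms.conf -- send events with a 5xx status to a separate index
[route_errors_to_security]
REGEX = status=5\d\d
DEST_KEY = _MetaData:Index
FORMAT = security

# props.conf -- apply the transform to a sourcetype
[web:access]
TRANSFORMS-route = route_errors_to_security
```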
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Where are license files stored?
Answer : C
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields splunk_server
The splunk server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote. splunk_server=remote index=main 404
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: -- The search head dispatches searches to the peers -- Peers run searches in parallel and return their portion of results -- The search head consolidates the individual results and prepares reports
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
When are knowledge bundles distributed to search peers?
Answer : D
'The search head replicates the knowledge bundle periodically in the background or when initiating a search.' 'As part of the distributed search process, the search head replicates and distributes its knowledge objects to its search peers, or indexers. Knowledge objects include saved searches, event types, and other entities used in searching across indexes. The search head needs to distribute this material to its search peers so that they can properly execute queries on its behalf.'
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
Which of the following are supported options when configuring optional network inputs?
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for indexing based on their age. It does not prevent indexing of old data already in the file.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
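A hedged inputs.conf sketch of the followTail scenario (the path and sourcetype are examples):

```ini
[monitor:///var/log/huge_legacy.log]
# Start reading at the current end of the file; pre-existing content is skipped,
# and only data appended after monitoring begins is indexed
followTail = 1
sourcetype = legacy:log
```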
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
When running a real-time search, search results are pulled from which Splunk component?
Answer : D
Using the Splunk reference URL https://docs.splunk.com/Splexicon:Searchpeer
'search peer is a splunk platform instance that responds to search requests from a search head. The term 'search peer' is usually synonymous with the indexer role in a distributed search topology. However, other instance types also have access to indexed data, particularly internal diagnostic data, and thus function as search peers when they respond to search requests for that data.'
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
Which of the methods listed below supports multi-factor authentication?
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on Windows machines to monitor remote Windows data.'
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing: If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
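For reference, the file the command writes contains a stanza along these lines (host and management port are placeholders):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
# URI of the deployment server this client polls for apps and configuration
targetUri = deployServer:8089
```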
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (ie LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
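A sketch for a single-line sourcetype (the sourcetype name is invented):

```ini
[app:oneline]
# With line merging off, each LINE_BREAKER match is emitted as one event,
# which avoids the costly merge pass entirely
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```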
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled 'How the cluster handles concurrent search quotas': 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
When would the following command be used?
Which Splunk forwarder has a built-in license?
Answer : C
Which Splunk component requires a Forwarder license?
Answer : B
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
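A hedged indexes.conf sketch of the settings that drive warm-to-cold rolling and the cold bucket location (the index name, paths, and limit are examples):

```ini
[web]
homePath   = $SPLUNK_DB/web/db
# Cold buckets can live on cheaper storage, e.g. an NAS mount
coldPath   = /mnt/cheap_nas/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
# Once the index exceeds this many warm buckets, the oldest roll to cold
maxWarmDBCount = 300
```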
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
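Example invocation (the sourcetype is a placeholder); the --debug flag annotates each merged setting with the file it came from, but the report reflects the on-disk files, not what is loaded in memory:

```shell
$SPLUNK_HOME/bin/splunk btool props list my:custom:log --debug
```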
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
Immediately after installation, what will a Universal Forwarder do first?
Using SEDCMD in props.conf allows raw data to be modified. With the given event below, which option will mask the first three digits of the AcctID field, resulting in the output: [22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Event:
[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata
Scrolling down to the section titled 'Define the sed script in props.conf' shows the correct syntax of an example, which validates that the \1 backreference immediately precedes the /g flag.
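A sketch of a masking rule that produces the output shown above (the sourcetype name is invented):

```ini
[vendor:sales]
# Capture everything after the first three digits of AcctID into \1,
# then replace those three digits with xxx: AcctID=9995309 -> AcctID=xxx5309
SEDCMD-acct = s/AcctID=\d{3}(\d+)/AcctID=xxx\1/g
```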
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic sourcetype assignment
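A minimal sketch of valid network input stanzas in inputs.conf (the ports and sourcetypes are examples):

```ini
# Listen for syslog traffic over UDP
[udp://514]
sourcetype = syslog

# Listen for raw data over TCP
[tcp://9999]
sourcetype = app:raw
```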
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
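A sketch of how these three attributes fit together, here as a hypothetical null-queue transform that drops DEBUG events (the stanza and sourcetype names are invented for illustration):

```ini
# transforms.conf -- hypothetical stanza that routes matching events
# to the null queue, i.e. removes them before indexing
[drop_debug]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- wire the transform to a sourcetype
[my_sourcetype]
TRANSFORMS-drop = drop_debug
```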
When would the following command be used?
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for a Free license, if you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
Where are license files stored?
Answer : C
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory, $SPLUNK_HOME/etc/system/local'
Which optional configuration setting in inputs .conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. By default, the forwarder uses the groups listed in the defaultGroup setting in the [tcpout] stanza of the outputs.conf file.
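A minimal sketch of this routing, with hypothetical group names, ports, and a hypothetical monitor path:

```ini
# outputs.conf -- two hypothetical tcpout groups
[tcpout:group_a]
server = 10.0.0.1:9997

[tcpout:group_b]
server = 10.0.0.2:9997

# inputs.conf -- send only this input's data to group_a
[monitor:///var/log/secure.log]
_TCP_ROUTING = group_a
```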
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
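For single-line data, a props.conf stanza along these lines (the sourcetype name is hypothetical) avoids the line-merging pass entirely:

```ini
# props.conf -- hypothetical single-line sourcetype; events are split
# on the default line breaker and never merged back together
[my_single_line]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```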
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing: If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports:
- The search head dispatches searches to the peers
- Peers run searches in parallel and return their portion of results
- The search head consolidates the individual results and prepares reports
When using license pools, volume allocations apply to which Splunk components?
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies positions 0-29, D = 30 will pick up the whole timestamp correctly.
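A props.conf sketch of this timestamp configuration (the sourcetype name and TIME_FORMAT are assumptions matching the sample event's [22/Oct/2018:15:50:21] style):

```ini
# props.conf -- hypothetical stanza; with TIME_PREFIX anchored at the
# start of the event, a 30-character lookahead covers the full timestamp
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = [%d/%b/%Y:%H:%M:%S]
MAX_TIMESTAMP_LOOKAHEAD = 30
```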
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting, splits the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
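The default behavior can be sketched outside Splunk with an equivalent regular-expression split (a plain Python illustration; the sample stream text is invented):

```python
import re

# Default LINE_BREAKER: any run of newlines and carriage returns.
# Splunk uses the capture group as the delimiter between events;
# re.split on the same pattern gives an equivalent breakdown.
stream = "event one\r\nevent two\n\nevent three"
events = re.split(r"[\r\n]+", stream)
print(events)  # → ['event one', 'event two', 'event three']
```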
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl "https://mysplunkserver.example.com:8088/services/collector" -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'. This is the correct curl command to send multiple events through HTTP Event Collector (HEC), which is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
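The multi-event framing described above can be sketched in plain Python: building a payload of one bare JSON object per event and parsing it back object-by-object to confirm the framing holds (the event strings are the sample values from the answer; this is an illustration, not a Splunk API call):

```python
import json

# Build a multi-event payload: one JSON object per event, concatenated
# in a single request body (no enclosing array).
events = ["Hello World", "Hola Mundo", "Hallo Welt"]
payload = " ".join(json.dumps({"event": e}) for e in events)

# Parse it back object-by-object to confirm the framing is valid.
decoder = json.JSONDecoder()
idx, parsed = 0, []
while idx < len(payload):
    obj, idx = decoder.raw_decode(payload, idx)
    parsed.append(obj["event"])
    while idx < len(payload) and payload[idx] == " ":
        idx += 1  # skip the separator between objects
print(parsed)  # → ['Hello World', 'Hola Mundo', 'Hallo Welt']
```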
The CLI command splunk add forward-server indexer:
which configuration file?
Answer : C
The CLI command 'splunk add forward-server indexer:<receiving-port>' is used to define the indexer and its listening port on forwarders. The command creates this kind of entry '[tcpout-server://<ip address>:
https://docs.splunk.com/Documentation/Forwarder/8.2.2/Forwarder/Configureforwardingwithoutputs.conf
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
Which layers are involved in Splunk configuration file layering? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user: Global. Activities like indexing take place in a global context. They are independent of any app or user. For example, configuration files that determine monitoring or indexing behavior occur outside of the app and user context and are global in nature. App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital to search-time processing, where certain knowledge objects or actions might be valid only for specific users in specific apps.
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
Which of the following applies only to Splunk index data integrity check?
Answer : C
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
Which Splunk component does a search head primarily communicate with?
Answer : A
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations First line says it all: 'The deployment server distributes deployment apps to clients.'
Which of the following are supported options when configuring optional network inputs?
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
User role inheritance allows what to be inherited from the parent role? (select all that apply)
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on a Windows machine to monitor remote Windows data.'
Within props.conf, which stanzas are valid for data modification? (select all that apply)
Immediately after installation, what will a Universal Forwarder do first?
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
The universal forwarder has which capabilities when sending data? (select all that apply)
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
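A minimal inputs.conf sketch of a scripted input referencing one of the supported locations (the app, script name, interval, and sourcetype are hypothetical):

```ini
# inputs.conf -- hypothetical scripted input; the script lives in the
# app's bin/ directory, one of the three supported locations
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect_stats.sh]
interval = 60
sourcetype = my:script:stats
index = main
```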
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file names for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
Event processing occurs at which phase of the data pipeline?
When indexing a data source, which fields are considered metadata?
Answer : D
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A. There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C. There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D. The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
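A sketch of such an app-local network input (the app name, port, sourcetype, and index are hypothetical):

```ini
# $SPLUNK_HOME/etc/apps/my_app/local/inputs.conf -- hypothetical
# TCP network input listening on port 9514
[tcp://9514]
connection_host = ip
sourcetype = syslog
index = network
```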
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. If native authentication fails, and the failure is not simply that the local account does not exist, there is no attempt to use LDAP to log in. This is adapted from the precedence of the Splunk authentication schema.
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
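The contrast can be sketched with two hypothetical monitor stanzas:

```ini
# inputs.conf -- hypothetical stanzas illustrating the two wildcards
# Recursive: matches /var/log/www1/secure.log and also
# /var/log/www/logs/secure.log (any depth of subfolders)
[monitor:///var/log/.../secure.log]

# Single path segment: matches /var/log/www1/secure.log but not
# /var/log/www/logs/secure.log
[monitor:///var/log/*/secure.log]
```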
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called "deployment clients". A deployment client can be a universal forwarder, a non-clustered indexer, or a search head.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface.
The other options are incorrect because:
A. On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed.
B. On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored.
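A serverclass.conf sketch showing the repositoryLocation attribute mentioned above (the server class name and whitelist pattern are hypothetical):

```ini
# serverclass.conf on the deployment server -- repositoryLocation
# defaults to $SPLUNK_HOME/etc/deployment-apps
[global]
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

# Hypothetical server class targeting a set of Linux forwarders
[serverClass:linux_forwarders]
whitelist.0 = linux-*.example.com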
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients. But it resided in the $SPLUNK_HOME/etc/deployment-apps location in the deployment server.
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
To set up a network input in Splunk, what needs to be specified?
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:

[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
When using license pools, volume allocations apply to which Splunk components?
Which Splunk component does a search head primarily communicate with?
Answer : A
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation1, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
The other options are incorrect because:
A . Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B . The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C . The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
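For reference, the TZ attribute mentioned in rule 2 is set in props.conf. A sketch, assuming a hypothetical host name:

```ini
# props.conf -- assign a time zone to events arriving from a specific host
[host::nyc-web-01]
TZ = US/Eastern
```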
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl "https://mysplunkserver.example.com:8088/services/collector" \
  -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" \
  -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'

This is the correct curl command to send multiple events through HTTP Event Collector (HEC), which is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following 6 similar steps
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in.
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, a value of 30 (answer D) will pick up the whole timestamp correctly.
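A props.conf sketch consistent with that explanation (the sourcetype name is hypothetical):

```ini
# props.conf -- look for the timestamp within the first 30 characters of the event
[my_sourcetype]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
```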
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and its tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
Which optional configuration setting in inputs .conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not specified, the forwarder uses the groups present in defaultGroup in the [tcpout] stanza of outputs.conf.
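Putting the two files together, a minimal sketch (the group name, monitor path, and server addresses are examples):

```ini
# inputs.conf -- route this monitor input only to a specific tcpout group
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# outputs.conf -- define the tcpout group referenced above
[tcpout:security_indexers]
server = 10.1.1.10:9997,10.1.1.11:9997
```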
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
Which of the following are supported options when configuring optional network inputs?
When indexing a data source, which fields are considered metadata?
Answer : D
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
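The "maximum number of warm buckets" condition mentioned above is controlled per index in indexes.conf. A sketch, with an example index name and value:

```ini
# indexes.conf -- roll the oldest warm bucket to cold once 300 warm buckets exist
[my_index]
maxWarmDBCount = 300
# cold buckets can live on cheaper storage by pointing coldPath elsewhere
coldPath = $SPLUNK_DB/my_index/colddb
```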
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled, How the cluster handles concurrent search quotas: 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest-level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
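Group-to-role mapping configured in Splunk Web is stored in authentication.conf. A sketch, assuming a hypothetical LDAP strategy name and group names:

```ini
# authentication.conf -- map LDAP groups to Splunk roles
# (strategy name "corpLDAP" and group names are examples)
[roleMap_corpLDAP]
admin = Splunk Admins
user = Splunk Users
```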
Where are license files stored?
Answer : C
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: -- The search head dispatches searches to the peers -- Peers run searches in parallel and return their portion of results -- The search head consolidates the individual results and prepares reports
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward. The Upload option lets you upload a file or archive of files for indexing; when you choose it, Splunk Web opens the upload process page. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
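Combined into one stanza, the three required path settings from the example above look like this:

```ini
# indexes.conf -- the three path settings required for a new index
[hatchdb]
homePath   = $SPLUNK_DB/hatchdb/db
coldPath   = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
```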
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
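A scripted input stanza referencing a script in one of those locations might look like this (the script name, interval, and sourcetype are examples):

```ini
# inputs.conf -- run a collection script every 60 seconds
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect.sh]
interval = 60
sourcetype = my_script_output
```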
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Which of the following enables compression for universal forwarders in outputs. conf ?
A)
B)
C)
D)
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
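The attribute in question is autoLBFrequency, set per tcpout group in outputs.conf. A sketch (the group name, addresses, and value are examples; the default interval is 30 seconds):

```ini
# outputs.conf -- switch to a new indexer every 45 seconds
[tcpout:my_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
autoLBFrequency = 45
```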
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
Which of the following applies only to Splunk index data integrity check?
Answer : C
How can native authentication be disabled in Splunk?
Answer : B
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on a Windows machine to monitor remote Windows data.'
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
Which Splunk configuration file is used to enable data integrity checking?
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
The universal forwarder has which capabilities when sending data? (select all that apply)
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
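A minimal outputs.conf on the search head, following that documentation section (the group name and indexer addresses are examples):

```ini
# outputs.conf on the search head -- forward all data, including the
# internal logs, to the indexers, and do not index anything locally
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
```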
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
What is the default value of LINE_BREAKER?
Answer : B
Line breaking uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
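A sketch of overriding the default in props.conf (the sourcetype name and pattern are examples; note that LINE_BREAKER must contain a capturing group marking the event boundary):

```ini
# props.conf -- break events on newlines followed by an ISO-date-like prefix
[my_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
```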
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
Immediately after installation, what will a Universal Forwarder do first?
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license: if you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license: if you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license: if you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for a Free license: if you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex1.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings2.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings2.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
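As a minimal sketch, a network input stanza in the recommended app-local location might look like this (the app name, port, and index are hypothetical):

```
# $SPLUNK_HOME/etc/apps/my_network_app/local/inputs.conf  (hypothetical app)
[tcp://5140]
connection_host = ip      # record the sender's IP address as the host value
sourcetype = syslog
index = network           # illustrative index; it must already exist
```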
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which Splunk forwarder has a built-in license?
Answer : C
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
When would the following command be used?
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called ''deployment clients''. A deployment client can be a universal forwarder, a non-clustered indexer, or a search head1.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files2.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface2.
The other options are incorrect because:
A . On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed2.
B . On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored2.
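For reference, the repository location mentioned above is configurable in serverclass.conf on the deployment server; a hedged sketch (the override path is illustrative):

```
# serverclass.conf on the deployment server
[global]
# Default is $SPLUNK_HOME/etc/deployment-apps; shown here overridden
# to an illustrative custom path.
repositoryLocation = /opt/splunk/etc/deployment-apps
```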
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in the $SPLUNK_HOME/etc/deployment-apps location on the deployment server.
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields
The splunk_server field contains the name of the Splunk server containing the event. It is useful in a distributed Splunk environment. Example: restrict a search to the main index on a server named remote: splunk_server=remote index=main 404
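A simple way to check the distribution described above is to aggregate event counts by that field (the time window is whatever the admin chooses):

```
index=* earliest=-24h
| stats count by splunk_server
```

Roughly equal counts per indexer suggest the forwarders are load balancing evenly.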
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
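As an illustrative sketch of the first bullet, a SEDCMD in props.conf can mask raw data at index time (the sourcetype name and pattern are made up):

```
# props.conf (on the parsing tier: heavy forwarder or indexer)
[my_custom_sourcetype]
# Replace anything shaped like a US SSN before it is written to disk
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
```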
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
Assume a file is being monitored and the data was incorrectly indexed to an exclusive index. The index is
cleaned and now the data must be reindexed. What other index must be cleaned to reset the input checkpoint
information for that file?
Which of the following are methods for adding inputs in Splunk? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Configureyourinputs
Add your data to Splunk Enterprise. With Splunk Enterprise, you can add data using Splunk Web or Splunk Apps. In addition to these methods, you also can use the following methods. -The Splunk Command Line Interface (CLI) -The inputs.conf configuration file. When you specify your inputs with Splunk Web or the CLI, the details are saved in a configuration file on Splunk Enterprise indexer and heavy forwarder instances.
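For example, the CLI method mentioned above might look like this (the path, index, and sourcetype are illustrative):

```
splunk add monitor /var/log/secure.log -index main -sourcetype linux_secure
```

This writes the equivalent [monitor://...] stanza into an inputs.conf file for you.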
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
Which of the following enables compression for universal forwarders in outputs. conf ?
A)
B)
C)
D)
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (ie LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
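So for a single-line sourcetype, a props.conf sketch (the sourcetype name is hypothetical) would be:

```
# props.conf
[my_singleline_sourcetype]
# Events are one line each, so skip the line-merging pass entirely
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```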
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory, $SPLUNK_HOME/etc/system/local
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
Which of the following is a valid distributed search group?
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for indexing based on their modification time. It does not prevent indexing of old data already in a file that is still being updated.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
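A hedged inputs.conf sketch of the followTail approach (the path and sourcetype are illustrative):

```
# inputs.conf
[monitor:///var/log/huge_legacy.log]
# Start reading at the end of the file; only new events get indexed
followTail = 1
sourcetype = legacy_app
```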
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
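The command in question, with correct spelling, plus a typical way to narrow its output (the stanza name is illustrative):

```
splunk btool props list --debug
# Narrow to one stanza; --debug prefixes each setting with the file it came from:
splunk btool props list my_sourcetype --debug
```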
What are the values for host and index for [stanza1] used by Splunk during index time, given the following configuration files?
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which Splunk configuration file is used to enable data integrity checking?
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
Event processing occurs at which phase of the data pipeline?
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic sourcetype assignment
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
What is the default character encoding used by Splunk during the input phase?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Configurecharactersetencoding
'Configure character set encoding. Splunk software attempts to apply UTF-8 encoding to your sources by default. If a source doesn't use UTF-8 encoding or is a non-ASCII file, Splunk software tries to convert data from the source to UTF-8 encoding unless you specify a character set to use by setting the CHARSET key in the props.conf file.'
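To illustrate the kind of conversion described above, here is a small Python sketch (the byte string and charset are made up for illustration; Splunk's actual conversion happens internally during the input phase):

```python
# A byte string encoded in Latin-1 (ISO-8859-1), as a non-UTF-8 source might produce.
raw = "café log entry".encode("latin-1")

# Decoding it as UTF-8 fails: the lone Latin-1 byte 0xE9 is not valid UTF-8.
try:
    raw.decode("utf-8")
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

# Knowing the source charset (what the CHARSET key declares in props.conf),
# the bytes can be decoded correctly and re-encoded as UTF-8.
text = raw.decode("latin-1")
utf8_bytes = text.encode("utf-8")

print(utf8_ok)   # False
print(text)      # café log entry
```

This is why declaring the right CHARSET matters: the same bytes decode cleanly under one charset and fail under another.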
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory, $SPLUNK_HOME/etc/system/local
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
Which of the methods listed below supports multi-factor authentication?
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations First line says it all: 'The deployment server distributes deployment apps to clients.'
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: From Splunk forwarders, Using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on Windows machines to monitor remote Windows data.'
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in the $SPLUNK_HOME/etc/deployment-apps location on the deployment server.
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
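The attribute that controls this interval is autoLBFrequency in outputs.conf; a hedged sketch (group name, servers, and interval are illustrative):

```
# outputs.conf on the forwarder
[tcpout:my_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997
# Switch to the next indexer in the list every 30 seconds (the default)
autoLBFrequency = 30
```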
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
Which of the following are required when defining an index in indexes. conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations First line says it all: 'The deployment server distributes deployment apps to clients.'
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
The following stanza in inputs.conf is currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
Where are license files stored?
Answer : C
Assume a file is being monitored and the data was incorrectly indexed to an exclusive index. The index is
cleaned and now the data must be reindexed. What other index must be cleaned to reset the input checkpoint
information for that file?
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, when adding native users, a username and password are required.
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
Which Splunk component requires a Forwarder license?
Answer : B
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward. The Upload option lets you upload a file or archive of files for indexing. When you choose the Upload option, Splunk Web opens the upload process page. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Immediately after installation, what will a Universal Forwarder do first?
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
Use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
-- Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
When indexing a data source, which fields are considered metadata?
Answer : D
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
AboutsecuringyourSplunkconfigurationwithSSL
When are knowledge bundles distributed to search peers?
Answer : D
'The search head replicates the knowledge bundle periodically in the background or when initiating a search.' 'As part of the distributed search process, the search head replicates and distributes its knowledge objects to its search peers, or indexers. Knowledge objects include saved searches, event types, and other entities used in searching across indexes. The search head needs to distribute this material to its search peers so that they can properly execute queries on its behalf.'
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
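Because cold buckets can live on cheaper storage, an indexes.conf sketch might place them on a NAS mount while hot/warm buckets stay on fast local disk (the index name and mount points are illustrative):

```
# indexes.conf
[my_index]
homePath   = /ssd/splunk/my_index/db          # hot/warm buckets on fast storage
coldPath   = /mnt/nas/splunk/my_index/colddb  # cold buckets on slower NAS
thawedPath = /mnt/nas/splunk/my_index/thaweddb
```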
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
Which of the following are methods for adding inputs in Splunk? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Configureyourinputs
Add your data to Splunk Enterprise. With Splunk Enterprise, you can add data using Splunk Web or Splunk Apps. In addition to these methods, you also can use the following methods. -The Splunk Command Line Interface (CLI) -The inputs.conf configuration file. When you specify your inputs with Splunk Web or the CLI, the details are saved in a configuration file on Splunk Enterprise indexer and heavy forwarder instances.
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: -- The search head dispatches searches to the peers -- Peers run searches in parallel and return their portion of results -- The search head consolidates the individual results and prepares reports
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
Within props.conf, which stanzas are valid for data modification? (select all that apply)
Which Splunk configuration file is used to enable data integrity checking?
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
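A typical HEC submission, as a hedged sketch (the host, port, and token are placeholders; 8088 is the default HEC port):

```
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello from HEC", "sourcetype": "my_app"}'
```

Note there is no forwarder involved: the client posts directly to the HEC endpoint with the token.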
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation1, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
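As a sketch of the placement described above (the port, sourcetype, and index names are hypothetical, not from the exam item), a network input stanza in $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf might look like:

```ini
# Hypothetical example: listen for syslog traffic on TCP port 514.
[tcp://514]
connection_host = ip     # set the host field from the remote IP address
sourcetype = syslog
index = network          # optional: route these events to a specific index
```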
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for monitoring based on their age. It does not prevent indexing of old data already in the file.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
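A minimal inputs.conf sketch of the followTail approach (the file path and sourcetype are hypothetical):

```ini
# Hypothetical monitor stanza: start reading at the end of the file so the
# large pre-existing backlog is skipped; only new appended lines are indexed.
# Note: followTail applies only the first time Splunk sees the file.
[monitor:///var/log/huge_legacy.log]
followTail = 1
sourcetype = legacy_app
```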
The CLI command splunk add forward-server indexer:<receiving-port> creates a stanza in which configuration file?
Answer : C
The CLI command 'splunk add forward-server indexer:<receiving-port>' is used to define the indexer and the listening port on forwarders. The command creates this kind of entry: '[tcpout-server://<ip address>:
https://docs.splunk.com/Documentation/Forwarder/8.2.2/Forwarder/Configureforwardingwithoutputs.conf
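The resulting outputs.conf looks roughly like the sketch below (the IP address and port are placeholders, and the group name is Splunk's default for auto-load-balanced output):

```ini
# Approximate result of: splunk add forward-server 10.1.1.100:9997
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 10.1.1.100:9997

[tcpout-server://10.1.1.100:9997]
```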
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
When using a directory monitor input, a specific source type can be selectively overridden using which configuration file?
An organization wants to collect Windows performance data from a set of clients; however, installing Splunk software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: From Splunk forwarders; Using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on a Windows machine to monitor remote Windows data.'
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
After how many warnings within a rolling 30-day period will a license violation occur with an enforced Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license: If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license: If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license: If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for the Free license: If you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl "https://mysplunkserver.example.com:8088/services/collector" \ -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" \ -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'. This is the correct curl command to send multiple events through HTTP Event Collector (HEC), which is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
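Assembled as a runnable sketch (the server name and token below are the placeholder values from the answer above, not real credentials; echo is used so the sketch runs without a live HEC endpoint):

```shell
# Placeholder endpoint and token from the answer above -- substitute your own.
HEC_URL="https://mysplunkserver.example.com:8088/services/collector"
HEC_TOKEN="DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67"

# Multiple events go in a single request body as a series of JSON objects.
PAYLOAD='{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'

# Print the curl invocation; remove the leading echo to actually send it.
echo curl "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" -d "$PAYLOAD"
```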
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. in props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The sed expression s/password=([^,|\s]+)/####REDACTED####/g replaces any value following password= with the masking string.
The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations that props.conf can reference, such as lookups, field extractions, or routing rules, but the SEDCMD setting shown belongs directly in props.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
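A minimal props.conf sketch of this redaction (the [identity] sourcetype stanza comes from the answer above; the regex is illustrative):

```ini
# props.conf -- mask plain-text passwords at parse time, before indexing.
[identity]
# Replace everything after password= (up to a comma, pipe, or whitespace)
# with a fixed masking string; /g applies it to every match in the event.
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
```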
Which Splunk component does a search head primarily communicate with?
Answer : A
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file names for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
What are the values for host and index for [stanza1] used by Splunk during index time, given the following configuration files?
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. Unless the failure is for a local account that does not exist, there is no attempt to use LDAP to log in. This is adapted from the precedence of the Splunk authentication scheme.
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': install the app or add-on that contains the inputs you want.
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, When adding native users, Username and Password ARE REQUIRED
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
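For reference, the file the command writes under $SPLUNK_HOME/etc/system/local looks roughly like this (the server name and port are placeholders):

```ini
# deploymentclient.conf created by: splunk set deploy-poll deployServer:8089
[deployment-client]

[target-broker:deploymentServer]
targetUri = deployServer:8089
```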
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and the tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
What is the default character encoding used by Splunk during the input phase?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Configurecharactersetencoding
'Configure character set encoding. Splunk software attempts to apply UTF-8 encoding to your sources by default. If a source doesn't use UTF-8 encoding or is a non-ASCII file, Splunk software tries to convert data from the source to UTF-8 encoding unless you specify a character set to use by setting the CHARSET key in the props.conf file.'
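A props.conf sketch of such an override (the source path and character set are hypothetical):

```ini
# props.conf -- tell Splunk these legacy logs are Latin-1, not UTF-8,
# so they are converted to UTF-8 during the input phase.
[source::/var/log/legacy/*.log]
CHARSET = ISO-8859-1
```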
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
When indexing a data source, which fields are considered metadata?
Answer : D
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
Which file will be matched for the following monitor stanza in inputs.conf?
[monitor:///var/log/*/bar/*.txt]
Answer : C
The correct answer is C. /var/log/host_460352847/bar/file/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new data. The monitor stanza has the following syntax:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards (*) and regular expressions. The wildcards match any number of characters, including none, while the regular expressions match patterns of characters. The input path is case-sensitive and must be enclosed in double quotes if it contains spaces.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor any file with the .txt extension that is located under a directory named bar beneath the /var/log directory; the * wildcards match any characters before and after the bar and .txt parts.
Therefore, the file /var/log/host_460352847/bar/file/foo.txt will be matched by the monitor stanza, as it meets the criteria. The other files will not be matched, because:
A . /var/log/host_460352847/temp/bar/file/csv/foo.txt has a .csv extension, not a .txt extension.
B . /var/log/host_460352847/bar/foo.txt is not located in a subdirectory under the bar directory, but directly in the bar directory.
D . /var/log/host_460352847/temp/bar/file/foo.txt is located in a subdirectory named file under the bar directory, not directly in the bar directory.
In inputs.conf, which stanza would mean Splunk was only reading one local file?
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (ie LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
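As one concrete sketch of a parsing-phase TRANSFORMS chain from the list above (the sourcetype, stanza name, and regex are hypothetical), props.conf references a transforms.conf stanza that routes debug events to the null queue:

```ini
# props.conf -- apply the transform during the parsing phase
[my_sourcetype]
TRANSFORMS-drop_debug = setnull

# transforms.conf -- events matching the regex are discarded
[setnull]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue
```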
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following 6 similar steps
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in.
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
After automatic load balancing is enabled on a forwarder, the time interval for switching indexers can be updated by using which of the following attributes?
Answer : C
The following stanza in inputs.conf is currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic source type assignment
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
How often does Splunk recheck the LDAP server?
Which of the following is a valid distributed search group?
A)
B)
C)
D)
Answer : D
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in the $SPLUNK_HOME/etc/deployment-apps location on the deployment server.
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
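A props.conf sketch of a non-default override (the sourcetype and timestamp pattern are hypothetical): events are broken only where a newline is followed by an ISO-style date, so stack traces stay attached to their event.

```ini
# props.conf -- break events at a newline followed by a date like 2018-10-22;
# the capture group is the text discarded between events.
[my_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
```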
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, a value of D = 30 will pick up the whole timestamp correctly.
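A props.conf sketch of how these timestamp settings fit together (the sourcetype name is hypothetical; the format string matches a [22/Oct/2018:15:50:21]-style prefix):

```ini
# props.conf -- look for the timestamp at the start of the event only,
# scanning at most 30 characters in.
[my_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
```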
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
Using SEDCMD in props.conf allows raw data to be modified. With the given event below, which option will mask the first three digits of the AcctID field, resulting in the output: [22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Event:
[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata
Scrolling down to the section titled 'Define the sed script in props.conf' shows the correct syntax of an example, which validates that the capture-group reference \1 immediately precedes the /g.
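A props.conf sketch following that documented syntax (the sourcetype name is hypothetical; the regex keeps the last four digits and masks the first three):

```ini
# props.conf -- AcctID=1235309 becomes AcctID=xxx5309 at parse time.
[my_sourcetype]
SEDCMD-mask_acct = s/AcctID=\d{3}(\d{4})/AcctID=xxx\1/g
```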
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not specified, the groups present in defaultGroup in the [tcpout] stanza of outputs.conf are used.
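A sketch tying the two files together with _TCP_ROUTING (the group name, file path, and server addresses are hypothetical):

```ini
# inputs.conf -- send only this input to the named tcpout group
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# outputs.conf -- define which indexers that group contains
[tcpout:security_indexers]
server = 10.1.1.20:9997, 10.1.1.21:9997
```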
When indexing a data source, which fields are considered metadata?
Answer : D
Which artifact is required in the request header when creating an HTTP event?
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
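An indexes.conf sketch of those conditions; the index name, limit, and paths are illustrative assumptions:

```ini
# indexes.conf -- warm-to-cold rolling sketch
[web_logs]
homePath = $SPLUNK_DB/web_logs/db
# Cold buckets can live at a different location, e.g. cheaper storage.
coldPath = /mnt/nas/splunk/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
# When the index exceeds this many warm buckets, the oldest warm
# bucket rolls to cold.
maxWarmDBCount = 300
```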
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
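On the deployment client, `splunk set deploy-poll` writes its result to $SPLUNK_HOME/etc/system/local/deploymentclient.conf. A sketch of the generated file, with the target taken from the command's deployServer:port argument:

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
# Populated from the argument of "splunk set deploy-poll deployServer:port"
targetUri = deployServer:port
```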
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
See the section 'Bypass automatic source type assignment'.
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
The other options are incorrect because:
A. There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C. There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D. The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
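Following that best practice, a network input stanza in an app's local directory might look like this; the app name, port, and setting values are illustrative assumptions:

```ini
# $SPLUNK_HOME/etc/apps/myapp/local/inputs.conf -- hypothetical app name
[tcp://9514]
connection_host = dns
sourcetype = syslog
index = network
```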
Event processing occurs at which phase of the data pipeline?
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
The CLI command splunk add forward-server indexer:<receiving-port> modifies which configuration file?
Answer : C
The CLI command 'splunk add forward-server indexer:<receiving-port>' is used to define the indexer and the listening port on forwarders. The command creates an entry of the form [tcpout-server://<ip address>:<port>] in outputs.conf.
https://docs.splunk.com/Documentation/Forwarder/8.2.2/Forwarder/Configureforwardingwithoutputs.conf
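As a sketch, running `splunk add forward-server 10.0.0.5:9997` (the address is an illustrative assumption) produces outputs.conf entries like:

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 10.0.0.5:9997

[tcpout-server://10.0.0.5:9997]
```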
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1.Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory --lowest priority
What are the minimum required settings when creating a network input in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Inputsconf
[tcp://<remote server>:<port>]
* Configures the input to listen on a specific TCP network port.
* If a <remote server> makes a connection to this instance, the input uses this stanza to configure itself.
* If you do not specify <remote server>, this stanza matches all connections on the specified port.
* Generates events with source set to 'tcp:<port>'.
* If you do not specify a sourcetype, generates events with sourcetype set to 'tcp-raw'.
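In other words, the minimum required settings are the input type (protocol) and the port number; everything else has defaults. A minimal sketch, with the port chosen arbitrarily:

```ini
# inputs.conf -- nothing else is required; source defaults to 'tcp:514'
# and sourcetype defaults to 'tcp-raw'
[tcp://514]
```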
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
User role inheritance allows what to be inherited from the parent role? (select all that apply)
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, D = 30 will pick up the whole timestamp correctly.
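A props.conf sketch of such a source definition; the sourcetype name and TIME_FORMAT are illustrative assumptions:

```ini
# props.conf -- timestamp recognition sketch
[example_sourcetype]
TIME_PREFIX = ^
# Look at most 30 characters past TIME_PREFIX for the timestamp,
# enough to cover character positions 0-29.
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %d/%b/%Y:%H:%M:%S
```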
Which of the following applies only to Splunk index data integrity check?
Answer : C
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
Which file will be matched for the following monitor stanza in inputs.conf?
[monitor:///var/log/*/bar/*.txt]
Answer : B
The correct answer is B. /var/log/host_460352847/bar/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new data. The monitor stanza has the following syntax:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards. In monitor paths, the asterisk (*) matches anything within a single path segment; it does not cross directory separators. To recurse through an arbitrary number of subdirectories, the ellipsis (...) wildcard is used instead.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor any file with the .txt extension that sits directly inside a bar directory, where bar itself sits directly under one subdirectory of /var/log.
Therefore, the file /var/log/host_460352847/bar/foo.txt is matched by the monitor stanza. The other files are not matched, because:
A. /var/log/host_460352847/temp/bar/file/csv/foo.txt has two path segments (host_460352847/temp) between /var/log and bar, and two more (file/csv) between bar and the file, which the single-segment * wildcards cannot match.
C. /var/log/host_460352847/bar/file/foo.txt has an extra file directory between bar and foo.txt, which the single-segment * cannot match.
D. /var/log/host_460352847/temp/bar/file/foo.txt has extra path segments on both sides of bar.
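The difference between the two wildcards can be sketched as follows; the paths are illustrative:

```ini
# inputs.conf
# "*" matches within a single path segment only:
#   matches /var/log/www1/bar/a.txt, not /var/log/www1/x/bar/a.txt
[monitor:///var/log/*/bar/*.txt]

# "..." recurses through any number of subdirectories:
#   matches /var/log/a/b/c/bar/a.txt as well
[monitor:///var/log/.../bar/*.txt]
```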
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
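One way to restrict collection to recent data is the ignoreOlderThan setting, which tells the monitor to skip files whose modification time is older than the given age. A sketch (the log path is an illustrative assumption):

```ini
# inputs.conf -- collect data 45 days old and newer
[monitor:///opt/log/app/app.log]
# Ignore files last modified more than 45 days ago.
ignoreOlderThan = 45d
```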
What are the required stanza attributes when configuring the transforms. conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
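Taken together, a sketch that removes unwanted events by routing them to the nullQueue; the stanza and sourcetype names are assumptions:

```ini
# transforms.conf -- discard DEBUG-level events
[drop_debug]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- attach the transform to a sourcetype
[app_logs]
TRANSFORMS-drop = drop_debug
```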
To set up a network input in Splunk, what needs to be specified?
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
For single line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
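A props.conf sketch for a single-line sourcetype; the name is an assumption:

```ini
# props.conf
[single_line_logs]
# Each input line is already a complete event, so the line-merging
# pass can be skipped for better indexing throughput.
SHOULD_LINEMERGE = false
```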
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
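A sketch of authentication.conf for scripted authentication, assuming the PAM sample script that ships with Splunk Enterprise (the script path may differ per deployment):

```ini
# authentication.conf
[authentication]
authType = Scripted
authSettings = script

[script]
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
```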
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
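The cold-bucket location is set with coldPath in indexes.conf. A sketch of the retention layout, where the index name and mount points are illustrative assumptions:

```ini
# indexes.conf
[app_data]
# Hot and warm buckets on fast local SSD storage
homePath = /ssd/splunk/app_data/db
# Cold buckets on the slower NAS mount
coldPath = /mnt/nas/splunk/app_data/colddb
thawedPath = /ssd/splunk/app_data/thaweddb
```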
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
On cluster peers, the configuration bundle pushed from the cluster master lands in the slave-app directories, and the slave-app local directories take the highest precedence. This is explained in the Splunk documentation: files in the slave-app local directories override everything else on a peer, demoting the system local directory to second in the priority list, followed by the app local directories, the slave-app default directories, the app default directories, and finally the system default directory.
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
How often does Splunk recheck the LDAP server?
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and the tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields
The splunk_server field contains the name of the Splunk server containing the event. Useful in a distributed Splunk environment. Example: Restrict a search to the main index on a server named remote: splunk_server=remote index=main 404
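A sketch of the check in SPL, counting events per indexer over the last 24 hours:

```
index=* earliest=-24h
| stats count by splunk_server
| sort - count
```

Roughly equal counts per splunk_server value indicate that data is distributed evenly amongst the indexers.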
When using license pools, volume allocations apply to which Splunk components?
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation, Splunk software determines the time zone to assign to a timestamp using the following logic, in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment; it knows its system time zone and sends that information along with the events to the indexer. The indexer then converts the event time to UTC and stores it in the _time field.
The other options are incorrect because:
A. Coordinated Universal Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone.
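If the forwarder-provided time zone ever needed to be overridden, a TZ attribute could be set in props.conf on the indexer. A sketch using the host value from the question (the time zone chosen is an illustrative assumption):

```ini
# props.conf on the indexer
[host::460352847]
TZ = America/New_York
```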
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called 'deployment clients'. A deployment client can be a universal forwarder, a non-clustered indexer, or a search head.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface.
The other options are incorrect because:
A. On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed.
B. On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored.
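A serverclass.conf sketch on the deployment server, mapping an app staged under $SPLUNK_HOME/etc/deployment-apps to a group of clients; the class name, app name, and hostname pattern are assumptions:

```ini
# serverclass.conf on the deployment server
[serverClass:linux_forwarders]
whitelist.0 = web-*.example.com

[serverClass:linux_forwarders:app:nix_inputs]
# The app directory is $SPLUNK_HOME/etc/deployment-apps/nix_inputs
restartSplunkd = true
```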
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
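Each cluster member learns where its deployer is via server.conf; a sketch, where the deployer URI is an illustrative assumption:

```ini
# server.conf on each search head cluster member
[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089
```

On the deployer itself, apps are staged under $SPLUNK_HOME/etc/shcluster/apps and pushed to the members with the `splunk apply shcluster-bundle` command.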
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk System Admin course materials, when adding native users, a username and password are required.
How can native authentication be disabled in Splunk?
Answer : B
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The sed expression replaces the value following password= with ####REDACTED####. The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. While transforms.conf can also be used to mask raw data at index time, that approach requires a stanza with REGEX, FORMAT, and DEST_KEY = _raw settings referenced from props.conf via a TRANSFORMS class; a SEDCMD setting does not belong in transforms.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
Where are license files stored?
Answer : C
When would the following command be used?
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for monitoring based on their modification time. It does not prevent indexing of old data already in a file that is still being updated.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
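A sketch of such an input; the file path is an illustrative assumption:

```ini
# inputs.conf
[monitor:///var/log/huge_legacy.log]
# Start reading at the end of the file, so the pre-existing data is
# never indexed; only new updates are collected.
followTail = 1
```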
Which Splunk forwarder has a built-in license?
Answer : C
Which Splunk component requires a Forwarder license?
Answer : B
Which Splunk component distributes apps and certain other configuration updates to search head cluster members?
Answer : C
https://docs.splunk.com/Splexicon:Deployer 'A Splunk Enterprise instance that you use to distribute apps and certain other configuration updates to search head cluster members.' Note that the deployment server cannot be used to distribute configurations to search head cluster members; that is the deployer's job.
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: the search head dispatches searches to the peers; peers run searches in parallel and return their portion of results; the search head consolidates the individual results and prepares reports.
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
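As an illustrative sketch of how HEC is enabled and a token defined in inputs.conf (the token name, token value, index, and sourcetype below are hypothetical placeholders):

```ini
# inputs.conf on the HEC-enabled instance -- illustrative only
[http]
disabled = 0
port = 8088

# The stanza name after http:// is the token name (hypothetical)
[http://my_app_token]
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = my_app_events
```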
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled 'How the cluster handles concurrent search quotas': 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, When adding native users, Username and Password ARE REQUIRED
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
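Putting the three attributes together, a sketch of removing events by routing them to the nullQueue might look like this (the stanza names, sourcetype, and DEBUG pattern are illustrative assumptions):

```ini
# transforms.conf -- drop matching events by sending them to the nullQueue
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```

```ini
# props.conf -- bind the transform to a sourcetype (name is hypothetical)
[my_sourcetype]
TRANSFORMS-drop = drop_debug_events
```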
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
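A hedged example of SEDCMD in props.conf (the sourcetype, field name, and card-number pattern are hypothetical, for illustration only):

```ini
# props.conf -- mask the first 12 digits of a 16-digit value at index time
[my_sourcetype]
SEDCMD-mask_card = s/card=\d{12}(\d{4})/card=XXXXXXXXXXXX\1/g
```

The syntax is sed-style: `s/<regex>/<replacement>/<flags>`, where `\1` keeps the captured last four digits and `g` applies the substitution to every match in the event.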
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. But for a Free license, if you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
When using license pools, volume allocations apply to which Splunk components?
When would the following command be used?
In inputs.conf, which stanza would mean Splunk was only reading one local file?
A log file contains 193 days' worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called 'deployment clients'. A deployment client can be a universal forwarder, a non-clustered indexer, or a search head.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface.
The other options are incorrect because:
A. On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed.
B. On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored.
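A sketch of the serverclass.conf side of this setup (the server class name, whitelist pattern, and app name are hypothetical assumptions):

```ini
# serverclass.conf on the deployment server -- illustrative only
[global]
# Default app repository; each deployment app gets its own subdirectory here
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

[serverClass:linux_forwarders]
whitelist.0 = linux-host*

[serverClass:linux_forwarders:app:send_to_indexer]
restartSplunkd = true
```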
Which of the following is a valid distributed search group?
Event processing occurs at which phase of the data pipeline?
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
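A minimal props.conf sketch for a single-line sourcetype (the sourcetype name is a placeholder; LINE_BREAKER shown with its default value for context):

```ini
# props.conf -- single-line events: skip line merging entirely for efficiency
[my_single_line_sourcetype]
SHOULD_LINEMERGE = false
# Default line breaker: any run of newlines/carriage returns
LINE_BREAKER = ([\r\n]+)
```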
Which of the following are supported options when configuring optional network inputs?
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
Local user accounts created in Splunk store passwords in which file?
Answer : A
'To set the default username and password, place user-seed.conf in $SPLUNK_HOME/etc/system/local. You must restart Splunk to enable configurations. If the $SPLUNK_HOME/etc/passwd file is present, the settings in this file (user-seed.conf) are not used.'
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user.'
When would the following command be used?
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
Which of the following types of data count against the license daily quota?
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
Which of the following is a valid distributed search group?
A)
B)
C)
D)
Answer : D
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs', subsection 'Here are the main ways that you can configure data inputs on a forwarder': install the app or add-on that contains the inputs you want.
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
When using license pools, volume allocations apply to which Splunk components?
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
How would you configure your distsearch.conf to allow you to run the search below? sourcetype=access_combined status=200 action=purchase splunk_server_group=HOUSTON
A)
B)
C)
D)
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL (Splunk Documentation)
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
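Where the default is not appropriate, a custom LINE_BREAKER can be sketched like this (the sourcetype name and the leading-date pattern are illustrative assumptions, not from the question):

```ini
# props.conf -- break events before a date at the start of a line,
# instead of at every newline (useful for multi-line events)
[multiline_app]
SHOULD_LINEMERGE = false
# Capture group 1 (the newlines) is discarded; the event boundary
# falls just before the YYYY-MM-DD timestamp
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```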
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl "https://mysplunkserver.example.com:8088/services/collector" -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'. This is the correct curl command to send multiple events through the HTTP Event Collector (HEC), a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
When indexing a data source, which fields are considered metadata?
Answer : D
When would the following command be used?
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, D = 30 will pick up the whole timestamp correctly.
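A hedged props.conf sketch of the timestamp settings being discussed (the sourcetype name and TIME_FORMAT string are illustrative assumptions; only TIME_PREFIX = ^ and the lookahead of 30 come from the question):

```ini
# props.conf -- timestamp recognition, values partly hypothetical
[my_sourcetype]
# Timestamp starts at the beginning of the event
TIME_PREFIX = ^
# Look at most 30 characters past TIME_PREFIX for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 30
# Illustrative strptime pattern for a 30-character timestamp
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
```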
How can native authentication be disabled in Splunk?
Answer : B
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory $SPLUNK_HOME/etc/system/local.'
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
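A sketch of the indexes.conf settings that drive this behavior, combined with the SSD/NAS tiering question above (all paths and the index name are hypothetical placeholders):

```ini
# indexes.conf -- hot/warm buckets on fast SSD, cold buckets on NAS
[my_index]
homePath   = /ssd/splunk/my_index/db
coldPath   = /mnt/nas/splunk/my_index/colddb
thawedPath = /mnt/nas/splunk/my_index/thaweddb
# Roll the oldest warm bucket to cold once this many warm buckets exist
maxWarmDBCount = 300
```

coldPath is the parameter that sets where older, less frequently accessed buckets land once they roll from warm to cold.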
Using the CLI on the forwarder, how could the current forwarder-to-indexer configuration be viewed?
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic sourcetype assignment
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment is associated with a unique GUID (Globally Unique Identifier).
GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
Scroll down to source = <string>
*Default: the input file path
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
Immediately after installation, what will a Universal Forwarder do first?
The following stanzas in inputs. conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file defines transformations that props.conf can reference, such as lookups, routing, or index-time field transforms, but the SEDCMD attribute itself belongs in props.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
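To see what that sed expression does, here is a minimal Python sketch of the equivalent substitution (Python's re module, not Splunk's SEDCMD engine, so this is an approximation for illustration only):

```python
import re

# Approximates SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
# The character class stops at commas, pipes, and whitespace, mirroring
# the Splunk expression; the whole "password=<value>" match is replaced.
def redact(event: str) -> str:
    return re.sub(r"password=[^,|\s]+", "####REDACTED####", event)

print(redact("user=alice password=hunter2, action=login"))
# user=alice ####REDACTED####, action=login
```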
Which Splunk configuration file is used to enable data integrity checking?
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
For single line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
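A minimal props.conf sketch applying this to a single-line source (the sourcetype name is hypothetical):

```ini
# props.conf -- hypothetical single-line sourcetype; disabling line
# merging avoids the overhead of the line-merge processor entirely.
[my_single_line_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```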
Which valid bucket types are searchable? (select all that apply)
Answer : A, B, C
Hot/warm/cold/thawed bucket types are searchable. Frozen isn't searchable because its either deleted at that state or archived.
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following 6 similar steps
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in.
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
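Following that best practice, a network input sketch in the app-local file might look like this (port, index, and app name are hypothetical):

```ini
# $SPLUNK_HOME/etc/apps/myApp/local/inputs.conf -- hypothetical TCP input
[tcp://:9514]
connection_host = dns
sourcetype = syslog
index = network
```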
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
curl https://mysplunkserver.example.com:8088/services/collector -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'. This is the correct curl command to send multiple events through HTTP Event Collector (HEC), which is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
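The payload format can be sketched in Python; HEC accepts multiple JSON event objects concatenated in one request body (the token below is the placeholder value from the question, not a working credential):

```python
import json

# Build several HEC event objects and concatenate them into one request
# body; HEC parses a stream of JSON objects, so no enclosing array is
# needed (comma separators, as in the curl example, also work).
events = ["Hello World", "Hola Mundo", "Hallo Welt"]
payload = "".join(json.dumps({"event": e}) for e in events)

# Header carrying the (placeholder) HEC token from the question.
headers = {"Authorization": "Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67"}

print(payload)
# {"event": "Hello World"}{"event": "Hola Mundo"}{"event": "Hallo Welt"}
```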
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
When using license pools, volume allocations apply to which Splunk components?
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
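A sketch of how the two files pair up (stanza and index names hypothetical): props.conf binds a transform class to a sourcetype, and transforms.conf defines what it does, here routing matching events to a different index:

```ini
# props.conf -- hypothetical sourcetype invoking a transform class
[my_sourcetype]
TRANSFORMS-routing = route_errors

# transforms.conf -- route events containing ERROR to another index
[route_errors]
REGEX = ERROR
DEST_KEY = _MetaData:Index
FORMAT = error_index
```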
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
Which Splunk component does a search head primarily communicate with?
Answer : A
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. When the username exists as a local Splunk account, that local account is used and LDAP is not consulted; LDAP login is only attempted when no local account of that name exists. This is adapted from the precedence of the Splunk authentication schema.
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, //var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
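Two hypothetical stanzas illustrating the difference:

```ini
# '...' recurses: matches secure.log in subdirectories of /var/log at
# any depth, e.g. /var/log/www/secure.log and /var/log/www/a/b/secure.log
[monitor:///var/log/.../secure.log]

# '*' matches a single path segment only: matches
# /var/log/www1/secure.log but not /var/log/www1/logs/secure.log
[monitor:///var/log/*/secure.log]
```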
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, when adding native users, a username and password are required.
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
Which additional component is required for a search head cluster?
Answer : A
The deployer. This is a Splunk Enterprise instance that distributes apps and other configurations to the cluster members. It stands outside the cluster and cannot run on the same instance as a cluster member. It can, however, under some circumstances, reside on the same instance as other Splunk Enterprise components, such as a deployment server or an indexer cluster master node.
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
Which of the following types of data count against the license daily quota?
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': Install the app or add-on that contains the inputs you want.
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
Which artifact is required in the request header when creating an HTTP event?
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture
https://docs.splunk.com/Splexicon:Serverclass
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
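A scripted-input sketch referencing a script in one of those locations (the script name and interval are hypothetical):

```ini
# inputs.conf -- hypothetical scripted input; the script sits in the
# app's bin/ directory, one of the three allowed locations above.
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect_metrics.sh]
interval = 60
sourcetype = my_metrics
```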
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. When the username exists as a local Splunk account, that local account is used and LDAP is not consulted; LDAP login is only attempted when no local account of that name exists. This is adapted from the precedence of the Splunk authentication schema.
Which of the following types of data count against the license daily quota?
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called "deployment clients". A deployment client can be a universal forwarder, a non-clustered indexer, or a search head.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface.
The other options are incorrect because:
A . On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed.
B . On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored.
Which optional configuration setting in inputs .conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not present, the forwarder uses the groups listed in the defaultGroup setting in the [tcpout] stanza of outputs.conf.
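A sketch of the pairing (group and host names hypothetical): _TCP_ROUTING in inputs.conf names a tcpout group defined in outputs.conf:

```ini
# inputs.conf -- send only this input to the 'security_indexers' group
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# outputs.conf -- the tcpout group referenced above
[tcpout:security_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```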
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': Install the app or add-on that contains the inputs you want.
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to section titled, How the cluster handles concurrent search quotas: 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
Which Splunk component requires a Forwarder license?
Answer : B
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
Which feature of Splunk's role configuration can be used to aggregate multiple roles intended for groups of
users?
How do you remove missing forwarders from the Monitoring Console?
Answer : D
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
Which of the following statements apply to directory inputs? {select all that apply)
Answer : A, C
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment is associated with a unique GUID (Globally Unique Identifier).
GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
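Since the channel value is a client-generated GUID, here is a minimal Python sketch of producing one for the X-Splunk-Request-Channel header (the token is a placeholder; endpoint details are omitted):

```python
import re
import uuid

# Indexer acknowledgment requires the client to supply a channel
# identifier in GUID form; uuid4() produces a suitable value.
channel = str(uuid.uuid4())

headers = {
    "Authorization": "Splunk <hec-token>",   # placeholder token
    "X-Splunk-Request-Channel": channel,     # client-chosen GUID channel
}

# GUIDs follow the 8-4-4-4-12 hexadecimal layout.
assert re.fullmatch(r"[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}", channel)
print(channel)
```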
When indexing a data source, which fields are considered metadata?
Answer : D
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
Which of the methods listed below supports multi-factor authentication?
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
Assume a file is being monitored and the data was incorrectly indexed to an exclusive index. The index is
cleaned and now the data must be reindexed. What other index must be cleaned to reset the input checkpoint
information for that file?
Which Splunk forwarder type allows parsing of data before forwarding to an indexer?
Answer : C
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
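As a sketch of the recommended layout, a minimal TCP network input stanza in an app's local inputs.conf might look like this (the app name, port, and index are hypothetical examples, not from the question):

```ini
# $SPLUNK_HOME/etc/apps/myApp/local/inputs.conf (app name is hypothetical)
[tcp://9514]
connection_host = dns
sourcetype = syslog
index = network
```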
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, D = 30 will pick up the whole timestamp correctly.
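A props.conf sketch matching this reasoning (the sourcetype name is a hypothetical placeholder):

```ini
# props.conf (sourcetype name is hypothetical)
[my_sourcetype]
# timestamp starts at the beginning of the event
TIME_PREFIX = ^
# look no further than 30 characters into the event for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 30
```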
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
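The setting involved here is ignoreOlderThan. A sketch (the file path is hypothetical; note this setting skips whole files based on modification time):

```ini
# inputs.conf (path is hypothetical)
[monitor:///var/log/app.log]
# skip files whose modification time is older than 45 days
ignoreOlderThan = 45d
```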
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
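A sketch of enabling this feature in indexes.conf (the index name is a hypothetical example):

```ini
# indexes.conf (index name is hypothetical)
[secure]
# have Splunk compute and store hashes for each bucket's rawdata
enableDataIntegrityControl = true
```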
What is the correct example to redact a plain-text password from raw events?
Answer : B
The correct answer is B. In props.conf:
[identity]
SEDCMD-redact_pw = s/password=([^,|\s]+)/####REDACTED####/g
The sed expression matches the password value and replaces it with ####REDACTED####. The g flag at the end means that the replacement is applied globally, not just to the first match.
Option A is incorrect because it uses the REGEX attribute instead of the SEDCMD attribute. The REGEX attribute is used to extract fields from events, not to modify them.
Option C is incorrect because it uses the transforms.conf file instead of the props.conf file. The transforms.conf file is used to define transformations such as lookups, field extractions, routing, and event filtering, but SEDCMD-based masking is configured in props.conf.
Option D is incorrect because it uses both the wrong attribute and the wrong file. There is no REGEX-redact_pw attribute in the transforms.conf file.
References: 1: Redact data from events - Splunk Documentation
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
Which file will be matched for the following monitor stanza in inputs. conf?
[monitor:///var/log/*/bar/*.txt]
Answer : C
The correct answer is C. /var/log/host_460352847/bar/file/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new data. The monitor stanza has the following syntax:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards (*) and regular expressions. The wildcards match any number of characters, including none, while the regular expressions match patterns of characters. The input path is case-sensitive and must be enclosed in double quotes if it contains spaces.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor any file with the .txt extension that is located in a subdirectory named bar under the /var/log directory. The subdirectory bar can be at any level under the /var/log directory, and the * wildcard will match any characters before or after the bar and .txt parts.
Therefore, the file /var/log/host_460352847/bar/file/foo.txt will be matched by the monitor stanza, as it meets the criteria. The other files will not be matched, because:
A . /var/log/host_460352847/temp/bar/file/csv/foo.txt has a .csv extension, not a .txt extension.
B . /var/log/host_460352847/bar/foo.txt is not located in a subdirectory under the bar directory, but directly in the bar directory.
D . /var/log/host_460352847/temp/bar/file/foo.txt is located in a subdirectory named file under the bar directory, not directly in the bar directory.
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following 6 similar steps
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in.
Which of the following statements describes how distributed search works?
Answer : C
URL https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Configuredistributedsearch
'To activate distributed search, you add search peers, or indexers, to a Splunk Enterprise instance that you designate as a search head. You do this by specifying each search peer manually.'
When would the following command be used?
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
How can native authentication be disabled in Splunk?
Answer : B
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
AboutsecuringyourSplunkconfigurationwithSSL
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
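The two behaviors can be illustrated with monitor stanzas (the paths are examples):

```ini
# inputs.conf examples
# * matches one path segment: matches /var/log/www1/secure.log,
# but not /var/log/www1/archive/secure.log
[monitor:///var/log/*/secure.log]

# ... recurses: matches secure.log at any depth below /var/log
[monitor:///var/log/.../secure.log]
```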
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
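A hedged authentication.conf sketch, assuming one of the sample scripts shipped under Splunk's authScriptSamples directory (the script path is an example, not a requirement):

```ini
# authentication.conf sketch; scriptPath value is an example
[authentication]
authType = Scripted
authSettings = script

[script]
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
```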
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
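An indexes.conf sketch of the settings that drive this rollover (the paths and limit shown are illustrative values, not defaults to rely on):

```ini
# indexes.conf (values are examples)
[main]
homePath = $SPLUNK_DB/defaultdb/db
coldPath = $SPLUNK_DB/defaultdb/colddb
# when the warm bucket count exceeds this, the oldest warm bucket rolls to cold
maxWarmDBCount = 300
```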
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor://var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer. The indexer then converts the event time to UTC and stores it in the _time field.
The other options are incorrect because:
A . Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above.
B . The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone.
C . The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone.
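A props.conf sketch of rule 2, the TZ attribute (the host pattern and zone are hypothetical):

```ini
# props.conf (host pattern is hypothetical)
[host::web-*]
TZ = US/Eastern
```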
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, when adding native users, a username and password are required.
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
Which of the following enables compression for universal forwarders in outputs. conf ?
A)
B)
C)
D)
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
# Compression
#
# This example sends compressed events to the remote indexer.
# NOTE: Compression can be enabled for TCP or SSL outputs only.
# The receiver input port should also have compression enabled.
[tcpout]
server = splunkServer.example.com:4433
compressed = true
Which setting allows the configuration of Splunk to allow events to span over more than one line?
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
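A hedged outputs.conf sketch for a search head forwarding its internal logs to the indexer tier (the group name, host names, and ports are examples):

```ini
# outputs.conf on the search head (names and ports are examples)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```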
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory, $SPLUNK_HOME/etc/system/local
The LINE_BREAKER attribute is configured in which configuration file?
Answer : A
What is the default character encoding used by Splunk during the input phase?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Configurecharactersetencoding
'Configure character set encoding. Splunk software attempts to apply UTF-8 encoding to your sources by default. If a source doesn't use UTF-8 encoding or is a non-ASCII file, Splunk software tries to convert data from the source to UTF-8 encoding unless you specify a character set to use by setting the CHARSET key in the props.conf file.'
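A props.conf sketch overriding the default encoding (the sourcetype name and charset are examples):

```ini
# props.conf (sourcetype name is hypothetical)
[my_legacy_sourcetype]
CHARSET = ISO-8859-1
```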
Which valid bucket types are searchable? (select all that apply)
Answer : A, B, C
Hot, warm, cold, and thawed bucket types are searchable. Frozen isn't searchable because at that stage it is either deleted or archived.
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
Which artifact is required in the request header when creating an HTTP event?
Immediately after installation, what will a Universal Forwarder do first?
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks in into 64K blocks, and annotates each block with some metadata keys.'
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
When are knowledge bundles distributed to search peers?
Answer : D
'The search head replicates the knowledge bundle periodically in the background or when initiating a search.' 'As part of the distributed search process, the search head replicates and distributes its knowledge objects to its search peers, or indexers. Knowledge objects include saved searches, event types, and other entities used in searching across indexes. The search head needs to distribute this material to its search peers so that they can properly execute queries on its behalf.'
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
What are the required stanza attributes when configuring transforms.conf to manipulate or remove events?
Answer : C
REGEX = <regular expression>
* Enter a regular expression to operate on your data.
FORMAT = <string>
* NOTE: This option is valid for both index-time and search-time field extraction. Index-time field extraction configurations require the FORMAT setting. The FORMAT setting is optional for search-time field extraction configurations.
* This setting specifies the format of the event, including any field names or values you want to add.
DEST_KEY = <key>
* NOTE: This setting is only valid for index-time field extractions.
* Specifies where Splunk software stores the expanded FORMAT results in accordance with the REGEX match.
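Putting the three attributes together, a common sketch routes unwanted events to nullQueue so they are discarded before indexing (the sourcetype name and pattern are hypothetical):

```ini
# transforms.conf
[setnull]
REGEX = ^\s*DEBUG
DEST_KEY = queue
FORMAT = nullQueue

# props.conf (sourcetype name is hypothetical)
[my_sourcetype]
TRANSFORMS-null = setnull
```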
In case of a conflict between a whitelist and a blacklist input setting, which one is used?
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients. But it resided in the $SPLUNK_HOME/etc/deployment-apps location in the deployment server.
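A serverclass.conf sketch on the deployment server referencing an app stored under $SPLUNK_HOME/etc/deployment-apps (the server class and app names are hypothetical):

```ini
# serverclass.conf (names are hypothetical)
[serverClass:linux_web]
whitelist.0 = web-*

[serverClass:linux_web:app:my_inputs_app]
restartSplunkd = true
```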
The following stanzas in inputs.conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
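For comparison, a TCP input avoids this silent-loss mode because TCP retransmits and the sender can buffer during outages (the port shown is an example):

```ini
# inputs.conf (port is an example)
[tcp://514]
connection_host = dns
sourcetype = syslog
```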
When would the following command be used?
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: the search head dispatches searches to the peers; peers run searches in parallel and return their portion of results; the search head consolidates the individual results and prepares reports.
Which Splunk configuration file is used to enable data integrity checking?
Which data pipeline phase is the last opportunity for defining event boundaries?
Answer : C
The parsing phase is the process of extracting fields and values from raw data. The parsing phase respects LINE_BREAKER, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings in props.conf. These settings determine how Splunk breaks the data into events based on certain criteria, such as timestamps or regular expressions. The event boundaries are defined by the props.conf file, which can be modified by the administrator. Therefore, the parsing phase is the last opportunity for defining event boundaries.
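A props.conf sketch of parsing-phase event boundary settings (the sourcetype name and break pattern are hypothetical):

```ini
# props.conf (sourcetype name is hypothetical)
[app_stacktrace]
SHOULD_LINEMERGE = false
# break events only before a line starting with an ISO date
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```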
How do you remove missing forwarders from the Monitoring Console?
Answer : D
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
What are the minimum required settings when creating a network input in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Inputsconf
[tcp://<remote server>:<port>]
*Configures the input to listen on a specific TCP network port.
*If a <remote server> makes a connection to this instance, the input uses this stanza to configure itself.
*If you do not specify <remote server>, this stanza matches all connections on the specified port.
*Generates events with source set to 'tcp:<port>'.
*If you do not specify a sourcetype, generates events with sourcetype set to 'tcp-raw'.
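The quoted spec can be condensed into a minimal stanza. A sketch, assuming an arbitrary port 9514; the optional settings shown are illustrative, not required:

```
# Minimal network input: only the protocol and port are required.
[tcp://9514]

# Optional settings, shown for illustration only:
# connection_host = ip
# sourcetype = syslog
# index = main
```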
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
The universal forwarder has which capabilities when sending data? (select all that apply)
How often does Splunk recheck the LDAP server?
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
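These two reasons can be sketched in indexes.conf; the index names and retention values below are hypothetical:

```
# Hypothetical indexes.conf: different retention per index.
[security]
homePath   = $SPLUNK_DB/security/db
coldPath   = $SPLUNK_DB/security/colddb
thawedPath = $SPLUNK_DB/security/thaweddb
frozenTimePeriodInSecs = 31536000    # ~1 year before data is frozen

[performance]
homePath   = $SPLUNK_DB/performance/db
coldPath   = $SPLUNK_DB/performance/colddb
thawedPath = $SPLUNK_DB/performance/thaweddb
frozenTimePeriodInSecs = 604800      # ~7 days before data is frozen
```

User access is then restricted per index through each role's srchIndexesAllowed setting in authorize.conf.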
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
Which of the following methods will connect a deployment client to a deployment server? (select all that apply)
Which Splunk forwarder has a built-in license?
Answer : C
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk System Administration course PDF, when adding native users, a username and password are required.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
A new forwarder has been installed with a manually created deploymentclient.conf.
What is the next step to enable the communication between the forwarder and the deployment server?
Which of the following are supported configuration methods to add inputs on a forwarder? (select all that apply)
Answer : A, B, D
https://docs.splunk.com/Documentation/Forwarder/8.2.1/Forwarder/HowtoforwarddatatoSplunkEnterprise
'You can collect data on the universal forwarder using several methods. Define inputs on the universal forwarder with the CLI. You can use the CLI to define inputs on the universal forwarder. After you define the inputs, the universal forwarder collects data based on those definitions as long as it has access to the data that you want to monitor. Define inputs on the universal forwarder with configuration files. If the input you want to configure does not have a CLI argument for it, you can configure inputs with configuration files. Create an inputs.conf file in the directory $SPLUNK_HOME/etc/system/local.'
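As a sketch of the CLI method described above (the path and sourcetype are illustrative, not from the question):

```
# Define a monitor input on a universal forwarder via the CLI:
$SPLUNK_HOME/bin/splunk add monitor /var/log/messages -sourcetype syslog -index main
```

This writes a corresponding [monitor://...] stanza into a local inputs.conf on the forwarder.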
Which option on the Add Data menu is most useful for testing data ingestion without creating inputs.conf?
Answer : A
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order in which Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Configuration parameters and the data pipeline
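As an illustration of how several of the parsing-phase settings above combine, here is a hypothetical props.conf sourcetype stanza (all names and values are invented):

```
[my_custom:log]
# Parsing phase: event breaking
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TRUNCATE = 10000
# Parsing phase: timestamp extraction
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```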
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the
Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the server attribute in the [tcpout] stanza of outputs.conf to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on Windows machines to monitor remote Windows data.'
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
Answer : D
ignoreOlderThan: This setting filters files for monitoring based on their age. It does not prevent indexing of old data already in a file that still receives updates.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
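A sketch of a followTail monitor stanza, with a hypothetical file path and sourcetype:

```
# Start reading at the end of the file, skipping pre-existing old data.
[monitor:///var/log/huge_legacy.log]
followTail = 1
sourcetype = legacy_app
```

Note that followTail matters only when the file has no checkpoint yet; once the input has a checkpoint in the fishbucket, Splunk resumes from it as usual.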
Which of the following authentication types requires scripting in Splunk?
Answer : D
https://answers.splunk.com/answers/131127/scripted-authentication.html
Scripted Authentication: An option for Splunk Enterprise authentication. You can use an authentication system that you have in place (such as PAM or RADIUS) by configuring authentication.conf to use a script instead of using LDAP or Splunk Enterprise default authentication.
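A minimal authentication.conf sketch for scripted authentication, assuming one of the sample scripts that ships under $SPLUNK_HOME/share/splunk/authScriptSamples/:

```
[authentication]
authType = Scripted
authSettings = script

[script]
# Path to the interpreter and the authentication script (illustrative):
scriptPath = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/share/splunk/authScriptSamples/pamScripted.py"
```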
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
Scroll down to source = <string>:
* Default: the input file path
Which of the following is the use case for the deployment server feature of Splunk?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
Which artifact is required in the request header when creating an HTTP event?
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D: place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
Which of the following types of data count against the license daily quota?
The Splunk administrator wants to ensure data is distributed evenly amongst the indexers. To do this, he runs
the following search over the last 24 hours:
index=*
What field can the administrator check to see the data distribution?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Usedefaultfields
The splunk_server field contains the name of the Splunk server containing the event. It is useful in a distributed Splunk environment. Example: restrict a search to the main index on a server named remote: splunk_server=remote index=main 404
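One way to eyeball the distribution, sketched in SPL (the time range matches the question):

```
index=* earliest=-24h
| stats count by splunk_server
```

A roughly equal count per splunk_server value suggests the forwarders are load balancing evenly across the indexers.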
What happens when there are conflicting settings within two or more configuration files?
Answer : D
When there are conflicting settings within two or more configuration files, the setting with the highest precedence is used. The precedence of configuration files is determined by a combination of the file type, the directory location, and the alphabetical order of the file names.
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, a value of 30 (answer D) picks up the whole timestamp correctly.
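In props.conf terms, the relevant stanza would look something like this sketch (the sourcetype name and TIME_FORMAT are illustrative, not from the question's event example):

```
[my_sourcetype]
TIME_PREFIX = ^
# Look at most 30 characters into the event for the timestamp:
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
```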
Which is a valid stanza for a network input?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Monitornetworkports
Bypass automatic source type assignment
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
In which phase of the index time process does the license metering occur?
Answer : C
'When ingesting event data, the measured data volume is based on the new raw data that is placed into the indexing pipeline. Because the data is measured at the indexing pipeline, data that is filtered and dropped prior to indexing does not count against the license volume quota.'
https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/HowSplunklicensingworks
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor://var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
The other options are incorrect because:
A. Coordinated Universal Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
Event processing occurs at which phase of the data pipeline?
Which of the following statements apply to directory inputs? {select all that apply)
Answer : A, C
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, //var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
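The contrast can be sketched with two hypothetical monitor stanzas:

```
# ... recurses through any number of subdirectory levels:
[monitor:///var/log/.../secure.log]
# matches /var/log/www1/secure.log and /var/log/www/logs/secure.log

# * matches a single path segment only:
[monitor:///var/log/*/secure.log]
# matches /var/log/www1/secure.log, but not /var/log/www/logs/secure.log
```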
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which of the following statements accurately describes using SSL to secure the feed from a forwarder?
Answer : A
About securing your Splunk configuration with SSL
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex1.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings2.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings2.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A. There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C. There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D. The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If you do not configure this setting, the forwarder sends data to the groups listed in defaultGroup in the [tcpout] stanza of outputs.conf.
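A sketch of the pairing between the two files (the group name, addresses, and path are hypothetical):

```
# outputs.conf -- define the target group:
[tcpout:security_indexers]
server = 10.1.2.1:9997,10.1.2.2:9997

# inputs.conf -- route this input's data only to that group:
[monitor:///var/log/secure]
_TCP_ROUTING = security_indexers
```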
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
Where are license files stored?
Answer : C
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
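As a hedged example of the masking use case, here is a hypothetical pair of stanzas that replaces the first twelve digits of a sixteen-digit number at index time (sourcetype and transform names are invented):

```
# props.conf
[my_sourcetype]
TRANSFORMS-mask_cc = mask_cc

# transforms.conf
[mask_cc]
REGEX = (.*?)\d{12}(\d{4})(.*)
FORMAT = $1XXXXXXXXXXXX$2$3
DEST_KEY = _raw
```

Writing the result to DEST_KEY = _raw rewrites the raw event before it is written to disk.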
Which of the following are supported options when configuring optional network inputs?
For single-line event sourcetypes, it is most efficient to set SHOULD_LINEMERGE to what value?
Answer : B
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking
Attribute : SHOULD_LINEMERGE = [true|false]
Description : When set to true, the Splunk platform combines several input lines into a single event, with configuration based on the settings described in the next section.
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
When does a warm bucket roll over to a cold bucket?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/HowSplunkstoresindexes
Once further conditions are met (for example, the index reaches some maximum number of warm buckets), the indexer begins to roll the warm buckets to cold, based on their age. It always selects the oldest warm bucket to roll to cold. Buckets continue to roll to cold as they age in this manner. Cold buckets reside in a different location from hot and warm buckets. You can configure the location so that cold buckets reside on cheaper storage.
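A sketch of the indexes.conf settings involved (the paths are illustrative; 300 is also the documented default for maxWarmDBCount):

```
[my_index]
homePath   = $SPLUNK_DB/my_index/db             # hot and warm buckets
coldPath   = /mnt/cheap_nas/my_index/colddb     # cold buckets on cheaper storage
thawedPath = $SPLUNK_DB/my_index/thaweddb
# When the warm bucket count exceeds this, the oldest warm bucket rolls to cold:
maxWarmDBCount = 300
```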
When configuring HTTP Event Collector (HEC) input, how would one ensure the events have been indexed?
Answer : A
Per the provided Splunk reference URL
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
'While HEC has precautions in place to prevent data loss, it's impossible to completely prevent such an occurrence, especially in the event of a network failure or hardware crash. This is where indexer acknowledgment comes in.'
Reference https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/AboutHECIDXAck
Who provides the Application Secret, Integration, and Secret keys, as well as the API Hostname when setting
up Duo for Multi-Factor Authentication in Splunk Enterprise?
Answer : A
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
In this example, if useACK is set to true and the maxQueueSize is set to 7MB, what is the size of the wait queue on this universal forwarder?
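No answer is recorded for this question in the dump. For reference, the outputs.conf documentation states that when useACK is enabled, the forwarder's wait queue maximum defaults to three times maxQueueSize, so the arithmetic here would be:

```python
# Per the outputs.conf spec, with useACK=true the wait queue maximum
# defaults to 3x maxQueueSize.
max_queue_size_mb = 7
wait_queue_mb = 3 * max_queue_size_mb
print(wait_queue_mb)  # 21
```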
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward The Upload option lets you upload a file or archive of files for indexing. When you choose Upload option, Splunk Web opens the upload process page. Monitor. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
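A minimal scripted-input stanza might look like the following; the script name, interval, sourcetype, and index are hypothetical:

```ini
# inputs.conf (illustrative)
[script://$SPLUNK_HOME/etc/apps/my_app/bin/cpu_stats.sh]
interval = 60
sourcetype = cpu_stats
index = main
disabled = 0
```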
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
Which artifact is required in the request header when creating an HTTP event?
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
Event processing occurs at which phase of the data pipeline?
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
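Data integrity control is enabled per index in indexes.conf; the index name below is hypothetical:

```ini
# indexes.conf (illustrative): enable hashing for one index.
# Hash files are written to the rawdata directory of each bucket.
[my_index]
enableDataIntegrityControl = 1
```

The stored hashes can later be verified with the splunk check-integrity CLI command.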
The following stanza in inputs.conf is currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
Which layers are involved in Splunk configuration file layering? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user: Global. Activities like indexing take place in a global context. They are independent of any app or user. For example, configuration files that determine monitoring or indexing behavior occur outside of the app and user context and are global in nature. App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital to search-time processing, where certain knowledge objects or actions might be valid only for specific users in specific apps.
Which Splunk component does a search head primarily communicate with?
Answer : A
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Which of the following is a benefit of distributed search?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Whatisdistributedsearch
Parallel reduce search processing If you struggle with extremely large high-cardinality searches, you might be able to apply parallel reduce processing to them to help them complete faster. You must have a distributed search environment to use parallel reduce search processing.
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
When using a directory monitor input, specific source type can be selectively overridden using which configuration file?
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': install the app or add-on that contains the inputs you want.
Which feature in Splunk allows Event Breaking, Timestamp extractions, and any advanced configurations
found in props.conf to be validated all through the UI?
Which of the following are supported options when configuring optional network inputs?
This file has been manually created on a universal forwarder
A new Splunk admin comes in and connects the universal forwarders to a deployment server and deploys the same app with a new
Which file is now monitored?
Answer : B
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
What is required when adding a native user to Splunk? (select all that apply)
Answer : A, B
According to the Splunk system admin course PDF, when adding native users, username and password are required.
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user'
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
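Running splunk set deploy-poll deployServer:port writes a deploymentclient.conf under $SPLUNK_HOME/etc/system/local; roughly (server name and port are placeholders):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf (generated; illustrative)
[deployment-client]

[target-broker:deploymentServer]
targetUri = deployServer:port
```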
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Using SEDCMD in props.conf allows raw data to be modified. With the given event below, which option will mask the first three digits of the AcctID field, resulting in the output: [22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Event:
[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=xxx5309
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata
Scrolling down to the section titled 'Define the sed script in props.conf' shows the correct syntax of an example, which confirms that the backreference \1 immediately precedes the /g flag.
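SEDCMD uses sed-style s/// substitution syntax; the same masking can be sketched with Python's re.sub. The unmasked digits below are hypothetical, since the dump only shows the masked output:

```python
import re

# SEDCMD-style substitution: mask the first three digits of AcctID.
# The equivalent props.conf line would be roughly:
#   SEDCMD-acct = s/AcctID=\d{3}(\d*)/AcctID=xxx\1/g
event = "[22/Oct/2018:15:50:21] VendorID=1234 Code=B AcctID=8675309"
masked = re.sub(r"AcctID=\d{3}(\d*)", r"AcctID=xxx\1", event)
print(masked)
```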
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
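A minimal props.conf sketch making the default explicit (the sourcetype name is hypothetical; SHOULD_LINEMERGE is shown as a common companion setting):

```ini
# props.conf (illustrative)
[my_sourcetype]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
```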
When deploying apps, which attribute in the forwarder management interface determines the apps that clients install?
Answer : C
<https://docs.splunk.com/Documentation/Splunk/8.0.6/Updating/Deploymentserverarchitecture>
https://docs.splunk.com/Splexicon:Serverclass
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
Which of the following is a valid distributed search group?
How often does Splunk recheck the LDAP server?
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.'
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
When indexing a data source, which fields are considered metadata?
Answer : D
Which of the following statements apply to directory inputs? (select all that apply)
Answer : A, C
Which of the following monitor inputs stanza headers would match all of the following files?
/var/log/www1/secure.log
/var/log/www/secure.l
/var/log/www/logs/secure.logs
/var/log/www2/secure.log
Answer : C
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
After how many warnings within a rolling 30-day period will a license violation occur with an enforced
Enterprise license?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Aboutlicenseviolations
'Enterprise Trial license. If you get five or more warnings in a rolling 30-day period, you are in violation of your license. Dev/Test license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. Developer license. If you generate five or more warnings in a rolling 30-day period, you are in violation of your license. BUT for Free license. If you get three or more warnings in a rolling 30-day period, you are in violation of your license.'
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
When working with an indexer cluster, what changes with the global precedence when comparing to a standalone deployment?
Answer : C
The app local directories move to second in the priority list. This is explained in the Splunk documentation, which states:
In a clustered environment, the precedence of configuration files changes slightly from that of a standalone deployment. The app local directories move to second in the priority list, after the peer-apps local directory. This means that any configuration files in the app local directories on the individual peers are overridden by configuration files of the same name and type in the peer-apps local directory on the master node.
What action is required to enable forwarder management in Splunk Web?
Answer : C
https://docs.splunk.com/Documentation/MSApp/2.0.3/MSInfra/Setupadeploymentserver
'To activate deployment server, you must place at least one app into %SPLUNK_HOME%\etc\deployment-apps on the host you want to act as deployment server. In this case, the app is the 'send to indexer' app you created earlier, and the host is the indexer you set up initially.
When enabling data integrity control, where does Splunk Enterprise store the hash files for each bucket?
Answer : B
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
Which Splunk component does a search head primarily communicate with?
Answer : A
When using license pools, volume allocations apply to which Splunk components?
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, /var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
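The difference can be sketched with two monitor stanzas (paths are illustrative):

```ini
# inputs.conf (illustrative)
[monitor:///var/log/.../secure.log]
# "..." recurses through any number of subdirectory levels:
# matches /var/log/www1/secure.log and /var/log/www/logs/secure.log

[monitor:///var/log/*/secure.log]
# "*" matches a single path segment only:
# matches /var/log/www1/secure.log but not /var/log/www/logs/secure.log
```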
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer. The indexer then converts the event time to UTC and stores it in the _time field.
The other options are incorrect because:
A. Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone.
Using the CLI on the forwarder, how could the current forwarder to indexer configuration be viewed?
Which optional configuration setting in inputs .conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
Specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not specified, data is sent to the groups present in defaultGroup in the [tcpout] stanza of outputs.conf.
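A minimal sketch of the pairing between the two files (group and server names are hypothetical):

```ini
# inputs.conf (illustrative)
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# outputs.conf
[tcpout:security_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```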
After configuring a universal forwarder to communicate with an indexer, which index can be checked via the Splunk Web UI for a successful connection?
Answer : D
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: From Splunk forwarders, Using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on Windows machines to monitor remote Windows data.'
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
How is a remote monitor input distributed to forwarders?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Usingforwardingagents
Scroll down to the section titled 'How to configure forwarder inputs' and the subsection 'Here are the main ways that you can configure data inputs on a forwarder': install the app or add-on that contains the inputs you want.
Which of the following apply to how distributed search works? (select all that apply)
Answer : A, C, D
Users log on to the search head and run reports: -- The search head dispatches searches to the peers -- Peers run searches in parallel and return their portion of results -- The search head consolidates the individual results and prepares reports
Which of the methods listed below supports multi-factor authentication?
When running a real-time search, search results are pulled from which Splunk component?
Answer : D
Using the Splunk reference URL https://docs.splunk.com/Splexicon:Searchpeer
'A search peer is a Splunk platform instance that responds to search requests from a search head. The term 'search peer' is usually synonymous with the indexer role in a distributed search topology. However, other instance types also have access to indexed data, particularly internal diagnostic data, and thus function as search peers when they respond to search requests for that data.'
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
Which configuration files are used to transform raw data ingested by Splunk? (Choose all that apply.)
Answer : A, D
use transformations with props.conf and transforms.conf to:
-- Mask or delete raw data as it is being indexed
--Override sourcetype or host based upon event values
-- Route events to specific indexes based on event content
-- Prevent unwanted events from being indexed
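One of the routing cases above can be sketched as a props.conf/transforms.conf pair; the stanza names, regex, and index are hypothetical:

```ini
# props.conf (illustrative)
[my_sourcetype]
TRANSFORMS-route = route_errors_to_ops

# transforms.conf
[route_errors_to_ops]
REGEX = ERROR
DEST_KEY = _MetaData:Index
FORMAT = ops_index
```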
Which Splunk component does a search head primarily communicate with?
Answer : A
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation, Splunk software determines the time zone to assign to a timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, and it knows its system time zone and sends that information along with the events to the indexer. The indexer then converts the event time to UTC and stores it in the _time field.
The other options are incorrect because:
A. Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone.
Which setting allows the configuration of Splunk to allow events to span over more than one line?
Which artifact is required in the request header when creating an HTTP event?
When configuring monitor inputs with whitelists or blacklists, what is the supported method of filtering the lists?
Which of the following is a valid distributed search group?
A)
B)
C)
D)
Answer : D
Which of the following are available input methods when adding a file input in Splunk Web? (Choose all that
apply.)
Answer : A, D
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Howdoyouwanttoadddata
The fastest way to add data to your Splunk Cloud instance or Splunk Enterprise deployment is to use Splunk Web. After you access the Add Data page, choose one of three options for getting data into your Splunk platform deployment with Splunk Web: (1) Upload, (2) Monitor, (3) Forward The Upload option lets you upload a file or archive of files for indexing. When you choose Upload option, Splunk Web opens the upload process page. Monitor. For Splunk Enterprise installations, the Monitor option lets you monitor one or more files, directories, network streams, scripts, Event Logs (on Windows hosts only), performance metrics, or any other type of machine data that the Splunk Enterprise instance has access to.
A Universal Forwarder is collecting two separate sources of data (A,B). Source A is being routed through a Heavy Forwarder and then to an indexer. Source B is being routed directly to the indexer. Both sets of data require the masking of raw text strings before being written to disk. What does the administrator need to do to
ensure that the masking takes place successfully?
Answer : D
The correct answer is D. Place both props.conf and transforms.conf on the Heavy Forwarder for source A, and place both props.conf and transforms.conf on the indexer for source B.
For source A, the data is routed through a heavy forwarder, which can parse the data before sending it to the indexer. Therefore, you need to place both props.conf and transforms.conf on the heavy forwarder for source A, so that the masking takes place before indexing.
For source B, the data is routed directly to the indexer, which parses and indexes the data. Therefore, you need to place both props.conf and transforms.conf on the indexer for source B, so that the masking takes place before indexing.
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
Which of the following applies only to Splunk index data integrity check?
Answer : C
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
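A minimal network input stanza in that location might look like this (the port, sourcetype, and index values are hypothetical):

```ini
# $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf
[tcp://:9514]
connection_host = dns
sourcetype = syslog
index = network
```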
Consider a company with a Splunk distributed environment in production. The Compliance Department wants to start using Splunk; however, they want to ensure that no one can see their reports or any other knowledge objects. Which Splunk Component can be added to implement this policy for the new team?
Answer : D
Which configuration file would be used to forward the Splunk internal logs from a search head to the indexer?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/DistSearch/Forwardsearchheaddata
Per the provided Splunk reference URL by @hwangho, scroll to section Forward search head data, subsection titled, 2. Configure the search head as a forwarder. 'Create an outputs.conf file on the search head that configures the search head for load-balanced forwarding across the set of search peers (indexers).'
Within props.conf, which stanzas are valid for data modification? (select all that apply)
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native authentication first. Unless the failure is due to a local account that does not exist, there is no attempt to use LDAP to log in. This is adapted from the precedence order of the Splunk authentication schemes.
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
How often does Splunk recheck the LDAP server?
An admin is running the latest version of Splunk with a 500 GB license. The current daily volume of new data
is 300 GB per day. To minimize license issues, what is the best way to add 10 TB of historical data to the
index?
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Aboutlicenseviolations
'An Enterprise license stack with a license volume of 100 GB of data per day or more does not currently violate.'
Which Splunk forwarder has a built-in license?
Answer : C
Which Splunk configuration file is used to enable data integrity checking?
In which phase do indexed extractions in props.conf occur?
Answer : B
The following items in the phases below are listed in the order Splunk applies them (i.e., LINE_BREAKER occurs before TRUNCATE).
Input phase
inputs.conf
props.conf
CHARSET
NO_BINARY_CHECK
CHECK_METHOD
CHECK_FOR_HEADER (deprecated)
PREFIX_SOURCETYPE
sourcetype
wmi.conf
regmon-filters.conf
Structured parsing phase
props.conf
INDEXED_EXTRACTIONS, and all other structured data header extractions
Parsing phase
props.conf
LINE_BREAKER, TRUNCATE, SHOULD_LINEMERGE, BREAK_ONLY_BEFORE_DATE, and all other line merging settings
TIME_PREFIX, TIME_FORMAT, DATETIME_CONFIG (datetime.xml), TZ, and all other time extraction settings and rules
TRANSFORMS which includes per-event queue filtering, per-event index assignment, per-event routing
SEDCMD
MORE_THAN, LESS_THAN
transforms.conf
stanzas referenced by a TRANSFORMS clause in props.conf
LOOKAHEAD, DEST_KEY, WRITE_META, DEFAULT_VALUE, REPEAT_MATCH
Splunk Docs: Configuration parameters and the data pipeline
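A single props.conf stanza can carry settings from several of the phases above; a hypothetical example (sourcetype and transform names are made up):

```ini
# props.conf -- comments mark the phase each setting belongs to
[acme:app_log]
CHARSET = UTF-8                      # input phase
LINE_BREAKER = ([\r\n]+)             # parsing phase: line breaking
SHOULD_LINEMERGE = false             # parsing phase: line merging
TIME_PREFIX = ^                      # parsing phase: time extraction
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRANSFORMS-route = route_to_index    # parsing phase: references a transforms.conf stanza
```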
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity, quoting the same Splunk reference URL from @giubal:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1.Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority
What is the correct curl to send multiple events through HTTP Event Collector?
Answer : B
The correct curl command to send multiple events through the HTTP Event Collector (HEC) is:
curl "https://mysplunkserver.example.com:8088/services/collector" \
  -H "Authorization: Splunk DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67" \
  -d '{"event": "Hello World"}, {"event": "Hola Mundo"}, {"event": "Hallo Welt"}'
HEC is a token-based API that allows you to send data to Splunk Enterprise from any application that can make an HTTP request. The command has the following components:
The URL of the HEC endpoint, which consists of the protocol (https), the hostname or IP address of the Splunk server (mysplunkserver.example.com), the port number (8088), and the service name (services/collector).
The header that contains the authorization token, which is a unique identifier that grants access to the HEC endpoint. The token is prefixed with Splunk and enclosed in quotation marks. The token value (DF4S7ZE4-3GS1-8SFS-E777-0284GG91PF67) is an example and should be replaced with your own token value.
The data payload that contains the events to be sent, which are JSON objects enclosed in curly braces and separated by commas. Each event object has a mandatory field called event, which contains the raw data to be indexed. The event value can be a string, a number, a boolean, an array, or another JSON object. In this case, the event values are strings that say hello in different languages.
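The shape of that batch body can be sketched in plain Python (nothing Splunk-specific; the helper name is made up) to show that the -d argument is simply concatenated JSON objects, one per event:

```python
import json

def hec_batch_payload(events):
    """Build an HEC-style batch body: one JSON object per event, joined by ', '."""
    return ", ".join(json.dumps({"event": e}) for e in events)

payload = hec_batch_payload(["Hello World", "Hola Mundo", "Hallo Welt"])
print(payload)
```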
Which of the following are supported options when configuring optional network inputs?
In this source definition the MAX_TIMESTAMP_LOOKAHEAD is missing. Which value would fit best?
Event example:
Answer : D
https://docs.splunk.com/Documentation/Splunk/6.2.0/Data/Configuretimestamprecognition
'Specify how far (how many characters) into an event Splunk software should look for a timestamp.' Since TIME_PREFIX = ^ and the timestamp occupies character positions 0-29, a value of D=30 will pick up the whole timestamp correctly.
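A hedged sketch of the resulting stanza (the sourcetype name and TIME_FORMAT are hypothetical, since the event example is not reproduced here):

```ini
# props.conf -- hypothetical sourcetype
[acme:web_log]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30   # look at most 30 characters past TIME_PREFIX
```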
On the deployment server, administrators can map clients to server classes using client filters. Which of the
following statements is accurate?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.1/Updating/Filterclients
Which of the following is a valid distributed search group?
The following stanzas in inputs.conf are currently being used by a deployment client:
[udp://145.175.118.177:1001]
connection_host = dns
sourcetype = syslog
Which of the following statements is true of data that is received via this input?
Answer : D
This is because the input type is UDP, which is an unreliable protocol that does not guarantee delivery, order, or integrity of the data packets. UDP does not have any mechanism to resend or acknowledge the data packets, so if Splunk is restarted, any data that was in transit or in the buffer may be dropped and not indexed.
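UDP's fire-and-forget behavior can be demonstrated with plain sockets (the port handling and message below are made up; this is not Splunk code):

```python
import socket

# A receiver standing in for splunkd's UDP input.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # OS-assigned port stands in for 1001
receiver.settimeout(5)
addr = receiver.getsockname()

# The sender gets no acknowledgment; if the receiver were down (e.g. during a
# splunkd restart), sendto() would still "succeed" and the datagram would be lost.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<34>Oct 11 22:14:15 host app: test message", addr)

data, _ = receiver.recvfrom(4096)
print(data)
```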
Using the CLI on the forwarder, how could the current forwarder-to-indexer configuration be viewed?
Which layers are involved in Splunk configuration file layering? (select all that apply)
Answer : A, B, C
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user: Global. Activities like indexing take place in a global context. They are independent of any app or user. For example, configuration files that determine monitoring or indexing behavior occur outside of the app and user context and are global in nature. App/user. Some activities, like searching, take place in an app or user context. The app and user context is vital to search-time processing, where certain knowledge objects or actions might be valid only for specific users in specific apps.
Which of the following are reasons to create separate indexes? (Choose all that apply.)
Answer : A, C
Different retention times: You can set different retention policies for different indexes, depending on how long you want to keep the data. For example, you can have an index for security data that has a longer retention time than an index for performance data that has a shorter retention time.
Restrict user permissions: You can set different access permissions for different indexes, depending on who needs to see the data. For example, you can have an index for sensitive data that is only accessible by certain users or roles, and an index for public data that is accessible by everyone.
Which optional configuration setting in inputs.conf allows you to selectively forward the data to specific indexer(s)?
Answer : A
_TCP_ROUTING specifies a comma-separated list of tcpout group names. Use this setting to selectively forward your data to specific indexers by specifying the tcpout groups that the forwarder should use when forwarding the data. Define the tcpout group names in the outputs.conf file in [tcpout:<tcpout_group_name>] stanzas. If this setting is not present, the forwarder uses the groups listed in defaultGroup in the [tcpout] stanza of the outputs.conf file.
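A sketch of the pairing (group and server names are hypothetical):

```ini
# inputs.conf on the forwarder
[monitor:///var/log/secure.log]
_TCP_ROUTING = security_indexers

# outputs.conf on the forwarder
[tcpout:security_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```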
Which of the following must be done to define user permissions when integrating Splunk with LDAP?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.1.3/Security/ConfigureLDAPwithSplunkWeb
'You can map either users or groups, but not both. If you are using groups, all users must be members of an appropriate group. Groups inherit capabilities from the highest level role they're a member of.' 'If your LDAP environment does not have group entries, you can treat each user as its own group.'
An admin updates the Role to Group mapping for external authentication. How does the change affect users that are currently logged into Splunk?
Answer : A
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP, SAML). Users already logged in will continue using their previously assigned roles until they log out and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
Assume a file is being monitored and the data was incorrectly indexed to an exclusive index. The index is
cleaned and now the data must be reindexed. What other index must be cleaned to reset the input checkpoint
information for that file?
A log file contains 193 days' worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
Which of the following is an appropriate description of a deployment server in a non-cluster environment?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Deploymentserverarchitecture
'A deployment client is a Splunk instance remotely configured by a deployment server'.
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment is associated with a unique GUID (Globally Unique Identifier).
GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
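A minimal scripted input stanza referencing a script in one of those locations (the script name, interval, and index are hypothetical):

```ini
# inputs.conf -- hypothetical scripted input
[script://$SPLUNK_HOME/etc/apps/my_app/bin/cpu_stats.sh]
interval = 60
sourcetype = cpu_stats
index = main
```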
Load balancing on a Universal Forwarder is not scaling correctly. The forwarder's outputs.conf and its tcpout stanza are set up correctly. What else could be the cause of this scaling issue? (select all that apply)
Answer : A, C
The possible causes of the load balancing issue on the Universal Forwarder are A and C. The receiving port and the DNS record are both factors that affect the ability of the Universal Forwarder to distribute data across multiple receivers. If the receiving port is not properly set up to listen on the right port, or if the DNS record used is not set up with a valid list of IP addresses, the Universal Forwarder might fail to connect to some or all of the receivers, resulting in poor load balancing.
You update a props.conf file while Splunk is running. You do not restart Splunk and you run this command: splunk btool props list --debug. What will the output be?
Answer : C
'The btool command simulates the merging process using the on-disk conf files and creates a report showing the merged settings.'
'The report does not necessarily represent what's loaded in memory. If a conf file change is made that requires a service restart, the btool report shows the change even though that change isn't active.'
Which authentication methods are natively supported within Splunk Enterprise? (select all that apply)
Answer : A, B, C
Splunk authentication: Provides Admin, Power and User by default, and you can define your own roles using a list of capabilities. If you have an Enterprise license, Splunk authentication is enabled by default. See Set up user authentication with Splunk's built-in system for more information. LDAP: Splunk Enterprise supports authentication with its internal authentication services or your existing LDAP server. See Set up user authentication with LDAP for more information. Scripted authentication API: Use scripted authentication to integrate Splunk authentication with an external authentication system, such as RADIUS or PAM. See Set up user authentication with external systems for more information. Note: Authentication, including native authentication, LDAP, and scripted authentication, is not available in Splunk Free.
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
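The default can be exercised with an ordinary regex split (plain Python, not Splunk internals); in Splunk, only the first capture group is consumed as the event boundary:

```python
import re

LINE_BREAKER = r"([\r\n]+)"   # Splunk's default: any run of CR/LF characters

stream = "event one\r\nevent two\nevent three"

# re.split with a capture group keeps the separators; drop them to get the events.
parts = re.split(LINE_BREAKER, stream)
events = [p for p in parts if not re.fullmatch(LINE_BREAKER, p)]
print(events)
```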
Which default Splunk role could be assigned to provide users with the following capabilities?
Create saved searches
Edit shared objects and alerts
Not allowed to create custom roles
In a distributed environment, which Splunk component is used to distribute apps and configurations to the
other Splunk instances?
Answer : D
https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Updateconfigurations
First line says it all: 'The deployment server distributes deployment apps to clients.'
User role inheritance allows what to be inherited from the parent role? (select all that apply)
Which forwarder type can parse data prior to forwarding?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Typesofforwarders
'A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.'
Where should apps be located on the deployment server that the clients pull from?
Answer : D
After an app is downloaded, it resides under $SPLUNK_HOME/etc/apps on the deployment clients, but it resides in $SPLUNK_HOME/etc/deployment-apps on the deployment server.
Which Splunk forwarder has a built-in license?
Answer : C
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
When using a directory monitor input, a specific source type can be selectively overridden using which configuration file?
Which parent directory contains the configuration files in Splunk?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Configurationfiledirectories
Section titled, Configuration file directories, states 'A detailed list of settings for each configuration file is provided in the .spec file for that configuration file. You can find the latest version of the .spec and .example files in the $SPLUNK_HOME/etc/system/README folder of your Splunk Enterprise installation...'
What is the default character encoding used by Splunk during the input phase?
Answer : A
https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Configurecharactersetencoding
'Configure character set encoding. Splunk software attempts to apply UTF-8 encoding to your sources by default. If a source doesn't use UTF-8 encoding or is a non-ASCII file, Splunk software tries to convert data from the source to UTF-8 encoding unless you specify a character set to use by setting the CHARSET key in the props.conf file.'
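If a source is known to use another encoding, it can be pinned per source (the path and charset below are hypothetical):

```ini
# props.conf -- force a specific encoding for one source
[source::/var/log/legacy_app.log]
CHARSET = ISO-8859-1
```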
Using the CLI on the forwarder, how could the current forwarder-to-indexer configuration be viewed?
If an update is made to an attribute in inputs.conf on a universal forwarder, on which Splunk component
would the fishbucket need to be reset in order to reindex the data?
Answer : A
https://www.splunk.com/en_us/blog/tips-and-tricks/what-is-this-fishbucket-thing.html
'Every Splunk instance has a fishbucket index, except the lightest of hand-tuned lightweight forwarders, and if you index a lot of files it can get quite large. As any other index, you can change the retention policy to control the size via indexes.conf'
Reference https://community.splunk.com/t5/Archive/How-to-reindex-data-from-a-forwarder/td-p/93310
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
When indexing a data source, which fields are considered metadata?
Answer : D
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
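For reference, cold (older) buckets are placed by the coldPath setting in indexes.conf; a sketch with a hypothetical index name and mount points:

```ini
# indexes.conf -- hypothetical paths
[web_logs]
homePath = /ssd/splunk/web_logs/db            # hot/warm buckets on fast SSD storage
coldPath = /mnt/nas/splunk/web_logs/colddb    # cold buckets on the slower NAS mount
thawedPath = $SPLUNK_DB/web_logs/thaweddb
```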
Which of the following statements describe deployment management? (select all that apply)
Answer : A, B
'All Splunk Enterprise instances functioning as management components need access to an Enterprise license. Management components include the deployment server, the indexer cluster manager node, the search head cluster deployer, and the monitoring console.'
https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Aboutdeploymentserver
'The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances.'
Which of the following are required when defining an index in indexes.conf? (select all that apply)
Answer : A, B, D
homePath = $SPLUNK_DB/hatchdb/db
coldPath = $SPLUNK_DB/hatchdb/colddb
thawedPath = $SPLUNK_DB/hatchdb/thaweddb
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#PER_INDEX_OPTIONS
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT: In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING: Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
The volume of data from collecting log files from 50 Linux servers and 200 Windows servers will require
multiple indexers. Following best practices, which types of Splunk component instances are needed?
Answer : C
Indexers, search head, deployment server, license master, universal forwarder. This is the combination of Splunk component instances that are needed to handle the volume of data from collecting log files from 50 Linux servers and 200 Windows servers, following the best practices. The roles and functions of these components are:
Indexers: These are the Splunk instances that index the data and make it searchable. They also perform some data processing, such as timestamp extraction, line breaking, and field extraction. Multiple indexers can be clustered together to provide high availability, data replication, and load balancing.
Search head: This is the Splunk instance that coordinates the search across the indexers and merges the results from them. It also provides the user interface for searching, reporting, and dashboarding. A search head can also be clustered with other search heads to provide high availability, scalability, and load balancing.
Deployment server: This is the Splunk instance that manages the configuration and app deployment for the universal forwarders. It allows the administrator to centrally control the inputs.conf, outputs.conf, and other configuration files for the forwarders, as well as distribute apps and updates to them.
License master: This is the Splunk instance that manages the licensing for the entire Splunk deployment. It tracks the license usage of all the Splunk instances and enforces the license limits and violations. It also allows the administrator to add, remove, or change licenses.
Universal forwarder: These are the lightweight Splunk instances that collect data from various sources and forward it to the indexers or other forwarders. They do not index or parse the data, but only perform minimal processing, such as compression and encryption. They are installed on the Linux and Windows servers that generate the log files.
The universal forwarder has which capabilities when sending data? (select all that apply)
When deploying apps on Universal Forwarders using the deployment server, what is the correct component and location of the app before it is deployed?
Answer : C
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called 'deployment clients'. A deployment client can be a universal forwarder, a non-clustered indexer, or a search head.
A deployment app is a directory that contains any content that you want to download to a set of deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise configurations, or other content, such as scripts, images, and supporting files.
You create a deployment app by creating a directory for it on the deployment server. The default location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its own subdirectory. The name of the subdirectory serves as the app name in the forwarder management interface.
The other options are incorrect because:
A . On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed.
B . On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored.
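The before/after layout can be sketched as a directory tree (the app name is hypothetical):

```
Deployment server (before deployment):
$SPLUNK_HOME/etc/deployment-apps/
    my_forwarder_outputs/
        local/
            outputs.conf

Universal Forwarder (after deployment):
$SPLUNK_HOME/etc/apps/
    my_forwarder_outputs/
        local/
            outputs.conf
```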
When using a directory monitor input, a specific source type can be selectively overridden using which configuration file?
An organization wants to collect Windows performance data from a set of clients, however, installing Splunk
software on these clients is not allowed. What option is available to collect this data in Splunk Enterprise?
Answer : B
'The Splunk platform collects remote Windows data for indexing in one of two ways: from Splunk forwarders, or using Windows Management Instrumentation (WMI). For Splunk Cloud deployments, you must use the Splunk Universal Forwarder on Windows machines to monitor remote Windows data.'
What is the default value of LINE_BREAKER?
Answer : B
Line breaking, which uses the LINE_BREAKER setting to split the incoming stream of data into separate lines. By default, the LINE_BREAKER value is any sequence of newlines and carriage returns. In regular expression format, this is represented as the following string: ([\r\n]+). You don't normally need to adjust this setting, but in cases where it's necessary, you must configure it in the props.conf configuration file on the forwarder that sends the data to Splunk Cloud Platform or a Splunk Enterprise indexer. The LINE_BREAKER setting expects a value in regular expression format.
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
When running the command shown below, what is the default path in which deploymentclient.conf is created?
splunk set deploy-poll deployServer:port
Answer : C
https://docs.splunk.com/Documentation/Splunk/8.1.1/Updating/Definedeploymentclasses#Ways_to_define_server_classes 'When you use forwarder management to create a new server class, it saves the server class definition in a copy of serverclass.conf under $SPLUNK_HOME/etc/system/local. If, instead of using forwarder management, you decide to directly edit serverclass.conf, it is recommended that you create the serverclass.conf file in that same directory, $SPLUNK_HOME/etc/system/local.'
Which pathway represents where a network input in Splunk might be found?
Answer : B
The correct answer is B. The network input in Splunk might be found in the $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network input, you need to specify the port number, the connection host, the source, and the sourcetype in the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be located in various directories, depending on the scope and priority of the settings. The most common locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that apply to the entire Splunk instance. The settings in this directory override the default settings.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all inputs that are specific to an app. You should not modify or copy the files in this directory.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs that are specific to an app. The settings in this directory override the default and system settings.
Therefore, the best practice is to create or edit the inputs.conf file in the $SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that you want to configure the network input for. This way, you can avoid modifying the default files and ensure that your settings are applied to the specific app.
The other options are incorrect because:
A . There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C . There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D . The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
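As an illustration of the recommended approach, a minimal network input stanza in $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf might look like the following sketch (the port number, index, and sourcetype values are hypothetical):

```ini
# Listen for raw TCP data on port 9999 (hypothetical port)
[tcp://9999]
connection_host = dns          # record the DNS name of the sending host
sourcetype = my_network_data   # hypothetical sourcetype
index = main
```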
A Splunk administrator has been tasked with developing a retention strategy to have frequently accessed data sets on SSD storage and to have older, less frequently accessed data on slower NAS storage. They have set a mount point for the NAS. Which parameter do they need to modify to set the path for the older, less frequently accessed data in indexes.conf?
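For context, storage tiering of this kind is expressed per index in indexes.conf, where homePath holds the hot/warm buckets and coldPath holds the older cold buckets. A hedged sketch, assuming the SSD and NAS mount points shown (index name and paths are hypothetical):

```ini
[my_index]
homePath   = /ssd/splunk/my_index/db          # hot/warm buckets on fast SSD storage
coldPath   = /mnt/nas/splunk/my_index/colddb  # older, less frequently accessed buckets on NAS
thawedPath = /mnt/nas/splunk/my_index/thaweddb
```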
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer acknowledgment capability?
Answer : A
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Each acknowledgment is associated with a unique GUID (Globally Unique Identifier).
GUID ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
Where can scripts for scripted inputs reside on the host file system? (select all that apply)
Answer : A, C, D
'Where to place the scripts for scripted inputs. The script that you refer to in $SCRIPT can reside in only one of the following places on the host file system:
$SPLUNK_HOME/etc/system/bin
$SPLUNK_HOME/etc/apps/<your_App>/bin
$SPLUNK_HOME/bin/scripts
As a best practice, put your script in the bin/ directory that is nearest to the inputs.conf file that calls your script on the host file system.'
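Putting the quoted best practice together, a scripted input stanza in an app's inputs.conf might look like this sketch (the script name, interval, and sourcetype are hypothetical):

```ini
# Script resides in $SPLUNK_HOME/etc/apps/my_app/bin/, the bin/ directory
# nearest to the inputs.conf that calls it
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect_stats.sh]
interval = 300              # run every 5 minutes
sourcetype = my_app:stats   # hypothetical sourcetype
index = main
```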
After an Enterprise Trial license expires, it will automatically convert to a Free license. How many days is an Enterprise Trial license valid before this conversion occurs?
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor://var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as part of indexing?
Answer : D
The correct answer is D. The timezone of the forwarder will be added to the event as part of indexing.
According to the Splunk documentation, Splunk software determines the time zone to assign to a timestamp using the following logic, in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk deployment; it knows its system time zone and sends that information along with the events to the indexer. The indexer then converts the event time to UTC and stores it in the _time field.
The other options are incorrect because:
A . Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above.
B . The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers; it does not process incoming data. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone.
C . The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone.
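If an administrator wanted to override this precedence for particular hosts, the TZ attribute from rule two could be set in props.conf on the indexer or heavy forwarder. A minimal sketch, assuming a hypothetical host naming pattern:

```ini
# Assign an explicit time zone to events from hosts matching nyc-*
[host::nyc-*]
TZ = America/New_York
```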
Which file will be matched for the following monitor stanza in inputs.conf?
[monitor:///var/log/*/bar/*.txt]
Answer : C
The correct answer is C. /var/log/host_460352847/bar/file/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new data. The monitor stanza has the following syntax:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards (*) and regular expressions. The wildcards match any number of characters, including none, while the regular expressions match patterns of characters. The input path is case-sensitive and must be enclosed in double quotes if it contains spaces.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor any file with the .txt extension that is located in a subdirectory named bar under the /var/log directory. The subdirectory bar can be at any level under the /var/log directory, and the * wildcard will match any characters before or after the bar and .txt parts.
Therefore, the file /var/log/host_460352847/bar/file/foo.txt will be matched by the monitor stanza, as it meets the criteria. The other files will not be matched, because:
A . /var/log/host_460352847/temp/bar/file/csv/foo.txt has a .csv extension, not a .txt extension.
B . /var/log/host_460352847/bar/foo.txt is not located in a subdirectory under the bar directory, but directly in the bar directory.
D . /var/log/host_460352847/temp/bar/file/foo.txt is located in a subdirectory named file under the bar directory, not directly in the bar directory.
Consider the following stanza in inputs.conf:
What will the value of the source field be for events generated by this scripted input?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Inputsconf
-Scroll down to source = <string>
*Default: the input file path
An index stores its data in buckets. Which default directories does Splunk use to store buckets? (Choose all that apply.)
Answer : C, D
What hardware attribute would need to be changed to increase the number of simultaneous searches (ad-hoc and scheduled) on a single search head?
Answer : B
https://docs.splunk.com/Documentation/Splunk/7.3.1/DistSearch/SHCarchitecture
Scroll down to the section titled 'How the cluster handles concurrent search quotas': 'Overall search quota. This quota determines the maximum number of historical searches (combined scheduled and ad hoc) that the cluster can run concurrently. This quota is configured with max_searches_per_cpu and related settings in limits.conf.'
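The quoted settings live in limits.conf on the search head; a sketch for reference (the values shown are the historical defaults, not tuning recommendations):

```ini
[search]
max_searches_per_cpu = 1   # concurrent historical searches allowed per CPU core
base_max_searches = 6      # constant added on top of the per-CPU figure
```

Because the quota scales with CPU count, adding CPU cores to the search head raises the number of simultaneous searches it can run.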
When Splunk is integrated with LDAP, which attribute can be changed in the Splunk UI for an LDAP user?
Running this search in a distributed environment:
On what Splunk component does the eval command get executed?
A company moves to a distributed architecture to meet the growing demand for the use of Splunk. What parameter can be configured to enable automatic load balancing in the Universal Forwarder to send data to the indexers?
Answer : D
To enable automatic load balancing, set the stanza to have a server value equal to a comma-separated list of IP addresses and indexer ports for each of the indexers in the environment. For example:
[tcpout]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
The forwarder then distributes data across all of the indexers in the list.
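A fuller outputs.conf sketch uses a named target group, which is the more common layout in practice (the group name here is hypothetical; the IP addresses are those from the example above):

```ini
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.1.1.1:9997,10.1.1.2:9997,10.1.1.3:9997
autoLB = true   # automatic load balancing across the listed indexers (the default)
```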
Which option accurately describes the purpose of the HTTP Event Collector (HEC)?
Answer : B
https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/UsetheHTTPEventCollector
'The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format. This process eliminates the need for a Splunk forwarder when you send application events.'
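On the receiving side, HEC and its tokens are defined in inputs.conf. A hedged sketch, assuming the default HEC port; the token name and GUID value are hypothetical placeholders:

```ini
# Enable the HTTP Event Collector endpoint
[http]
disabled = 0
port = 8088

# One stanza per token; clients authenticate with "Authorization: Splunk <token>"
[http://my_app_token]
token = 11111111-2222-3333-4444-555555555555   # hypothetical GUID
index = main
sourcetype = my_app:events
```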
The universal forwarder has which capabilities when sending data? (select all that apply)
What is the difference between the two wildcards ... and * for the monitor stanza in inputs.conf?
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Data/Specifyinputpathswithwildcards
... The ellipsis wildcard searches recursively through directories and any number of levels of subdirectories to find matches.
If you specify a folder separator (for example, //var/log/.../file), it does not match the first folder level, only subfolders.
* The asterisk wildcard matches anything in that specific folder path segment.
Unlike ..., * does not recurse through subfolders.
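The difference can be sketched with two hypothetical monitor stanzas (paths and file names are illustrative only):

```ini
# ... recurses through any depth of subdirectories:
# matches /var/log/a/access.log, /var/log/a/b/access.log, and deeper
[monitor:///var/log/.../access.log]

# * matches within a single path segment only:
# matches /var/log/www1/access.log but not /var/log/www1/old/access.log
[monitor:///var/log/*/access.log]
```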
What is the correct order of steps in Duo Multifactor Authentication?
Answer : C
Using the provided DUO/Splunk reference URL https://duo.com/docs/splunk
Scroll down to the Network Diagram section and note the following 6 similar steps
1 - Splunk connection initiated
2 - Primary authentication
3 - Splunk connection established to Duo Security over TCP port 443
4 - Secondary authentication via Duo Security's service
5 - Splunk receives authentication response
6 - Splunk session logged in.
The priority of layered Splunk configuration files depends on the file's:
Answer : C
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
'To determine the order of directories for evaluating configuration file precedence, Splunk software considers each file's context. Configuration files operate in either a global context or in the context of the current app and user.'
Which of the following indexes come pre-configured with Splunk Enterprise? (select all that apply)
Which network input option provides durable file-system buffering of data to mitigate data loss due to network outages and splunkd restarts?
Answer : C
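Durable buffering for a network input is configured with a persistent queue, which spills events to disk when the in-memory queue fills and survives splunkd restarts. A hedged inputs.conf sketch (the port and sizes are hypothetical):

```ini
[tcp://9997]
queueSize = 10MB              # in-memory queue for this input
persistentQueueSize = 500MB   # on-disk buffer used when the memory queue is full
```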
Which of the following are supported options when configuring optional network inputs?
In which Splunk configuration is the SEDCMD used?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Forwarddatatothird-partysystemsd
'You can specify a SEDCMD configuration in props.conf to address data that contains characters that the third-party server cannot process. '
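A common SEDCMD use is masking sensitive values at parse time. A props.conf sketch, assuming a hypothetical sourcetype and card-number pattern:

```ini
[my_sourcetype]
# Mask all but the last four digits of card-like numbers before indexing
SEDCMD-mask_cc = s/\d{4}-\d{4}-\d{4}-(\d{4})/xxxx-xxxx-xxxx-\1/g
```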
Which of the following is accurate regarding the input phase?
Answer : D
https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline 'The data pipeline segments in depth. INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored. PARSING - Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules.'
What type of data is counted against the Enterprise license at a fixed 150 bytes per event?
Answer : B
During search time, which directory of configuration files has the highest precedence?
Answer : D
Adding further clarity and quoting the same Splunk reference URL from @giubal:
'To keep configuration settings consistent across peer nodes, configuration files are managed from the cluster master, which pushes the files to the slave-app directories on the peer nodes. Files in the slave-app directories have the highest precedence in a cluster peer's configuration. Here is the expanded precedence order for cluster peers:
1. Slave-app local directories -- highest priority
2. System local directory
3. App local directories
4. Slave-app default directories
5. App default directories
6. System default directory -- lowest priority'
How is data handled by Splunk during the input phase of the data ingestion process?
Answer : A
https://docs.splunk.com/Documentation/Splunk/8.0.5/Deploy/Datapipeline
'In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys.'
Which Splunk indexer operating system platform is supported when sending logs from a Windows universal forwarder?
Answer : A
'The forwarder/indexer relationship can be considered platform agnostic (within the sphere of supported platforms) because they exchange their data handshake (and the data, if you wish) over TCP.'
When are knowledge bundles distributed to search peers?
Answer : D
'The search head replicates the knowledge bundle periodically in the background or when initiating a search.' 'As part of the distributed search process, the search head replicates and distributes its knowledge objects to its search peers, or indexers. Knowledge objects include saved searches, event types, and other entities used in searching across indexes. The search head needs to distribute this material to its search peers so that they can properly execute queries on its behalf.'
A log file contains 193 days worth of timestamped events. Which monitor stanza would be used to collect data 45 days old and newer from that log file?
Answer : D
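Assuming the intended setting is ignoreOlderThan, which tells the monitor input to skip files whose modification time exceeds the given age, the stanza would look like this sketch (the log path is hypothetical; note the setting acts on a file's modification time, not on individual event timestamps):

```ini
[monitor:///var/log/app.log]
ignoreOlderThan = 45d   # skip the file if its modtime is older than 45 days
```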
What happens when the same username exists in Splunk as well as through LDAP?
Answer : C
The Splunk platform attempts native (local) authentication first. When the same username exists both locally and in LDAP, the local account takes precedence, and Splunk does not fall back to LDAP for that user. This follows the precedence of the Splunk authentication schema.
In which scenario would a Splunk Administrator want to enable data integrity check when creating an index?
Answer : D
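Data integrity checking is enabled per index in indexes.conf; when set, Splunk computes hashes on slices of the raw data so the index can later be verified for tampering. A sketch with a hypothetical index name:

```ini
[sensitive_data]
enableDataIntegrityControl = true   # hash raw data slices for later integrity verification
```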
A non-clustered Splunk environment has three indexers (A,B,C) and two search heads (X, Y). During a search executed on search head X, indexer A crashes. What is Splunk's response?
Answer : A
This is explained in the Splunk documentation, which states:
If an indexer goes down during a search, the search head notifies you that the results might be incomplete. The search head does not attempt to re-run the search on another indexer.
When would the following command be used?