Splunk Certified Cybersecurity Defense Engineer SPLK-5002 Exam Questions

Page: 1 / 14
Total 83 questions
Question 1

A security analyst wants to validate whether a newly deployed SOAR playbook is performing as expected.

What steps should they take?



Answer : A

A SOAR (Security Orchestration, Automation, and Response) playbook is a set of automated actions designed to respond to security incidents. Before deploying it in a live environment, a security analyst must ensure that it operates correctly, minimizes false positives, and doesn't disrupt business operations.

Key Reasons for Using Simulated Incidents:

Ensures that the playbook executes correctly and follows the expected workflow.

Identifies false positives or incorrect actions before deployment.

Tests integrations with other security tools (SIEM, firewalls, endpoint security).

Provides a controlled testing environment without affecting production.

How to Test a Playbook in Splunk SOAR?

1. Use the 'Test Connectivity' feature -- ensures that APIs and integrations work.

2. Simulate an incident -- manually trigger an alert similar to a real attack (e.g., a phishing email or failed admin login).

3. Review the execution path -- check each step in the playbook debugger to verify correct actions.

4. Analyze logs and alerts -- validate that Splunk ES logs, security alerts, and remediation steps are correct.

5. Fine-tune based on results -- modify the playbook logic to reduce unnecessary alerts or excessive automation.
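After a simulated incident runs, the execution results can also be verified from the search head. A minimal SPL sketch, assuming SOAR run data is exported to a hypothetical index named `soar` with a `phantom:playbook_run` sourcetype (index, sourcetype, and field names are assumptions; adjust for your deployment):

```
index=soar sourcetype="phantom:playbook_run" playbook_name="simulated_phishing_response"
| stats count by status, message
| sort - count
```

Each action's status and message should match the workflow observed in the playbook debugger; unexpected failure statuses point to broken integrations or logic branches.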

Why Not the Other Options?

B. Monitor the playbook's actions in real-time environments -- risky without prior validation; the playbook can cause disruptions if it misfires.

C. Automate all tasks immediately -- not best practice; gradual deployment ensures better security control and monitoring.

D. Compare with existing workflows -- good practice, but it does not validate the playbook's real execution.

Reference & Learning Resources

Splunk SOAR Documentation: https://docs.splunk.com/Documentation/SOAR

Testing Playbooks in Splunk SOAR: https://www.splunk.com/en_us/products/soar.html

SOAR Playbook Debugging Best Practices: https://splunkbase.splunk.com


Question 2

What are essential practices for generating audit-ready reports in Splunk? (Choose three)



Answer : A, C, D

Audit-ready reports help demonstrate compliance with security policies and regulations (e.g., PCI DSS, HIPAA, ISO 27001, NIST).

1. Including Evidence of Compliance with Regulations (A)

Reports must show security controls, access logs, and incident response actions.

Example:

A PCI DSS compliance report tracks privileged user access logs and unauthorized access attempts.

2. Ensuring Reports Are Time-Stamped (C)

Provides chronological accuracy for security incidents and log reviews.

Example:

Incident response logs should include detection, containment, and remediation timestamps.

3. Automating Report Scheduling (D)

Enables automatic generation and distribution of reports to stakeholders.

Example:

A weekly audit report on security logs is auto-emailed to compliance officers.
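The three practices above can be combined into a single scheduled report. A minimal SPL sketch, assuming authentication events live in a hypothetical index named `auth` with `user`, `src_ip`, and `action` fields (all assumptions; adapt to your data model) -- saved as a report, it can be scheduled for weekly delivery to compliance stakeholders:

```
index=auth action=failure user=admin*
| table _time, user, src_ip, action
| sort - _time
```

The `_time` column provides the required time-stamping, the filtered events provide the compliance evidence, and the report scheduler handles distribution.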

Incorrect Answers:

B. Excluding all technical metrics -- security reports must include event logs, IP details, and correlation results.

E. Using predefined report templates exclusively -- reports should be customized for compliance needs.

Additional Resources:

Splunk Compliance Reporting Guide

Automating Security Reports in Splunk


Question 3

An organization uses MITRE ATT&CK to enhance its threat detection capabilities.

How should this methodology be incorporated?



Answer : A

MITRE ATT&CK is a threat intelligence framework that helps security teams map attack techniques to detection rules.

1. Develop Custom Detection Rules Based on Attack Techniques (A)

Maps Splunk correlation searches to MITRE ATT&CK techniques to detect adversary behaviors.

Example:

To detect T1078 (Valid Accounts):

index=auth_logs action=failed | stats count by user, src_ip

If an account shows an unusually high failure count or logs in from anomalous locations, trigger an alert.
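The search above can be extended to flag accounts authenticating from many distinct sources, a common indicator for T1078. A minimal sketch -- the index name `auth_logs` and the threshold of 3 distinct sources are assumptions to tune for your environment:

```
index=auth_logs action=success
| stats dc(src_ip) as distinct_sources by user
| where distinct_sources > 3
```

Mapping a search like this to the ATT&CK technique ID in the correlation search's annotations lets Splunk ES report detection coverage against the framework.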

Incorrect Answers:

B. Use it only for reporting after incidents -- MITRE ATT&CK should be used proactively for threat detection.

C. Rely solely on vendor-provided threat intelligence -- custom rules tailored to an organization's threat landscape are more effective.

D. Deploy it as a replacement for current detection systems -- MITRE ATT&CK complements existing SIEM/EDR tools; it does not replace them.

Additional Resources:

MITRE ATT&CK & Splunk

Using MITRE ATT&CK in SIEMs


Question 4

What is the main benefit of automating case management workflows in Splunk?



Answer : C

Automating case management workflows in Splunk streamlines incident response and reduces manual overhead, allowing analysts to focus on higher-value tasks.

Main Benefits of Automating Case Management:

Reduces Response Times (C)

Automatically assigns cases to analysts based on predefined rules.

Triggers playbooks and workflows in Splunk SOAR to handle common incidents.

Improves Analyst Productivity

Reduces time spent on manual case creation and updates.

Provides integrated case tracking across Splunk and ITSM tools (e.g., ServiceNow, Jira).

Incorrect Answers:

A. Eliminating the need for manual alerts -- alerts still require analyst verification and triage.

B. Enabling dynamic storage allocation -- case management does not impact Splunk storage.

D. Minimizing the use of correlation searches -- correlation searches remain essential for detection, even with automation.


Additional Resources:

Splunk Case Management Best Practices

Automating Incident Response with Splunk SOAR

Question 5

Which Splunk configuration ensures events are parsed and indexed only once for optimal storage?



Answer : C

Why Use Index-Time Transformations for One-Time Parsing & Indexing?

Splunk parses and indexes data once during ingestion to ensure efficient storage and search performance. Index-time transformations ensure that logs are:

Parsed, transformed, and stored efficiently before indexing.

Normalized before indexing, so the SOC team doesn't need to clean up fields later.

Processed once, ensuring optimal storage utilization.

Example of an Index-Time Transformation in Splunk:

Scenario: The SOC team needs to mask sensitive data in security logs before storing them in Splunk.

Solution: Use an index-time rule (e.g., SEDCMD in props.conf) to:

Redact confidential fields (e.g., obfuscate Social Security Numbers in logs).

Rename fields for consistency before indexing.
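A minimal props.conf sketch of the masking step, assuming a hypothetical sourcetype named `secure:applogs` -- SEDCMD rewrites the raw event at index time, before it is written to disk:

```
[secure:applogs]
# Mask anything that looks like a US Social Security Number before indexing
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
```

Because the substitution happens during parsing, the sensitive values never reach the index and cannot be recovered by any later search.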


Question 6

A company wants to implement risk-based detection for privileged account activities.

What should they configure first?



Answer : A

Why Configure Asset & Identity Information for Privileged Accounts First?

Risk-based detection focuses on identifying and prioritizing threats based on the severity of their impact. For privileged accounts (admins, domain controllers, finance users), understanding who they are, what they access, and how they behave is critical.

Key Steps for Risk-Based Detection in Splunk ES:

1. Define privileged accounts and groups -- identify high-risk users (Admin, HR, Finance, CISO).

2. Assign risk scores -- apply higher scores to actions involving privileged users.

3. Enable identity and asset correlation -- link users to assets for better detection.

4. Monitor for anomalies -- detect abnormal login patterns, excessive file access, or unusual privilege escalation.

Example in Splunk ES:

A domain admin logs in from an unusual location -> trigger a high-risk alert.

A finance director downloads sensitive payroll data at midnight -> escalate for investigation.
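Once privileged identities are defined, a correlation search can single out those accounts for anomaly checks. A minimal SPL sketch, assuming a hypothetical index named `auth` and naming conventions like `admin*` for privileged users (both assumptions; in practice the identity framework in Splunk ES supplies this categorization):

```
index=auth action=success user=admin*
| iplocation src_ip
| stats dc(Country) as countries, values(Country) as seen_in by user
| where countries > 1
```

Accounts matching this pattern would then receive an elevated risk score, feeding the risk-based alerting pipeline rather than firing a standalone alert.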

Why Not the Other Options?

B. Correlation searches with low thresholds -- may generate excessive false positives, overwhelming the SOC.

C. Event sampling for raw data -- doesn't provide context for risk-based detection.

D. Automated dashboards for all accounts -- useful for visibility, but not the first step for risk-based security.

Reference & Learning Resources

Splunk ES Risk-Based Alerting (RBA): https://www.splunk.com/en_us/blog/security/risk-based-alerting.html

Privileged Account Monitoring in Splunk: https://docs.splunk.com/Documentation/ES/latest/User/RiskBasedAlerting

Implementing Privileged Access Security (PAM) with Splunk: https://splunkbase.splunk.com


Question 7

Which sourcetype configurations affect data ingestion? (Choose three)



Answer : A, B, D

The sourcetype in Splunk defines how incoming machine data is interpreted, structured, and stored. Proper sourcetype configurations ensure accurate event parsing, indexing, and searching.

1. Event Breaking Rules (A)

Determines how Splunk splits raw logs into individual events.

If misconfigured, a single event may be broken into multiple fragments or multiple log lines may be combined incorrectly.

Controlled using LINE_BREAKER and BREAK_ONLY_BEFORE settings.

2. Timestamp Extraction (B)

Extracts and assigns timestamps to events during ingestion.

Incorrect timestamp configuration leads to misplaced events in time-based searches.

Uses TIME_PREFIX, MAX_TIMESTAMP_LOOKAHEAD, and TIME_FORMAT settings.

3. Line Merging Rules (D)

Controls whether multiline events should be combined into a single event.

Useful for logs like stack traces or multi-line syslog messages.

Uses SHOULD_LINEMERGE and LINE_BREAKER settings.
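The three settings above typically live together in one sourcetype stanza in props.conf. A minimal sketch for a hypothetical multiline application log where each event starts with an ISO timestamp -- the stanza name and regexes are assumptions:

```
[custom:applog]
# Break events where a newline is followed by a timestamp; group (1) is discarded
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
SHOULD_LINEMERGE = false
# Timestamp sits at the start of each event
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Using LINE_BREAKER with SHOULD_LINEMERGE = false is generally faster than line merging, since events are split in a single pass during parsing.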

Incorrect Answer:

C. Data Retention Policies

Affects storage and deletion, not data ingestion itself.

Additional Resources:

Splunk Sourcetype Configuration Guide

Event Breaking and Line Merging

