When can you use the Grafana Heatmap panel?
Answer : B
The Grafana Heatmap panel is best suited for visualizing histogram metrics collected from Prometheus. Histograms provide bucketed data distributions (e.g., request durations, response sizes), and the heatmap effectively displays these as a two-dimensional density chart over time.
In Prometheus, histogram metrics are exposed as multiple time series with the _bucket suffix and the label le (less than or equal). Grafana interprets these buckets to create visual bands showing how frequently different value ranges occurred.
Counters, gauges, and info metrics do not have bucketed distributions, so a heatmap would not produce meaningful output for them.
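As an illustrative sketch (the metric name is hypothetical), a Prometheus histogram is exposed as cumulative buckets, and a heatmap panel is typically fed the per-bucket rate grouped by le:

```
# Exposition-format sample for a hypothetical histogram:
http_request_duration_seconds_bucket{le="0.1"} 240
http_request_duration_seconds_bucket{le="0.5"} 310
http_request_duration_seconds_bucket{le="+Inf"} 325
http_request_duration_seconds_sum 89.2
http_request_duration_seconds_count 325

# Typical query behind a heatmap panel (with Grafana's query
# format set to "Heatmap"):
sum by (le) (rate(http_request_duration_seconds_bucket[5m]))
```

Each le band becomes a row in the heatmap, and the rate of observations falling into that band drives the cell color.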
Verified from Grafana documentation -- Heatmap Panel Overview, Visualizing Prometheus Histograms, and Prometheus documentation -- Understanding Histogram Buckets.
What does the evaluation_interval parameter in the Prometheus configuration control?
Answer : B
The evaluation_interval parameter defines how frequently Prometheus evaluates its recording and alerting rules. It determines the schedule at which the rule engine runs, checking whether alert conditions are met and generating new time series for recording rules.
For example, setting:
global:
  evaluation_interval: 30s
means Prometheus evaluates all configured rules every 30 seconds. This setting differs from scrape_interval, which controls how often Prometheus collects data from targets.
Having a proper evaluation interval ensures alerting latency is balanced with system performance.
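A minimal sketch of the relationship (file names and the rule below are hypothetical): the rules listed under rule_files are evaluated every evaluation_interval, independently of how often targets are scraped.

```yaml
# prometheus.yml (fragment)
global:
  scrape_interval: 15s      # how often targets are scraped
  evaluation_interval: 30s  # how often the rules below are evaluated
rule_files:
  - example.rules.yml

# example.rules.yml
groups:
  - name: example
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```

With this configuration the recording rule produces a new sample of job:http_requests:rate5m every 30 seconds, regardless of the 15-second scrape cadence.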
Which exporter would be best suited for basic HTTP probing?
Answer : B
The Blackbox Exporter is the Prometheus component designed specifically for probing endpoints over various network protocols, including HTTP, HTTPS, TCP, ICMP, and DNS. It acts as a generic probe service, allowing Prometheus to test endpoints' availability, latency, and correctness without requiring instrumentation in the target application itself.
For basic HTTP probing, the Blackbox Exporter performs HTTP GET or POST requests to defined URLs and exposes metrics like probe success, latency, response code, and SSL certificate validity. This makes it ideal for uptime and availability monitoring.
By contrast, the JMX exporter is used for collecting metrics from Java applications, the Apache exporter for Apache HTTP Server metrics, and the SNMP exporter for network devices. Thus, only the Blackbox Exporter serves the purpose of HTTP probing.
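A typical scrape configuration for the Blackbox Exporter follows the pattern below; the target URL and exporter address are placeholders, and the http_2xx module is assumed to be defined in the exporter's blackbox.yml:

```yaml
scrape_configs:
  - job_name: blackbox
    metrics_path: /probe
    params:
      module: [http_2xx]            # probe module defined in blackbox.yml
    static_configs:
      - targets:
          - https://example.org     # endpoint to probe (placeholder)
    relabel_configs:
      # Pass the original target as the ?target= probe parameter
      - source_labels: [__address__]
        target_label: __param_target
      # Keep the probed URL as the instance label
      - source_labels: [__param_target]
        target_label: instance
      # Actually scrape the exporter itself, not the probed endpoint
      - target_label: __address__
        replacement: 127.0.0.1:9115  # Blackbox Exporter's own address (assumption)
```

The relabeling is what redirects the scrape to the exporter while recording which endpoint was probed.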
Verified from Prometheus documentation -- Blackbox Exporter Overview and Exporter Usage Guidelines.
What is the name of the official *nix OS kernel metrics exporter?
Answer : B
The official Prometheus exporter for collecting system-level and kernel-related metrics from Linux and other UNIX-like operating systems is the Node Exporter.
The Node Exporter exposes hardware and OS metrics including CPU load, memory usage, disk I/O, network traffic, and kernel statistics. It is designed to provide host-level observability and serves data at the default endpoint :9100/metrics in the standard Prometheus exposition text format.
This exporter is part of the official Prometheus ecosystem and is widely deployed for infrastructure monitoring. None of the other listed options (Prometheus_exporter, metrics_exporter, or os_exporter) are official components of the Prometheus project.
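A minimal scrape configuration for it might look like this (assuming the Node Exporter runs on the same host at its default port):

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']  # Node Exporter's default port
```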
Verified from Prometheus documentation -- Node Exporter Overview, System Metrics Collection, and Official Exporters List.
Which of the following is a valid metric name?
Answer : C
According to Prometheus naming rules, metric names must match the regex [a-zA-Z_:][a-zA-Z0-9_:]*. This means metric names must begin with a letter, underscore, or colon, and may contain only letters, digits, underscores, and colons thereafter (colons are conventionally reserved for recording rule names).
The valid metric name among the options is go_goroutines, which follows all these rules. It starts with a letter (g), uses underscores to separate words, and contains only allowed characters.
By contrast:
go routines is invalid because it contains a space.
go.goroutines is invalid because it contains a dot (.), which is not a permitted character in metric names.
99_goroutines is invalid because metric names cannot start with a number.
Following these conventions ensures compatibility with PromQL syntax and Prometheus' internal data model.
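These rules can be checked mechanically; the sketch below runs the four option names through the same regex, and only go_goroutines passes:

```shell
# Check the four candidate names from the question against the
# Prometheus metric-name regex.
regex='^[a-zA-Z_:][a-zA-Z0-9_:]*$'
for name in 'go_goroutines' 'go routines' 'go.goroutines' '99_goroutines'; do
  if printf '%s\n' "$name" | grep -Eq "$regex"; then
    echo "valid:   $name"
  else
    echo "invalid: $name"
  fi
done
# Prints "valid" only for go_goroutines.
```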
Extracted from Prometheus documentation -- Metric Naming Conventions and Data Model Rules sections.
What is considered the best practice when working with alerting notifications?
Answer : B
The Prometheus alerting philosophy emphasizes signal over noise -- meaning alerts should focus only on actionable and user-impacting issues. The best practice is to alert on symptoms that indicate potential or actual user-visible problems, not on every internal metric anomaly.
This approach reduces alert fatigue, avoids desensitizing operators, and ensures high-priority alerts get the attention they deserve. For example, alerting on "service unavailable" or "latency exceeding SLO" is more effective than alerting on "CPU above 80%" or "disk usage increasing," which may not directly affect users.
Option B correctly reflects this principle: keep alerts meaningful, few, and symptom-based. The other options contradict core best practices by promoting excessive or equal-weight alerting, which can overwhelm operations teams.
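As an illustration of a symptom-based alert, the rule below pages when 95th-percentile latency breaches an SLO, regardless of which internal cause is behind it (the metric name and 500ms threshold are hypothetical, chosen for the sketch):

```yaml
groups:
  - name: slo-alerts
    rules:
      - alert: HighRequestLatency
        # Symptom-based: fires on user-visible latency, not on CPU or disk.
        expr: |
          histogram_quantile(0.95,
            sum by (le) (rate(http_request_duration_seconds_bucket[5m]))) > 0.5
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "95th percentile request latency above 500ms for 10 minutes"
```

The for: 10m clause adds a further noise guard, so transient spikes do not page anyone.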
Verified from Prometheus documentation -- Alerting Best Practices, Alertmanager Design Philosophy, and Prometheus Monitoring and Reliability Engineering Principles.
What Prometheus component would you use if targets are running behind a Firewall/NAT?
Answer : D
When Prometheus targets are behind firewalls or NAT and cannot be reached directly by the Prometheus server's pull mechanism, the recommended component to use is PushProx.
PushProx works by reversing the usual pull model. It consists of a PushProx Proxy (accessible by Prometheus) and PushProx Clients (running alongside the targets). The clients establish outbound connections to the proxy, which allows Prometheus to "pull" metrics indirectly. This approach bypasses network restrictions without compromising the Prometheus data model.
Unlike the Pushgateway (which is used for short-lived batch jobs, not network-isolated targets), PushProx maintains the Prometheus "pull" semantics while accommodating environments where direct scraping is impossible.
Verified from Prometheus documentation and official PushProx design notes -- Monitoring Behind NAT/Firewall, PushProx Overview, and Architecture and Usage Scenarios sections.