How many metric types does Prometheus text format support?
Answer : B
Prometheus defines four core metric types in its official exposition format: Counter, Gauge, Histogram, and Summary. These types are the fundamental building blocks for expressing quantitative measurements of system performance, behavior, and state.
A Counter is a cumulative metric that only increases (e.g., number of requests served).
A Gauge represents a value that can go up and down, such as memory usage or temperature.
A Histogram samples observations (e.g., request durations) and counts them in configurable buckets, providing both counts and sum of observed values.
A Summary is similar to a histogram but provides quantile estimation over a sliding time window along with count and sum metrics.
These four types are the only officially supported metric types in the Prometheus text exposition format as defined by the Prometheus data model. Any additional metrics or custom naming conventions are built on top of these core types but do not constitute new types.
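For reference, a minimal text-exposition fragment showing all four types might look like the following (metric names and values are illustrative, not taken from a real target):

```
# TYPE http_requests_total counter
http_requests_total{method="get"} 1027
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.5e+07
# TYPE request_duration_seconds histogram
request_duration_seconds_bucket{le="0.1"} 240
request_duration_seconds_bucket{le="+Inf"} 250
request_duration_seconds_sum 53.4
request_duration_seconds_count 250
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.5"} 0.04
rpc_duration_seconds_sum 1.2
rpc_duration_seconds_count 30
```

Note that Histogram and Summary are each serialized as several component series (`_bucket`/`quantile`, `_sum`, `_count`), but the type itself is declared once in the `# TYPE` line.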
Extracted and verified from Prometheus official documentation sections on Metric Types and Exposition Formats in the Prometheus study materials.
Given the following Histogram metric data, how many requests took less than or equal to 0.1 seconds?
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="+Inf"} 3
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.05"} 0
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.1"} 1
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="1"} 3
apiserver_request_duration_seconds_count{job="kube-apiserver"} 3
apiserver_request_duration_seconds_sum{job="kube-apiserver"} 0.554003785
Answer : C
In Prometheus, histogram metrics use cumulative buckets to record the count of observations that fall at or below specific thresholds. Each bucket has a label le ("less than or equal to") giving the upper bound of that bucket.
In the given metric, the bucket labeled le="0.1" has a value of 1, meaning exactly one request took less than or equal to 0.1 seconds. Buckets are cumulative, so:
le="0.05": 0 requests took <= 0.05 seconds
le="0.1": 1 request took <= 0.1 seconds
le="1": 3 requests took <= 1 second
le="+Inf": all 3 requests (total)
The _sum and _count values represent total duration and request count respectively, but the number of requests below a given threshold is read directly from the bucket's le value.
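The cumulative-bucket arithmetic can be checked with a short sketch (bucket values below are copied from the question; the function names are ours, not part of any Prometheus API):

```python
import math

# Cumulative histogram buckets: upper bound (le) -> count of observations <= le
buckets = {0.05: 0, 0.1: 1, 1.0: 3, math.inf: 3}

def count_at_most(le, buckets):
    """Requests that took <= le seconds: read the cumulative bucket directly."""
    return buckets[le]

def per_bucket(buckets):
    """Recover non-cumulative counts by subtracting adjacent cumulative buckets."""
    counts = {}
    prev = 0
    for le in sorted(buckets):
        counts[le] = buckets[le] - prev
        prev = buckets[le]
    return counts

print(count_at_most(0.1, buckets))  # -> 1
print(per_bucket(buckets))          # -> {0.05: 0, 0.1: 1, 1.0: 2, inf: 0}
```

The second function shows why the answer is read straight from the le="0.1" bucket: cumulation is already done for you; subtraction is only needed when you want the count *between* two bounds.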
Verified from Prometheus documentation -- Understanding Histograms and Summaries, Bucket Semantics, and Histogram Query Examples sections.
If the range vector selector foo[5m] contains the samples 1, 1, and NaN, what would max_over_time(foo[5m]) return?
Answer : B
In PromQL, range vector functions like max_over_time() compute an aggregate value (in this case, the maximum) over all samples within a specified time range. The function ignores NaN (Not-a-Number) values when computing the result.
Given the range vector foo[5m] containing samples [1, 1, NaN], the maximum value among the valid numeric samples is 1. Therefore, max_over_time(foo[5m]) returns 1.
Prometheus functions handle missing or invalid data points gracefully: skipping NaN keeps the calculation stable even when intermittent collection issues or resets occur. If the range contains no samples at all, the function does not error; it simply returns an empty result.
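A sketch of this skip-NaN behavior, mirroring the comparison loop Prometheus uses internally (simplified: real samples carry timestamps, which the maximum does not need):

```python
import math

def max_over_time(samples):
    """Maximum over a range of samples, skipping NaN values
    the way PromQL's max_over_time() does."""
    current = samples[0]
    for v in samples:
        # NaN compares false against everything, so a NaN 'current'
        # must be replaced explicitly or it would stick forever.
        if v > current or math.isnan(current):
            current = v
    return current

print(max_over_time([1.0, 1.0, math.nan]))  # -> 1.0
print(max_over_time([math.nan, 1.0, 1.0]))  # -> 1.0 (order-independent)
```

Only if every sample in the range is NaN does the result itself come out as NaN.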
Verified from Prometheus documentation -- PromQL Range Vector Functions, Aggregation Over Time Functions, and Handling NaN Values in PromQL sections.
Which kind of metrics are associated with the function deriv()?
Answer : B
The deriv() function in PromQL calculates the per-second derivative of a time series using linear regression over the provided time range. It estimates the instantaneous rate of change for metrics that can both increase and decrease --- which are typically gauges.
Because counters can only increase (except when reset), rate() or increase() functions are more appropriate for them. deriv() is used to identify trends in fluctuating metrics like CPU temperature, memory utilization, or queue depth, where values rise and fall continuously.
In contrast, summaries and histograms consist of multiple sub-metrics (e.g., _count, _sum, _bucket) and are not directly suited for derivative calculation without decomposition.
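The linear-regression idea behind deriv() can be sketched in a few lines (this is our own illustration of least-squares slope fitting, not the actual Prometheus source):

```python
def deriv(samples):
    """Per-second derivative of a gauge via simple linear regression,
    as PromQL's deriv() does. samples: list of (timestamp_seconds, value)."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    return cov / var  # slope of the fitted line = rate of change per second

# A gauge rising by 2 units per second over 30 seconds:
print(deriv([(0, 10.0), (15, 40.0), (30, 70.0)]))  # -> 2.0
```

Because the fitted slope can be negative, this works naturally for gauges that fall as well as rise, which is exactly what rate() cannot express for counters.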
Extracted and verified from Prometheus documentation -- PromQL Functions -- deriv(), Understanding Rates and Derivatives, and Gauge Metric Examples.
How do you calculate the average request duration during the last 5 minutes from a histogram or summary called http_request_duration_seconds?
Answer : A
In Prometheus, histograms and summaries expose metrics with _sum and _count suffixes to represent total accumulated values and sample counts, respectively. To compute the average request duration over a given time window (for example, 5 minutes), you divide the rate of increase of _sum by the rate of increase of _count:
rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])
Here,
http_request_duration_seconds_sum represents the total accumulated request time, and
http_request_duration_seconds_count represents the number of requests observed.
By dividing these rates, you obtain the average request duration per request over the specified time range.
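The same arithmetic can be sketched with two hypothetical scrapes taken 5 minutes apart (the counter values are invented for illustration):

```python
def avg_duration(sum0, count0, sum1, count1, window_seconds):
    """Average request duration over a window: rate(_sum) / rate(_count).
    The window length cancels out, so this equals increase(_sum) / increase(_count)."""
    rate_sum = (sum1 - sum0) / window_seconds      # seconds of request time per second
    rate_count = (count1 - count0) / window_seconds  # requests per second
    return rate_sum / rate_count                   # seconds per request

# Over a 5-minute (300 s) window, _sum grew by 30 s and _count by 120 requests:
print(avg_duration(100.0, 400, 130.0, 520, 300))  # -> 0.25 seconds per request
```

Note that both numerator and denominator must use rate() (not raw values): dividing the raw cumulative counters would give the average over the metric's entire lifetime, not over the last 5 minutes.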
Extracted and verified from Prometheus documentation -- Querying Histograms and Summaries, PromQL Rate Function, and Metric Naming Conventions sections.
What does the rate() function in PromQL return?
Answer : B
The rate() function calculates the average per-second rate of increase of a counter over the specified range. It smooths out short-term fluctuations and adjusts for counter resets.
Example:
rate(http_requests_total[5m])
returns the number of requests per second averaged over the last five minutes. This function is frequently used in dashboards and alerting expressions.
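The reset-adjustment logic can be sketched as follows. This is a simplification: the real rate() also extrapolates to the edges of the window, which we omit here.

```python
def rate(samples, window_seconds):
    """Per-second rate of increase of a counter over a window,
    compensating for counter resets: a drop in value means the
    counter restarted from zero, so the post-reset value is the
    increase since the reset."""
    increase = 0.0
    for prev, cur in zip(samples, samples[1:]):
        increase += cur - prev if cur >= prev else cur
    return increase / window_seconds

# Counter resets between the 2nd and 3rd sample (1500 -> 100).
# Total increase = 500 + 100 + 500 = 1100 over a 300 s window:
print(rate([1000, 1500, 100, 600], 300))  # ~3.67 requests/second
```

Without the reset check, the drop from 1500 to 100 would make the computed increase negative, which is why raw counter values should never be graphed or subtracted directly.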
How can you use Prometheus Node Exporter?
Answer : C
The Prometheus Node Exporter is a core system-level exporter that exposes hardware and operating system metrics from *nix-based hosts. It collects metrics such as CPU usage, memory, disk I/O, filesystem space, network statistics, and load averages.
It runs as a lightweight daemon on each host and exposes metrics via an HTTP endpoint (default: :9100/metrics), which Prometheus scrapes periodically.
Key clarification:
It does not instrument applications (A).
It does not collect metrics directly from application HTTP endpoints (B).
It is unrelated to HTTP probing tasks --- those are handled by the Blackbox Exporter (D).
Thus, the correct use of the Node Exporter is to collect and expose hardware and OS-level metrics for Prometheus monitoring.
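A minimal Prometheus scrape configuration for a Node Exporter target (the address is illustrative; adjust it to your hosts):

```yaml
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]
```

Prometheus will then pull /metrics from port 9100 at each scrape interval; the exporter itself stays passive and never pushes data.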
Extracted and verified from Prometheus documentation -- Node Exporter Overview, Host-Level Monitoring, and Exporter Usage Best Practices sections.