You are managing the testing efforts for an existing distributed system that manages inventories of automobile and light-truck tires across multiple warehouses around the country. The system is being enhanced to track incoming restocking shipments at the point of entry to the warehouse and outbound sales shipments at the point of shipment from the warehouse, both of which are executed in real time. System load traditionally peaks on Mondays due to built-up demand from the previous weekend.
You are preparing a presentation to the business stakeholders, outlining your performance testing strategy.
Which of the following is appropriate to present to this audience?
Answer: D
Business stakeholders are most concerned with risks that affect deployment and production stability. The primary risk in performance testing is that the test environment may differ from production, leading to misleading test results.
Option A (HTTP response goals) is too technical for a business stakeholder audience.
Option B (Hardware specifications) is relevant for technical teams, not business stakeholders.
Option C (Support staff details) is a logistical aspect, not a key performance testing risk for business decision-makers.
Which of the following should be a key part of your test acceptance criteria in your performance test plan?
Answer: C
One of the most critical test acceptance criteria in performance testing is to ensure that the hardware in the test environment is comparable to production. Differences in CPU, memory, disk I/O, or network infrastructure can distort performance results.
Option A (Convincing stakeholders to set goals) is a planning activity, not an acceptance criterion.
Option B (Describing the system under test) is important but does not directly affect test acceptance.
Option D (Comparing baselined metrics) is useful, but without a comparable test environment, baseline metrics may be misleading.
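The acceptance criterion above, verifying that test hardware is comparable to production, can be made concrete as a pre-test check. The sketch below is a hypothetical illustration; the spec names and values are assumptions, not figures from the question.

```python
# Hypothetical pre-test check: flag test-environment hardware that differs
# from production before accepting performance test results.
# All spec keys and values below are illustrative assumptions.

PRODUCTION = {"cpu_cores": 32, "ram_gb": 128, "disk": "ssd", "nic_gbps": 10}
TEST_ENV   = {"cpu_cores": 16, "ram_gb": 128, "disk": "ssd", "nic_gbps": 10}

def environment_gaps(prod, test):
    """Return each spec where the test environment differs from production,
    mapped to a (test value, production value) pair."""
    return {k: (test.get(k), prod[k]) for k in prod if test.get(k) != prod[k]}

gaps = environment_gaps(PRODUCTION, TEST_ENV)
if gaps:
    print("Test environment not comparable to production:", gaps)
```

A report of non-empty gaps (here, 16 test cores versus 32 in production) is exactly the kind of difference that can distort CPU-bound performance results.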
Which of the following is the best description of spike testing?
Answer: D
Spike testing is a type of performance testing that evaluates how a system responds to sudden, extreme increases or decreases in load. It is designed to simulate unexpected surges in user activity or workload, such as flash sales, viral events, or cyberattacks.
Option A (Gradual load increase testing) describes load testing, not spike testing.
Option B (Handling expected peak load) describes load testing at the anticipated peak; it does not involve the sudden, extreme load changes that define spike testing.
Option C (Meeting future efficiency requirements) relates to capacity planning rather than spike testing.
Spike testing helps to identify system bottlenecks, resource allocation issues, and performance degradation when traffic surges unexpectedly.
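The spike-shaped load profile described above can be sketched as a simple function. This is a minimal illustration with made-up numbers: a steady baseline of virtual users, a sudden surge, then an abrupt drop back.

```python
# Minimal sketch of a spike-test load profile (all numbers illustrative):
# steady baseline load, a sudden extreme surge, then an abrupt return.
def spike_profile(baseline=50, spike=500, duration=60, spike_start=20, spike_len=5):
    """Return the virtual-user count for each second of a spike-shaped test."""
    return [spike if spike_start <= t < spike_start + spike_len else baseline
            for t in range(duration)]

profile = spike_profile()
# The jump from 50 to 500 users at t=20 models an unexpected traffic surge,
# in contrast to the gradual ramp-up used in load testing.
```

Feeding such a profile to a load generator lets you observe how the system absorbs the surge and whether it recovers once the spike subsides.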
What is the primary purpose of a load generator?
Answer: B
A load generator is responsible for simulating virtual users and applying workloads to a system as defined by an operational profile. This allows testers to analyze how the system behaves under different load conditions.
Option A (Background load) is incorrect because load generators create simulated user interactions, not just background noise.
Option C (Record and analyze behavior) is the role of monitoring tools, not a load generator.
Option D (Support root cause analysis) is incorrect because root cause analysis is done after the load test, using monitoring tools.
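A load generator's role of simulating virtual users can be sketched in a few lines. This is a toy illustration, assuming a stand-in transaction rather than a real system under test; real tools (JMeter, Gatling, and the like) work on the same principle at far greater scale.

```python
# Hypothetical minimal load generator: spawn virtual users, each running a
# scripted transaction. The transaction body is a stand-in (a short sleep),
# not a call to a real system under test.
import threading
import time

def scripted_transaction(results, user_id):
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for a request to the system under test
    results.append((user_id, time.perf_counter() - start))

def run_load(virtual_users):
    """Apply a workload of the given number of concurrent virtual users
    and return the measured response time for each."""
    results = []
    threads = [threading.Thread(target=scripted_transaction, args=(results, u))
               for u in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

timings = run_load(10)  # response times, left to monitoring tools to analyze
```

Note the division of labor the explanation describes: the generator only applies the load and collects raw timings; recording, analysis, and root-cause work belong to monitoring tools after the run.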
You are managing a project that is testing a system which manages a newly redesigned jet engine for heavy aircraft. Because this engine is specifically engineered to reduce noise, the software must maintain enough thrust for lift for a period of 5 minutes without exceeding 87.5 dB. The software must achieve this independently of other internal systems such as fuel or navigation management.
Given the risk that the aircraft may fail to meet noise abatement regulations while still being able to fly, when is the optimum time in the software lifecycle to apply performance testing?
Answer: A
Performance testing should be integrated into every phase of the software lifecycle to ensure that critical performance requirements (such as thrust-to-noise ratio) are met early and continuously validated.
Option B (End of system testing) is too late, as issues may be costly to fix at that stage.
Option C (During system integration testing) is useful but not comprehensive enough.
Option D (At the end of unit testing) is incorrect because unit tests do not assess overall system performance.
Which of the following can provide measurements for both individual and aggregated elements within a single performance test?
Answer: D
Nested transactions allow performance tests to capture both individual and aggregated elements by grouping multiple related transactions and measuring their cumulative impact.
Option A (Aggregated metrics) provides summaries but lacks insight into individual elements.
Option B (Process optimization tracking) focuses on business process improvement, not test metrics.
Option C (Measuring underlying transactions) focuses only on individual transactions.
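The idea of nested transactions can be sketched as follows: each child step is timed individually, and the parent transaction aggregates them, so one test run yields both individual and aggregated measurements. The transaction name and step durations below are hypothetical placeholders.

```python
# Sketch of nested transaction measurement (illustrative placeholders):
# child steps are timed individually, and the parent transaction aggregates
# them, yielding both kinds of measurement from a single test.
import time

def timed(fn):
    """Execute fn and return its elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def checkout():
    """Parent transaction composed of timed child steps."""
    steps = {
        "login":    lambda: time.sleep(0.01),  # stand-ins for real requests
        "add_item": lambda: time.sleep(0.01),
        "pay":      lambda: time.sleep(0.01),
    }
    child_times = {name: timed(step) for name, step in steps.items()}
    return child_times, sum(child_times.values())  # individual + aggregated

children, total = checkout()
```

From one run you can report both that, say, the payment step dominates the response time and what the end-to-end checkout transaction costs overall.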
Which of the following is a major contributor to unreliable performance projections?
Answer: C
One of the biggest contributors to unreliable performance projections is differences between the test and production environments. If test environments do not match CPU, memory, network configurations, and database setups in production, the results may not be representative of real-world performance.
Option A (Redundancy between test and production environments) is not a problem; it's actually beneficial for reliability.
Option B (Disagreement between stakeholders) can affect planning but does not cause unreliable projections.
Option D (Unrealistic stakeholder goals) affects expectations but not the accuracy of projections.