Finding
Paper
Abstract
A key question in the software development industry is "how do we know when a release is ready to ship to customers?" Is the release sufficiently reliable, or does it still contain too many residual bugs? These questions are difficult to answer with any certainty. To help ensure reliability, many development teams rely on traditional metrics, such as severity 1 and 2 backlog levels, test case execution percentages, and pass rates for function testing. These metrics are often aggregated into a "quality index" that uses weighted averages to gauge "release health" throughout the lifecycle, though they focus mainly on function and system testing of the integration branch (the new release's total code base). These metrics and indices are considered useful, but there is usually no known correlation between the key metrics/indices and the customer's reliability experience. In other words, we merely assume they are good indicators of release health; evidence is usually lacking that moving these metric/index "levers" by this much changes the customer reliability experience by that much.
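To make the "quality index" idea concrete, here is a minimal sketch of a weighted-average index over normalized release-health metrics. The metric names, weights, and 0-to-1 normalization are illustrative assumptions, not values from the paper.

```python
# Hypothetical weighted-average "quality index" over release-health metrics.
# Each metric is pre-normalized to [0, 1], where 1.0 is healthiest
# (e.g., an empty severity-1/2 backlog would score 1.0).

def quality_index(metrics, weights):
    """Return the weighted average of normalized metric scores."""
    assert set(metrics) == set(weights), "each metric needs a weight"
    total_weight = sum(weights.values())
    return sum(metrics[m] * weights[m] for m in metrics) / total_weight

# Assumed example metrics and weights (not from the source paper):
metrics = {
    "sev12_backlog": 0.80,       # normalized severity-1/2 backlog level
    "test_execution": 0.95,      # fraction of planned test cases executed
    "function_pass_rate": 0.90,  # function-testing pass rate
}
weights = {
    "sev12_backlog": 0.5,
    "test_execution": 0.2,
    "function_pass_rate": 0.3,
}

print(round(quality_index(metrics, weights), 3))
```

The abstract's core caveat applies to any such index: the weights here encode intuition about what matters, not a measured correlation with the customer's reliability experience.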
Authors
Pete Rotella
Journal
Journal name not available for this finding