To measure software delivery performance, more and more organizations are defaulting to the four key metrics as defined by the DORA research program: change lead time, deployment frequency, mean time to restore (MTTR) and change fail percentage. This research and its statistical analysis have demonstrated a clear link between high-performing delivery teams and these metrics; they provide a great leading indicator for how a delivery organization as a whole is doing.
While we remain strong proponents of these metrics, we've also learned some lessons. We continue to see misguided approaches to measurement that rely on tools based purely on continuous delivery (CD) pipelines. In particular, when it comes to the stability metrics (MTTR and change fail percentage), CD pipeline data alone doesn't provide enough information to determine what a deployment failure means for actual users. Stability metrics only make sense if they include data about real incidents, such as degraded service for users.
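To make this concrete, here is a minimal Python sketch of the two stability metrics. The `Deployment` and `Incident` record types and their field names are assumptions for this example; in practice they would be populated from your deployment tooling and your incident-management system. Note that the pipeline's own pass/fail status is deliberately not what drives either calculation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Deployment:
    id: str
    finished_at: datetime
    pipeline_succeeded: bool   # all the CD pipeline alone can tell you


@dataclass
class Incident:
    caused_by_deployment: str  # id of the deployment that triggered it
    started_at: datetime       # when users started seeing degraded service
    resolved_at: datetime      # when service was restored for users


def change_fail_percentage(deployments: List[Deployment],
                           incidents: List[Incident]) -> float:
    """A change counts as failed if it degraded service for users,
    i.e. it is linked to at least one real incident."""
    failed_ids = {i.caused_by_deployment for i in incidents}
    failed = sum(1 for d in deployments if d.id in failed_ids)
    return 100.0 * failed / len(deployments)


def mean_time_to_restore(incidents: List[Incident]) -> timedelta:
    """MTTR runs from service degradation to restoration,
    not from a red pipeline run to a green one."""
    total = sum((i.resolved_at - i.started_at for i in incidents), timedelta())
    return total / len(incidents)
```

The point of the sketch is the join between the two data sources: without the incident records, `pipeline_succeeded` is the only signal available, and it says nothing about user impact.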
We recommend always keeping the ultimate purpose behind a measurement in mind and using it to reflect and learn. For example, before spending weeks building up sophisticated dashboard tooling, consider regularly taking the DORA quick check in team retrospectives. This gives the team an opportunity to reflect on which capabilities they could work on to improve their metrics, which can be far more effective than overly detailed out-of-the-box tooling. Keep in mind that these four key metrics originated in organization-level research into high-performing delivery teams; using them at the team level should be a way for teams to reflect on their own behavior, not just another set of metrics to add to a dashboard.
The thorough State of DevOps reports have focused on data-driven and statistical analysis of high-performing organizations. The result of this multiyear research, published in Accelerate, demonstrates a direct link between organizational performance and software delivery performance. The researchers have determined that only four key metrics differentiate between low, medium and high performers: lead time, deployment frequency, mean time to restore (MTTR) and change fail percentage. Indeed, we've found that these four key metrics are a simple yet powerful tool to help leaders and teams focus on measuring and improving what matters. A good place to start is to instrument the build pipelines so you can capture the four key metrics and make the software delivery value stream visible. GoCD pipelines, for example, provide the ability to measure these four key metrics as first-class citizens of GoCD analytics.
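As an illustration of what such instrumentation can look like, the following sketch is a generic script run as the final step of a production deployment pipeline. `METRICS_ENDPOINT` and the event schema are assumptions for this example rather than part of GoCD or any other CI tool, and `GIT_COMMIT` stands in for whatever variable your CI server uses to expose the deployed revision.

```python
import json
import os
import subprocess
import urllib.request
from datetime import datetime, timezone


def commit_timestamp(sha: str) -> str:
    """Committer date of the deployed revision, in ISO 8601."""
    result = subprocess.run(
        ["git", "show", "-s", "--format=%cI", sha],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def record_deployment() -> None:
    # GIT_COMMIT: assumed to be set by the CI server for the build being deployed.
    sha = os.environ["GIT_COMMIT"]
    event = {
        "commit_sha": sha,
        "committed_at": commit_timestamp(sha),                   # start of lead time
        "deployed_at": datetime.now(timezone.utc).isoformat(),   # end of lead time
        "environment": "production",
    }
    # METRICS_ENDPOINT: a hypothetical internal service that stores deployment
    # events, from which the four key metrics can later be derived.
    request = urllib.request.Request(
        os.environ["METRICS_ENDPOINT"],
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)


if __name__ == "__main__":
    record_deployment()
```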
The State of DevOps report, first published in 2014, states that high-performing teams create high-performing organizations. Recently, the team behind the report released Accelerate, which describes the scientific method used in the report. A key takeaway of both is the set of four key metrics that support software delivery performance: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. In our work as a consultancy that has helped many organizations transform, these metrics have come up time and time again as a way to help organizations determine whether they're improving their overall performance. Each metric creates a virtuous cycle and focuses the teams on continuous improvement: to reduce lead time, you reduce wasteful activities which, in turn, lets you deploy more frequently; deployment frequency forces your teams to improve their practices and automation; and your speed to recover from failure is improved by better practices, automation and monitoring, which in turn reduces the frequency of failures.
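For the throughput side of these metrics, lead time and deployment frequency fall out of the same kind of deployment events. A minimal sketch, assuming a list of (committed_at, deployed_at) pairs like the ones recorded in the instrumentation example above:

```python
from datetime import datetime, timedelta
from statistics import median
from typing import List, Tuple


def lead_time(deploys: List[Tuple[datetime, datetime]]) -> timedelta:
    """Median time from commit to running in production.
    The median is used because a few long-lived changes would skew a mean."""
    return median(deployed - committed for committed, deployed in deploys)


def deployment_frequency(deploys: List[Tuple[datetime, datetime]],
                         days: int = 30) -> float:
    """Average number of production deployments per day over a trailing window."""
    latest = max(deployed for _, deployed in deploys)
    cutoff = latest - timedelta(days=days)
    return sum(1 for _, deployed in deploys if deployed >= cutoff) / days
```

Both functions operate on the same event stream as the stability metrics sketched earlier, so a single, fairly small data set is enough to cover all four metrics.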