Velocity Optimization for Software Value Streams
In recent years, there has been an explosion of interest in measurement-driven process improvement in software development. This has largely been driven by the DevOps movement and the work of the DORA team, which highlights how data can separate high-performing teams from the rest and shows the impact that the practices these teams use have on the outcomes of the businesses they support.

At Exathink, we view DevOps practices as a part of an overall Value Delivery Optimization program. Specifically, the focus of DevOps is Software Delivery Optimization: reducing the time and effort required to take code increments from development environments to production. While it is a necessary part of optimizing value delivery, it is not sufficient.

Velocity Optimization focuses on end-to-end delivery of value increments of varying granularity, ranging from stories and bug fixes to epics, releases and product increments. In our research and work with clients, we have found that the time to take value increments from design through coding, delivery and deployment is often 10-100 times longer than the cumulative time required to produce and deploy their corresponding code increments.

This is because there is significant latency in every phase of value creation and delivery: time when work waits without making progress due to multitasking, capacity constraints, context switches, handoffs and collaboration delays between people.
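To make this concrete, flow efficiency — the fraction of elapsed lead time spent actively working on an item — captures the relationship between latency and end-to-end delivery time. The sketch below is illustrative only (not part of Polaris), and all numbers are hypothetical:

```python
# Illustrative sketch: how end-to-end lead time relates to active work
# time via flow efficiency. All numbers here are hypothetical.

def flow_efficiency(active_days: float, lead_time_days: float) -> float:
    """Fraction of elapsed lead time spent actively working on an item."""
    return active_days / lead_time_days

# A story that took 2 days of hands-on work but 30 days end to end:
eff = flow_efficiency(active_days=2, lead_time_days=30)
latency_days = 30 - 2  # time spent waiting: queues, handoffs, multitasking

print(f"flow efficiency: {eff:.0%}")  # ~7% -> 93% of elapsed time is latency
print(f"latency: {latency_days} days")
```

Even at a modest 7% flow efficiency, elapsed time is roughly 15x the active work time, which is how the 10-100x gap between value delivery time and code delivery time arises.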

To reduce overall time to value and manage engineering capacity more effectively, we need to manage latency during the design, coding and testing phases of value delivery, in addition to adopting DevOps best practices for producing and deploying code increments.

In fact, in most cases, there are greater efficiencies to be gained from reducing end-to-end latency in a value stream, and the practices needed here — limiting work in progress, reducing the batch size of changes, integrating code frequently, shifting testing left, and minimizing multitasking — are precisely the building blocks for adopting downstream DevOps practices successfully.

The key innovations that we bring to the table with Ergonometrics® and Polaris are a set of techniques and tools that let teams systematically adopt these practices and use real-time measurements and feedback loops to quantify their impact on both latency and value delivery.

Done effectively, this improves outcomes along several key dimensions that benefit the customer, the company and the team.


How it Works

A faster feedback loop
The conventional Plan-Do-Inspect-Adapt model in Lean software development is based on inspecting completed work: retrospective analysis of work that finished in the past.

Continuous measurement with Polaris adds a tighter inner Inspect-Adapt loop for work in progress during plan execution, letting a team review key measurements and react to changes as they work.

The Three Pillars of Velocity Optimization

Pillar 1:  Velocity Analysis
Velocity Analysis, which is based on completed work, uses granular measurements to assess overall delivery performance across a value stream over longer periods, balancing product development flow and continually aligning engineering capacity to maximize customer value and quality.

Powered by real-time measurements from Polaris, our analytical tools are built bottom-up from very granular data, and we provide forensic tools to drill down from every high-level data point in a trend report through successive levels of detail, all the way down to an individual commit or card update.
We not only tell you how your team is doing, but give you the tools to understand why.

Pillar 2: Flow Planning
Flow Planning, which draws on a mix of planned, in-progress and completed work, gives the product owner forward visibility into what adjustments to scope and timing might be needed, using accurate real-time data on the current flow of work.

The main processes in this analysis are managing the backlog efficiently using real-time execution data, and assessing the cost of in-progress and completed work so that proactive scope decisions can be made using accurate, real-time cost/value tradeoffs.

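One simple way to picture such a cost/value tradeoff is ranking candidate scope by estimated value per unit of projected remaining cost. This is an illustrative sketch under assumed names and numbers, not the Polaris implementation:

```python
# Illustrative cost/value tradeoff for flow planning: rank candidate
# scope by estimated value per unit of projected remaining cost.
# Item names, values and costs are hypothetical.
items = [
    # (name, estimated_value, cost_spent_days, projected_remaining_days)
    ("checkout-redesign", 8.0, 5.0, 10.0),
    ("search-filters",    5.0, 1.0,  2.0),
    ("perf-tuning",       3.0, 0.5,  1.0),
]

def value_per_remaining_cost(item):
    _, value, _, remaining = item
    return value / remaining  # sunk cost is reported, but not used to rank

ranked = sorted(items, key=value_per_remaining_cost, reverse=True)
print([name for name, *_ in ranked])
# -> ['perf-tuning', 'search-filters', 'checkout-redesign']
```

The point of real-time cost data is that `projected_remaining_days` can be grounded in measured effort to date rather than the original estimate, so the ranking shifts as execution reality diverges from the plan.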
Pillar 3: WIP Inspection
WIP Inspection makes all delays and costs of work currently in progress visible in real time. This is the most significant innovation in our approach, since work in progress is largely invisible in software development today.

This allows the team on the ground to quickly recognize high-effort or high-latency work that is at risk, and escalate or adjust the product plan as needed, consistent with policies such as cycle time limits and effort budgets, which are also innovations we are introducing in Ergonometrics®.
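A policy check of this kind can be pictured as a simple filter over in-progress items. The thresholds and field names below are assumptions for illustration, not the Ergonometrics definitions:

```python
# Hypothetical WIP inspection policy check: flag in-progress items that
# have exceeded a cycle time limit or an effort budget. Thresholds and
# item data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WipItem:
    key: str
    age_days: float      # time since the item entered "in progress"
    effort_days: float   # cumulative hands-on effort recorded so far

CYCLE_TIME_LIMIT_DAYS = 7.0   # example policy: escalate after a week
EFFORT_BUDGET_DAYS = 3.0      # example policy: budget of 3 person-days

def at_risk(items: list[WipItem]) -> list[str]:
    """Return keys of items that breach either policy threshold."""
    return [it.key for it in items
            if it.age_days > CYCLE_TIME_LIMIT_DAYS
            or it.effort_days > EFFORT_BUDGET_DAYS]

in_progress = [
    WipItem("PX-101", age_days=2.0, effort_days=1.0),  # healthy
    WipItem("PX-102", age_days=9.5, effort_days=2.0),  # stale: high latency
    WipItem("PX-103", age_days=4.0, effort_days=4.5),  # over effort budget
]
print(at_risk(in_progress))  # -> ['PX-102', 'PX-103']
```

Because the check runs against live data rather than a retrospective report, breaches surface while there is still time to escalate or replan.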

By closing the feedback loop with real-time data, WIP Inspection allows teams to detect delays early during execution and react faster.
All three modes of planning and analysis, done on an ongoing basis using rolling time windows, create an active real-time feedback loop that connects and continuously improves the planning, design, development and testing of products.