The Ergonometrics® framework builds on four process models, each analyzing a particular aspect of a software development process. Polaris updates all models in real time by analyzing the Ergonometric Graph® whenever code is pushed and work items or pull requests are updated.
Ergonometrics® focuses on work items in the work tracking system that require code changes, i.e. the work that actually passes through engineering. We call these dev-items; focusing on them eliminates noise from non-development tasks in the work tracking system. We call this the visible work in the system.
Conversely, we also measure invisible work, code changes that are not tracked by work items in the work tracking system.
Together they give a complete picture of where engineering capacity is being consumed at all times, and constitute what we call work in the model.
A key driver measurement here is traceability, the fraction of work in the system that is visible; maximizing it is one of the key steps to improving the efficacy of capacity allocation decisions.
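To make the idea concrete, here is a minimal sketch of how traceability could be computed, assuming it is simply the share of commits that carry a work item reference; the actual computation on the Ergonometric Graph® may weight commits differently, for example by lines changed or by effort. The work item identifiers are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Commit:
    sha: str
    work_item_id: Optional[str]  # None means the commit is not linked to any work item

def traceability(commits: list[Commit]) -> float:
    """Illustrative definition: fraction of commits linked to a work item (visible work)."""
    if not commits:
        return 0.0
    visible = sum(1 for c in commits if c.work_item_id is not None)
    return visible / len(commits)

# Example: three of four commits reference a work item, so traceability = 0.75
commits = [
    Commit("a1", "DEV-101"),
    Commit("b2", "DEV-101"),
    Commit("c3", None),        # invisible work: no tracking reference
    Commit("d4", "DEV-102"),
]
print(traceability(commits))   # 0.75
```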
A large part of the work of making software is thinking and collaboration between engineers. A commit represents an increment of work output by an engineer, amounting to anywhere from a few minutes to several hours or even days of actual work.
Given this variability, we view commits primarily as evidence of progress towards implementation of work items. The more interesting signal is the lack of progress toward a work item.
Ergonometrics® therefore focuses on delays, and the most important driver measurement here is internal latency, the time between progress events, including commits and state change events in the work item lifecycle.
Measuring internal latency in real time identifies delays due to context switches and handoffs on the team, and makes them visible while work is in progress.
Almost all additional measurements, such as lead time, cycle time, and implementation time, can be viewed as composites of latency measurements, and we see monitoring and reducing latency as the key mechanism for driving down non-essential delays in a software implementation process.
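As an illustration, one way such a latency computation could look is to treat each commit and work item state change as a timestamped progress event and measure the gaps between consecutive events. This is a simplified sketch, with timestamps invented for the example.

```python
from datetime import datetime, timedelta

def internal_latencies(event_times: list[datetime]) -> list[timedelta]:
    """Gaps between consecutive progress events (commits, state changes) for one work item."""
    ordered = sorted(event_times)
    return [later - earlier for earlier, later in zip(ordered, ordered[1:])]

events = [
    datetime(2023, 5, 1, 9, 0),    # work item moved to "In Progress"
    datetime(2023, 5, 1, 15, 30),  # first commit
    datetime(2023, 5, 4, 10, 0),   # next commit, after a multi-day gap
]
gaps = internal_latencies(events)
print(max(gaps))  # 2 days, 18:30:00 (the largest gap between progress events)
```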
The fundamental driver measurement for cost in Ergonometrics® is called effort, defined as the number of developer days needed to implement a work item. Like other measurements, it can be read directly off the Ergonometric Graph®, and Polaris can compute it in real time for all work items in the system at commit granularity.
This is much more efficient than trying to measure effort manually using time tracking and other tools that are in use today. Polaris computes this directly by analyzing developer activity without any additional interventions.
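As an illustration of what a commit-level effort computation might look like, the sketch below approximates effort as the number of distinct developer-days with commits on a work item. This is an assumed simplification for the example, not the exact computation Polaris performs on the Ergonometric Graph®.

```python
from datetime import datetime

def effort_in_dev_days(commits: list[tuple[str, datetime]]) -> int:
    """Approximate effort as the number of distinct (author, calendar day) pairs with commits."""
    return len({(author, ts.date()) for author, ts in commits})

# Commits linked to one work item: (author, timestamp)
commits = [
    ("alice", datetime(2023, 5, 1, 10, 0)),
    ("alice", datetime(2023, 5, 1, 16, 0)),  # same author, same day: still one developer day
    ("alice", datetime(2023, 5, 2, 11, 0)),
    ("bob",   datetime(2023, 5, 2, 14, 0)),
]
print(effort_in_dev_days(commits))  # 3 developer days
```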
Analyzing development costs using effort measurements is a key forensic tool for improving alignment between business and engineering when allocating engineering capacity to maximize customer-facing value.
It is one of the most powerful tools in the analytical toolbox that Ergonometrics® brings to the table.
There are three key metrics we derive from this driver measurement.
Cost of Wip
For work in progress, the key metric we track is the Cost of Wip: the total effort of all work items that have entered implementation but have not yet been released to production.
The Cost of Wip metric shows the economic cost of carrying too much work in progress. It is well known that high Wip levels lead to high cycle times and overall reduced efficiency in software development, but it has traditionally been hard to quantify this cost directly. Polaris makes this easy.
In our advisory practice, we use this metric to guide teams to move from relying on utilization based allocation strategies to ones that optimize flow of work through the delivery pipeline.
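A small sketch of the aggregation behind this metric, assuming each work item carries an effort measurement and a lifecycle state: Cost of Wip is simply effort summed over everything that has entered implementation but has not yet been released. The item data below is invented for the example.

```python
def cost_of_wip(items: list[dict]) -> float:
    """Total effort (developer days) of items in implementation that are not yet released."""
    return sum(
        item["effort_dev_days"]
        for item in items
        if item["state"] != "released"
    )

items = [
    {"id": "DEV-101", "state": "in_progress", "effort_dev_days": 3},
    {"id": "DEV-102", "state": "in_review",   "effort_dev_days": 5},
    {"id": "DEV-103", "state": "released",    "effort_dev_days": 2},
]
print(cost_of_wip(items))  # 8 developer days currently tied up in WIP
```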
Effort Throughput
This is defined as the fraction of available engineering capacity in a given period that manifested as work actually released to production in that same period. It sheds light on process efficiency and helps analyze overall capacity allocation decisions, and it is a powerful counterpart to the volume-based throughput measurements in use today.
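In code, the calculation reduces to a simple ratio; the sketch below assumes both quantities are expressed in developer days, with the figures invented for the example.

```python
def effort_throughput(released_effort_dev_days: float,
                      available_capacity_dev_days: float) -> float:
    """Fraction of available capacity that was released to production in the same period."""
    if available_capacity_dev_days == 0:
        return 0.0
    return released_effort_dev_days / available_capacity_dev_days

# Example: 5 developers * 20 working days = 100 developer days of capacity in the period;
# the effort of work items released in that period sums to 62 developer days.
print(effort_throughput(62, 100))  # 0.62
```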
Feature Costs
Work item level effort can be aggregated and analyzed along multiple dimensions, such as features, work item types, releases, and epics, to understand development costs along business-facing dimensions. This is useful for planning effort budgets for future work and for understanding development costs in more granular detail than previous models allow.
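A minimal sketch of this kind of roll-up, assuming per-item effort and a business-facing attribute such as an epic are available; the items and epics shown are hypothetical.

```python
from collections import defaultdict

def effort_by_dimension(items: list[dict], dimension: str) -> dict[str, float]:
    """Aggregate work item effort along a business-facing dimension (feature, epic, release, ...)."""
    totals: dict[str, float] = defaultdict(float)
    for item in items:
        totals[item[dimension]] += item["effort_dev_days"]
    return dict(totals)

items = [
    {"id": "DEV-101", "epic": "checkout-v2", "effort_dev_days": 3},
    {"id": "DEV-102", "epic": "checkout-v2", "effort_dev_days": 5},
    {"id": "DEV-103", "epic": "search",      "effort_dev_days": 2},
]
print(effort_by_dimension(items, "epic"))  # {'checkout-v2': 8.0, 'search': 2.0}
```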
The quality model in Ergonometrics® starts with conventional measurements of external quality, which focus on customer-facing defects, and extends them to include measurements of internal quality, focusing on the engineering practices that impact the quality of the code base as work is delivered.
Since the Ergonometric Graph has a detailed model of file and line level changes that go into implementing each work item, it allows us to analyze how the code base evolves as work items are implemented.
This is a much richer source of insight than analyzing commit history at the commit granularity: the business context represented by a work item lets us analyze collections of logically related changes needed to implement it, changes that would not normally be revealed by static analysis of a code base or by analyzing single commits in isolation.
The first application we are currently bringing out of the R&D phase is initial quality. We analyze the code changes made for a work item and measure whether tests were written during engineering for that work item.
Code coverage, which measures what fraction of a code base is exercised by a test suite, is the most prevalent metric for test quality. However, it is backward looking and typically not easily actionable, since it is a property of the code base as a whole.
Ergonometrics®, together with Continuous Measurement, enables test quality inspections at the work item level while work is in progress, allowing teams to know whether tests are being written for specific work items and whether untested code is being promoted to production for specific work items and pull requests. It is a powerful tool for bringing testing discipline to a team and ensuring that the policies around it are followed.
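As a rough illustration, a naive version of this inspection could simply check whether any of the files changed for a work item look like test code. The path-based heuristic below is an assumption made for the example; the analysis performed on the Ergonometric Graph® is presumably richer than this.

```python
def has_test_changes(changed_files: list[str]) -> bool:
    """Naive initial-quality check: did the changes for a work item touch any test files?"""
    return any("test" in path.lower() for path in changed_files)

# Files changed across all commits linked to one (hypothetical) work item
changed = ["src/billing/invoice.py", "src/billing/tax.py"]
print(has_test_changes(changed))  # False: untested code would be heading to production
```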
This is still an emerging area of applications for Ergonometrics®, and we will be announcing a number of novel features in Polaris based on the quality model in the near future.