Measuring DevOps Performance: The Key Indicators

As rapid technological advancements continue to intensify competition across industries, shareholders are applying pressure on organisations to improve their performance and keep up with the fast pace of business, writes Sacha Labourey, CEO of CloudBees. To prove their value, organisations are finding themselves having to present key metrics that measure their IT performance – however, these metrics aren’t always the most suitable.


Before presenting metrics, businesses need to agree on a set of IT key performance indicators (KPIs). This can be difficult, as large companies often have many invested parties, each with their own preferred indicators. In order to better understand how to choose the right KPIs, businesses must start by defining a set of principles around the indicators. These principles should help organisations understand what meaningful indicators they need to obtain, and how.

The 13 Aspects to Consider When Choosing KPIs:

1: Collect only usable indicators: What would happen if the proposed indicator changed? If no concrete action comes to mind, then the indicator is probably unnecessary.

2: Focus on value for the company, not workload for the IT department: The fundamental purpose of DevOps is to increase value for the company. This must be reflected in the indicators. IT performance measures are nevertheless relevant if they affect value for the company and meet the first criterion above.

3: Collect some easy-to-interpret indicators: Do not make it too difficult to collect the KPIs.

4: Associate indicators with a role: Associate enterprise value indicators with the company, programme indicators with programme managers, technical debt indicators with technicians, etc. Identify the target community and purpose, and do not overwhelm users with indicators. Think of indicators presented in an “à la carte” format, where users can choose those that are most relevant to them, to add to their personal dashboard.

5: Automate the collection and organisation of all indicators: This should be obvious to the DevOps professional who seeks to automate the provision of technical capabilities. First, manual collection of indicators is very time-consuming and counterproductive to the programme. Secondly, it is impossible to obtain real-time information manually; collection then becomes an after-the-fact exercise rather than a proactive activity.

6: Display all indicators in a unified dashboard: Do not expect users to look for them everywhere. This unified dashboard can be customised for the user and team, as discussed in point four. The key is that the user can find all the necessary indicators in one place.

7: Focus on raw figures rather than ratios: With the exception of the change failure rate (which is actually a ratio), collecting and displaying raw data is highly recommended. This is particularly relevant for indicators aimed at technicians, as it encourages the overall use of these figures within the team. It also discourages using indicators at group level to compare the performance of different teams against each other without any context.
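
As a sketch of the distinction, assuming a hypothetical list of per-deployment records (the dates and failure flags below are invented), the raw figures can be published directly while the change failure rate remains the one derived ratio:

```python
from datetime import date

# Hypothetical raw deployment records for one team: (date, caused_failure)
deployments = [
    (date(2019, 3, 1), False),
    (date(2019, 3, 4), True),
    (date(2019, 3, 8), False),
    (date(2019, 3, 12), False),
    (date(2019, 3, 15), True),
]

total = len(deployments)  # raw figure: number of deployments
failures = sum(1 for _, failed in deployments if failed)  # raw figure: failed deployments
change_failure_rate = failures / total  # the one ratio worth keeping

print(total, failures, round(change_failure_rate, 2))  # 5 2 0.4
```

Publishing `total` and `failures` alongside the rate keeps the context visible: a 40% failure rate means something quite different over 5 deployments than over 500.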

8: Use the right indicator as appropriate: The appropriate indicator varies according to the situation. For some products, it is speed; for others, stability and availability. This essential principle concerns the user more than the supplier. The main indicator of success is not necessarily the same for all products.

9: Focus on the team, not on individual indicators: DevOps aims to foster a culture of cooperation and teamwork. As the culture begins to change, team recognition will take precedence over individual recognition, and indicators need to reflect this.

10: Do not compare teams, compare trends: Given point eight above (key indicators differ between teams), each team has distinct objectives. Moreover, comparing teams is of little use when many indicators are raw figures. However, product teams, business units and key partners do need to compare trends within their own teams and units.
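
One way to put the trend idea into practice is a simple moving average over a team's own history; the per-sprint lead times below are hypothetical:

```python
def trend(values, window=3):
    """Simple moving average, so a team can track its own trajectory
    rather than compare its raw numbers against another team's."""
    return [
        round(sum(values[i - window + 1 : i + 1]) / window, 1)
        for i in range(window - 1, len(values))
    ]

# Hypothetical lead times (days) per sprint for one team
lead_times = [12, 11, 13, 9, 8, 7]
print(trend(lead_times))  # [12.0, 11.0, 10.0, 8.0]
```

A falling series is the signal worth reporting upwards, whatever the absolute numbers are relative to other teams.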

11: Detect outliers: While avoiding direct comparisons between teams, it is always wise to look for outliers. Once identified, determine why some teams perform much better or worse than others. This often allows lessons to be learned that will benefit other teams.
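
A minimal sketch of this kind of screen, using the median absolute deviation so that the outlier itself does not distort the threshold (team names and figures are invented):

```python
import statistics

# Hypothetical mean lead times (days) per team
team_lead_times = {
    "alpha": 4, "bravo": 5, "charlie": 6,
    "delta": 5, "echo": 21, "foxtrot": 4,
}

values = list(team_lead_times.values())
med = statistics.median(values)
# Median absolute deviation: robust to the very outliers we want to find
mad = statistics.median(abs(v - med) for v in values)

# Flag any team far from the pack (3x MAD is a common rule of thumb)
outliers = {t: v for t, v in team_lead_times.items() if abs(v - med) > 3 * mad}
print(outliers)  # {'echo': 21}
```

The point of flagging "echo" is not to rank it against the others, but to prompt the question of why it differs, so that whatever is learned can benefit the remaining teams.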

12: Implementation time = time to production, not time to completion: This is a fundamental principle. The initial stages of adoption are often accompanied by initiatives to reduce the time to production. The final step, continuous deployment, usually comes later; until then, it is essential that implementation time is measured up to production, and nothing else.
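
To make the principle concrete, a small helper (the function name and timestamps are hypothetical) can measure elapsed time strictly from the start of work to the production deployment, deliberately ignoring any "done" or "merged" milestone:

```python
from datetime import datetime

def implementation_time(first_commit: str, deployed_to_production: str) -> float:
    """Elapsed days from work starting to running in production.
    'Completed' or 'merged' timestamps deliberately play no part here."""
    fmt = "%Y-%m-%d %H:%M"
    start = datetime.strptime(first_commit, fmt)
    live = datetime.strptime(deployed_to_production, fmt)
    return (live - start).total_seconds() / 86400  # seconds per day

# Hypothetical change: development finished in 2 days, but only live after 9
print(round(implementation_time("2019-03-01 09:00", "2019-03-10 09:00"), 1))  # 9.0
```

Measured this way, a change that sat "complete" for a week still shows a nine-day implementation time, which is exactly the gap the metric is meant to expose.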

13: Use secondary indicators to limit adverse effects: Focusing on time-to-market, for example, can affect quality. If the focus is on a specific indicator, take its potential disadvantages into account and monitor them as trends. This applies even if all the consequences are knowingly accepted.

Building on the Foundation

These principles form the foundation on which businesses can build their DevOps KPIs. Once created, businesses should then reflect on their KPIs using four different lenses: time lost due to missed opportunities, value points for the company, ROI signals, and unintended consequences.

  • Time lost due to missed opportunities

It is essential for large companies to reconnect their business activities with their IT operations. The support of the product manager is vital to ensure that appropriate initiatives are carried out on time. To determine whether the company is sufficiently involved, businesses should measure the time elapsed between the availability of deliverables for production and the actual start of production. If completed work does not actually need to go into production, it may be more appropriate to assign the people concerned to other projects. Using this indicator, account and business unit managers can identify potential optimisation opportunities.
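
This elapsed-time measure can be sketched as follows, with invented deliverable names and dates; the deliverable that waited longest is the strongest missed-opportunity candidate:

```python
from datetime import datetime

FMT = "%Y-%m-%d"

# Hypothetical deliverables: when each was ready vs. when production started
deliverables = [
    ("feature-a", "2019-02-01", "2019-02-03"),
    ("feature-b", "2019-02-05", "2019-02-20"),
    ("feature-c", "2019-02-10", "2019-02-11"),
]

def idle_days(ready: str, live: str) -> int:
    """Days a finished deliverable sat waiting before production began."""
    return (datetime.strptime(live, FMT) - datetime.strptime(ready, FMT)).days

waits = {name: idle_days(ready, live) for name, ready, live in deliverables}
print(waits)                      # {'feature-a': 2, 'feature-b': 15, 'feature-c': 1}
print(max(waits, key=waits.get))  # feature-b
```

A long wait on "feature-b" would prompt exactly the question the article raises: was the business sufficiently involved, or should the people concerned have been working on something else?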

  • Value points for the company

Product teams often assess the effort involved in delivering each request, in the same way that value points for the company measure the value of each request for the product owner. The purpose of these two complementary measures is to balance effort against value, a principle the Agile community has adopted to facilitate request management. Businesses should not standardise the value of these points across different product owners and teams: points make sense within a team, not in comparison between teams. Indicators of this type help the company identify when an active product enters routine operation or, when resources are limited, which in-flight initiatives may be overlooked if new priorities arise.

  • ROI Signals

Executives often discuss deadlines and budget, but rarely the technical state. To encourage discussion on this subject, it may be useful to make it a key indicator on executive dashboards. Dashboards with ROI signals (red, orange, green) are very common in large companies. Project managers and programme managers then indicate whether deadlines and budgets are being respected by means of these signals.

  • Unintended consequences

Beware of unintended consequences and remember that each measure leads to a change in behaviour. Businesses should be aware that any collection of indicators can influence behaviour in unexpected ways. In an extreme case, for example, teams could be instructed never to let product quality go into “the red”. While admirable, such an instruction may put an end to any proactive reporting of problems.


The post Measuring DevOps Performance: The Key Indicators appeared first on Computer Business Review.