Product Backlog Prioritisation Techniques - which is the best?

As a product owner you probably have little time and plenty of challenges to juggle. In sleepless nights you ask yourself: What belongs in my backlog? How do I best prioritise it? What you want is a list of required tasks and their optimal order of execution. This article introduces what a product backlog is all about and presents common prioritisation techniques.




What are product backlog prioritization techniques?

Backlog prioritization techniques describe how to analyze and rank backlog items in order to decide which item to tackle first. Priorities are usually expressed as numbers that model the positive or negative impact of each item, making items comparable.

We compare the following methods:

  • Value vs. Cost Matrix
  • ICE Scoring
  • RICE Scoring
  • Kano Model
  • Stack Ranking
  • MoSCoW Method


What is a product backlog?

A product backlog contains the tasks required to manage a product. Traditionally it holds improvements and enhancements, things to build or refurbish. If you're like most product owners you treat your product backlog like a dumping ground for everything that comes flying at you: customer requests, strategic enhancements, ops improvements and technical refactoring. Naturally, you (think you) know your big-ticket items, but with so many backlog entries, things can be overlooked. A prioritized list of items helps manage product improvement tasks, like a product's to-do list.


What is the best prioritization method?

Most backlog prioritization techniques fall into one of two categories: they are either ranking formulas or matrix approaches.

  • Ranking formulas help sort a list of items, usually feature requests and other backlog entries.
  • Matrix approaches benchmark backlog items in two dimensions to make an informed decision weighing benefit against disadvantage.


Matrix approaches



The basic Value vs. Cost matrix plots each backlog item by the value it delivers against the cost of building it, favoring items that promise high value at low cost. The Value vs. Complexity flavor additionally considers risk as a special type of cost measure. Likewise, there are flavors using other KPIs to represent the cost aspect: Value vs. Risk, Value vs. Size, Value vs. Lead Time, Value vs. Dependencies. Ultimately, they all consider the cost that comes with building a certain feature.
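As a sketch, sorting items into the matrix's four quadrants can be automated once value and cost scores exist. The item names, 1-10 scales, threshold and quadrant labels below are assumptions for illustration, not part of any fixed standard:

```python
def quadrant(value, cost, threshold=5):
    """Classify an item scored 1-10 for value and cost into a matrix quadrant."""
    if value >= threshold and cost < threshold:
        return "quick win"   # high value, low cost: do first
    if value >= threshold:
        return "big bet"     # high value, high cost: plan carefully
    if cost < threshold:
        return "fill-in"     # low value, low cost: do when capacity allows
    return "money pit"       # low value, high cost: avoid

# Invented backlog items with (value, cost) scores
backlog = {
    "SSO login": (8, 3),
    "Dark mode": (4, 2),
    "Full rewrite": (7, 9),
    "Legacy export": (2, 8),
}

for item, (value, cost) in backlog.items():
    print(f"{item}: {quadrant(value, cost)}")
```

The threshold is a deliberate simplification; in practice teams eyeball the scatter plot rather than apply a hard cut-off.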

Pros of Value vs Cost Matrix
  • A bread-and-butter approach; failing to weigh value against cost at all is a mistake
  • Straightforward, simple and easy to understand
  • Drives towards easy wins
Cons of Value vs Cost Matrix
  • Tends to ignore long term, strategic aspects
  • Doesn’t work well when backlog items differ in orders of magnitude


Ranking formulas


ICE Scoring

ICE stands for Impact, Confidence, Ease. Impact quantifies the value of the backlog item in your key metric, such as revenue or user count. Confidence measures how certain you feel about the expected impact. Ease measures how easy the backlog item is to build.
The owner or team scores impact, confidence and ease on a scale of 1-10. The ICE score, calculated as Impact * Confidence * Ease, condenses the three-dimensional scoring into one ranking.
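As an illustration, the scoring and ranking can be sketched in a few lines of Python; the item names and scores below are invented:

```python
def ice_score(impact, confidence, ease):
    # Each dimension is rated 1-10 by the owner or team.
    return impact * confidence * ease

# Invented backlog: (name, impact, confidence, ease)
backlog = [
    ("Checkout redesign", 8, 6, 4),
    ("Faster search", 6, 8, 7),
    ("New onboarding", 7, 5, 5),
]

# Highest ICE score first
ranked = sorted(backlog, key=lambda item: ice_score(*item[1:]), reverse=True)
for name, impact, confidence, ease in ranked:
    print(name, ice_score(impact, confidence, ease))
```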

Pros of ICE scoring
  • Good for ranking backlog items that compete for the same resources.
  • Good for ranking backlog items whose cost and outcome you understand reasonably well.
Cons of ICE scoring
  • Highly subjective: items are rated by the owner or team, with no real user feedback involved.
  • Not stable over time: the ranking largely depends on which topics are trending or under pressure at scoring time.
 

RICE Scoring

RICE stands for Reach, Impact, Confidence, Effort. Reach quantifies the number of customers who will benefit from the item/feature; ideally this is based on actual feedback data. Impact estimates the benefit per user. Confidence measures how certain you feel about the estimated impact. Effort measures the cost of building the item, typically development cost. The RICE score is calculated as Reach * Impact * Confidence / Effort.
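A minimal sketch of RICE ranking in Python. The items, reach figures and scales below are assumptions for illustration: reach as users per quarter, impact as a small per-user multiplier, confidence as a 0-1 fraction, effort in person-months:

```python
def rice_score(reach, impact, confidence, effort):
    # Unlike ICE, effort divides rather than multiplies:
    # expensive items are penalized instead of rewarded.
    return reach * impact * confidence / effort

# Invented backlog: (name, reach, impact, confidence, effort)
backlog = [
    ("Mobile app", 2000, 2.0, 0.8, 6),
    ("Email digest", 5000, 1.0, 0.9, 2),
]

# Highest RICE score first
ranked = sorted(backlog, key=lambda item: rice_score(*item[1:]), reverse=True)
for name, reach, impact, confidence, effort in ranked:
    print(name, round(rice_score(reach, impact, confidence, effort), 1))
```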

Pros of RICE
  • Includes user feedback data
  • Includes individual users’ experience
Cons of RICE
  • Time-consuming estimates 
  • Estimates wear down when the method is applied repeatedly
  • Highly subjective when not based on user feedback data
 

Kano Model

This model was created by Japanese researcher Noriaki Kano in the 1980s. It models user satisfaction to prioritize product backlog items. With Kano, backlog items are classified from a user/customer perspective into five categories: must-be, one-dimensional, attractive, indifferent and reverse. Must-be features are required by customers for the product to function. One-dimensional features are important and desirable for customers. Attractive features add unexpected value and satisfaction. Indifferent features have little or no value. Reverse features have a negative impact on customers. A major challenge for the Kano model is the quality of the scoring data, which in reality is usually generated by internal stakeholders instead of actual users.
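The classification is usually derived from paired survey questions: each user answers a functional question ("How do you feel if the feature is present?") and a dysfunctional question ("... if it is absent?"), and the answer pair maps to a category via an evaluation table. A sketch of that table in Python, following the common textbook layout (exact variants differ between sources):

```python
ANSWERS = ["like", "expect", "neutral", "live with", "dislike"]

# Rows: answer to the functional question (feature present).
# Columns: answer to the dysfunctional question (feature absent).
TABLE = [
    # like           expect          neutral         live with       dislike
    ["questionable", "attractive",   "attractive",   "attractive",   "one-dimensional"],  # like
    ["reverse",      "indifferent",  "indifferent",  "indifferent",  "must-be"],          # expect
    ["reverse",      "indifferent",  "indifferent",  "indifferent",  "must-be"],          # neutral
    ["reverse",      "indifferent",  "indifferent",  "indifferent",  "must-be"],          # live with
    ["reverse",      "reverse",      "reverse",      "reverse",      "questionable"],     # dislike
]

def kano_category(functional, dysfunctional):
    """Map one user's answer pair to a Kano category."""
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "dislike"))    # one-dimensional
print(kano_category("expect", "dislike"))  # must-be
```

In practice each feature is surveyed across many users and assigned the category that occurs most often.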

Pros of Kano
  • User perspective first: ranks backlog items by their customer value 
Cons of Kano
  • Ignores all non-user qualities like cost, risk and strategic fit
  • Highly subjective when not based on real user feedback data


Stack Ranking

When stack ranking, all backlog items are placed in order of priority. Items must be strictly ordered; there can be no items with equal priority: there is exactly one #1 item, exactly one #2 item, and so on. Items are prioritized only in relation to other items, so you do not need to determine cost, effort or any other measure that is difficult to obtain or easy to manipulate. The beauty of stack ranking lies in its simplicity and effectiveness.
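Because items are only compared pairwise, a new item can be slotted into a long stack with a handful of "does A outrank B?" questions via binary insertion. A minimal sketch, where the `prefer` callback is a hypothetical stand-in for the human judgment call:

```python
def insert_ranked(ranking, item, prefer):
    """Insert `item` so the list stays strictly ordered, highest priority first.

    `prefer(a, b)` returns True if a should rank above b; binary search
    keeps the number of judgment calls to O(log n) per item.
    """
    lo, hi = 0, len(ranking)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefer(item, ranking[mid]):
            hi = mid
        else:
            lo = mid + 1
    ranking.insert(lo, item)
    return ranking

# Toy judgment rule (shorter names win) standing in for a human decision.
ranking = []
for item in ["Refactor billing", "Fix login bug", "New API"]:
    insert_ranked(ranking, item, lambda a, b: len(a) < len(b))
print(ranking)  # ['New API', 'Fix login bug', 'Refactor billing']
```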

Pros of Stack Ranking
  • Simple and straight forward
  • Enforces clear priorities, less confusing compared to measure-driven techniques
Cons of Stack Ranking
  • Does not help determine priorities in the first place; priorities come down to a management decision
  • Priorities can change, forcing the stack to be re-ordered


MoSCoW Method

The MoSCoW prioritization model originates in agile software development. Developed by Dai Clegg in 1994, it places items into the categories Must have, Should have, Could have and Won't have. The term MoSCoW is derived from the first letters of these categories and is not related to Russia's capital. The categories work as follows.

Must Have
The product/project must not ship without this. Failing to deliver this item means overall failure, because you cannot ship without it. Typical Must Have items are core features, safety features and legal requirements.

Should Have
Should Have features are important, but not absolutely necessary to the success of your product. They are painful to leave out and their absence may hurt your product, but they can be delayed.

Could Have
Could Have items are desirable but less important than Should Have items. These are the typical nice-to-have items that can optimize a product further. Leaving them out or delaying them will cause less pain than a Should Have item.

Won’t Have
Won’t Have features are agreed to be non-critical by stakeholders and come with little or no payback. These items are kept in the backlog for later, or can even be dropped from the backlog entirely.
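As a sketch, a MoSCoW-tagged backlog can simply be grouped into its four buckets so release planning can walk them in order; the items and tags below are invented for illustration:

```python
from collections import defaultdict

# Walk order for release planning: Must before Should before Could before Won't.
ORDER = ["must", "should", "could", "wont"]

# Invented backlog items tagged with their MoSCoW category.
backlog = [
    ("User login", "must"),
    ("GDPR consent", "must"),
    ("CSV export", "should"),
    ("Theming", "could"),
    ("Gamification", "wont"),
]

buckets = defaultdict(list)
for name, category in backlog:
    buckets[category].append(name)

for category in ORDER:
    print(category, buckets[category])
```

Note that MoSCoW stops here: inside a bucket the items are still unordered, which is exactly the con listed below about equal priorities.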


Pros of MoSCoW Method
  • MoSCoW gives a clear category structure and helps manage a large backlog.
  • Can be aligned with the product strategy.
Cons of MoSCoW Method
  • Lack of rationale in determining the categories, largely subject to negotiation and judgement.
  • Does not help to rank items with equal priority.
  • Tends to favor new features over technical improvements.


Summary & Recommendation

Product owners can choose from multiple backlog prioritization techniques and pick what fits best in their environment. Usually, multiple KPIs are used to measure the value of backlog items and prioritize them. Large organisations tend to use formal approaches with product decisions driven by numbers, while small companies tend to use prioritization techniques as additional decision support. The final decision on what to build next is often taken by the product owner.


We recommend keeping it simple and using some flavor of cost vs. benefit as the main decision driver. Cost estimates depend heavily on the actual organization; usually this is the cost of development including a risk compensation. The estimate of benefit should be based on actual market and user data, ideally gathered through a feature voting or customer feedback channel.