Feature prioritization is paved with problems, from a lack of relevant data to emotionally charged decisions to a mental minefield of guesstimates. A project manager must prove their mettle by crafting a product roadmap that is aligned with organizational goals instead of giving in to their ‘gut feeling’.
Their task is to establish a framework that determines which products, features, and initiatives to prioritize on their roadmaps.
The Problem with Software
Building high-quality software is easier said than done because the development process is always in a state of flux. You have new tools, libraries, and features emerging on a daily basis.
a) There Are Too Many Features
Without any useful features, a software product is as good as dead.
But just as easily, software can become hard to use if it’s plagued by too many features – a phenomenon known as ‘feature creep’. Figuring out how to identify and avoid feature creep is an overwhelming part of the development process – worse still is the fact that everyone has opinions about what counts as useful and what may be regarded as ‘bloatware’.
b) There Are Too Many People With Opinions About Those Features
Different stakeholders have their own opinions about features, which creates difficulties for the product owner as the stakeholders struggle to agree on priority. The sales team wants one thing, while the CTO says something else entirely. Without a proper roadmap to focus on, it becomes virtually impossible to meet project deadlines.
Project prioritization is difficult in more ways than one:
- It feels more rewarding to work on features you would use yourself instead of features that focus on the bigger picture
- It’s tempting to focus on shiny new objects instead of run-of-the-mill technologies
- It’s exciting to dive into new ideas instead of features that you know you can deliver
- It’s easy to underestimate the effort one feature will require compared to another
For obvious reasons, product prioritization is not easy to sort out without a framework. You’ve got a list of unprioritized tasks and features scattered in front of you.
Who decides what gets worked on? The product owner.
And how does the product owner make their decisions? With feature prioritization.
This is where RICE prioritization comes in.
RICE begins from the philosophical standpoint that each feature can be viewed through the lens of its potential reach, impact, confidence, and effort (RICE). Taken together, these components inform the overall “RICE score” for a given product feature, and the scores allow each feature to be considered in the context of how it compares to other potential features.
Let’s examine the RICE criteria in detail. RICE establishes a reliable scoring system that helps you consider each factor about a feature with a bird’s eye view of the whole project. The scoring system for prioritization is designed to balance cost and benefits.
Reach
Reach is the first factor in determining the RICE score and is used to estimate how many prospects each feature will affect in a given timeframe. For example, you can ask yourself, “How many prospects will use this feature in the next six months?” If your answer is 200, then your reach score is 200.
Evaluating reach within the context of other potential features is a simple way to determine what percentage of your audience will be positively impacted by the development of the feature.
Obviously, hard data should be used whenever possible to inform reach estimates, and ideally, the same methodology should be applied across the board.
Impact
Impact is the second factor used to quantify your RICE score and attempts to measure, numerically, how much a proposed feature will drive your results. Often, impact is best viewed through the lens of the potential positive growth of already established KPIs. Thus, impact metrics might include:
- Increased website visitors
- Increased conversion rates
- Increased average order size
Like all RICE measurements, you’ll need to determine on a relative basis the potential impact of a given feature. A popular framework to deploy here is a five-tiered scoring system:
- 3 = massive impact
- 2 = high impact
- 1 = medium impact
- 0.5 = low impact
- 0.25 = minimal impact
These numbers are factored into the final score to prioritize the feature. Though this approach is hardly precise, it at least assigns a value to the impact, something few product owners take the time to do.
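As a quick illustration, the five-tier scale above can be captured in a small lookup table. The tier labels and function name here are one possible convention, not part of RICE itself:

```python
# The five-tier impact scale described above.
# Tier labels are illustrative; use whatever wording your team prefers.
IMPACT_SCALE = {
    "massive": 3.0,
    "high": 2.0,
    "medium": 1.0,
    "low": 0.5,
    "minimal": 0.25,
}

def impact_score(tier: str) -> float:
    """Translate a tier label into its numeric impact multiplier."""
    return IMPACT_SCALE[tier]
```

A feature judged “high impact,” for instance, contributes a multiplier of 2 to its final score.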
Confidence
Project management has a somewhat chaotic relationship with science – and the ‘Confidence’ metric is a testament to that. While you should rely on data as much as possible, sometimes you’ll have no recourse other than your intuition and gut feeling.
A confidence percentage can make things somewhat easier. It assigns a percentage to your estimates that directly correlates to your ability to predict the outcome of the reach and impact scores we’ve already established.
Let’s go over a few examples:
- Feature 1: We have quantified several previous results and have demonstrated our ability to predict likely outcomes. The feature gets at least a 99% confidence score.
- Feature 2: We may have some data about how this potential feature will be received, but haven’t tested it enough to have a large degree of certainty. This feature may get an 80% confidence score.
- Feature 3: We might have a new proposed “game changer” feature that has the potential for huge ROI. However, because it’s so out of the box, we’re uncertain how it will be received. For this feature, a 50% or less confidence score could be applied.
Effort
The first three letters of the RICE analysis are used to quantify the potential of a given feature. However, the difficulty of executing on that feature must be weighed against the potential ROI to determine our final RICE score. For that, we measure “effort,” the final component of our RICE score.
Depending on your product, effort could simply take into account story points, or engineering level of effort. However, if a new feature requires new promotion or training, that should be taken into account as well. Here, we’re looking to quantify what the business will be required to contribute to generate the potential returns.
As with all RICE inputs, how you measure “effort” matters a bit less than the fact that you measure it consistently. Whether you have an agile team with a predictable story-point velocity per sprint or you prefer to use terms like ‘person-months,’ what matters is that effort is measured the same way across the board.
Calculating the RICE Score
Once you have estimated all these numbers, it’s time to combine them into a single score called “total impact per time worked”. Here’s what it looks like:

RICE Score = (Reach × Impact × Confidence) / Effort
Assuming each of the individual elements is calculated in a consistent manner, each of your features will now have a numerical score which reflects its potential return in relation to what will be necessary to bring the feature to market.
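The whole calculation fits in a few lines of code. Here is a minimal sketch in Python, assuming reach is counted per timeframe, impact uses the five-tier scale, confidence is a fraction, and effort is in person-months; the feature names and numbers are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # prospects affected in the chosen timeframe
    impact: float      # 0.25-3 on the five-tier scale
    confidence: float  # 0.0-1.0 (an 80% confidence score becomes 0.8)
    effort: float      # person-months; must be greater than zero

    @property
    def rice(self) -> float:
        # "Total impact per time worked":
        # (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# A hypothetical backlog of candidate features.
backlog = [
    Feature("In-app search", reach=500, impact=2.0, confidence=0.8, effort=4),
    Feature("Dark mode", reach=300, impact=0.5, confidence=1.0, effort=1),
    Feature("AI assistant", reach=800, impact=3.0, confidence=0.5, effort=12),
]

# Rank the backlog, highest RICE score first.
for feature in sorted(backlog, key=lambda f: f.rice, reverse=True):
    print(f"{feature.name}: {feature.rice:.0f}")
# Prints: In-app search: 200, Dark mode: 150, AI assistant: 100
```

Note how the high-risk “AI assistant” ranks last despite its huge reach and impact: its low confidence and heavy effort drag the score down, which is exactly the trade-off RICE is designed to surface.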
Pros and Cons of RICE
Pros of RICE
- Shows a more comprehensive picture: RICE includes a number of versatile factors to show project managers the bigger picture. The score is based on data instead of emotions.
- Actionable Metrics: This feature prioritization technique is mostly based on actual data and KPIs, which can be used to make accurate estimates.
- Valuable to Customer: Because RICE uses metrics rooted in user engagement, it also takes into account user satisfaction. In other words, the user experience is a central component of the RICE method.
- Built for Scalability: RICE applies an individual score to each potential product feature. As such, the framework becomes more useful as your potential feature list swells. While RICE might seem like overkill on a backlog with five features, it could easily become a life-saver on a product with eighty.
Cons of RICE
- It’s time-consuming: RICE requires analyzing potential features across four different metrics. As such, product managers are tasked with considering potential outcomes before given features have seen the light of day.
- The data isn’t always available: While reach and impact are metrics well worth chasing, for all but the most mature products, they’re also often difficult to measure. While the proper RICE answer would be to lower your confidence score accordingly, this could result in an amazing feature never being released because its score won’t bubble up to the top.
- Discipline is critical: As mentioned previously, RICE is an equation derived from four different inputs. If you calculate any of the individual metrics inconsistently across even two potential features, your results are inherently flawed. Thus, RICE succeeds most where it’s applied most consistently.
The RICE framework has its share of fans, and for good reason. It offers a mathematical framework for prioritizing the biggest of feature lists. At the same time, on relatively light product backlogs, it can easily seem like overkill. Moreover, those tasked with building MVP-type products without predecessors sometimes find it impossible to populate the numbers with any real confidence, rendering the analysis moot.