Persona development is a common and powerful component of marketing strategy and product development, ensuring that your team is working with its customers in mind. Less common, but arguably as powerful, is focusing on what those personas are actually trying to accomplish. These “jobs to be done,” as defined by writer Anthony Ulwick, form the basis of “outcome-driven innovation,” described in Ulwick’s book What Customers Want.
Organizations seeking a data-driven approach to feature prioritization may find themselves drawn to Ulwick’s approach. Let’s explore further.
Less About Personas, More About Jobs to Be Done
All prioritization frameworks attempt to “satisfy” customers; where outcome-driven innovation (ODI) differs is in its relentless focus on the tasks (or jobs) customers wish to complete, which inform that satisfaction.
Ulwick posits that within any interaction, a customer is looking for an exchange of value. In return for money or time, they are seeking to complete a job. These jobs vary depending on the context of the situation, but they reflect the goals of the customer. A key step in ODI is defining what these jobs to be done consist of.
Ulwick’s suggested language for defining jobs to be done is similar – but not identical – to the user stories commonly used in agile.
In agile development, the user story is typically structured as follows:
As a [role], I want [capability] so I [benefit].
This statement within ODI would be slightly rephrased:
When [situation], I want to [task] so I can [expected outcome].
When compared, we see ODI replaces the role of the user with the context of the moment. Ulwick suggests that an expected outcome is more objective than a desired benefit, and thus more useful, particularly when dealing with engineers and developers tasked with implementing said features.
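To make the contrast concrete, here is a minimal sketch of how a team might record such statements in a structured form; the field names and the example job are illustrative assumptions, not a schema Ulwick prescribes.

```python
from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    # Fields mirror the ODI template above; the structure itself is an
    # illustrative assumption rather than part of Ulwick's framework.
    situation: str         # the "When [situation]" context
    task: str              # the "[task]" the customer wants to perform
    expected_outcome: str  # the "[expected outcome]" they are after

    def as_statement(self) -> str:
        return f"When {self.situation}, I want to {self.task} so I can {self.expected_outcome}."

# A hypothetical job statement, purely for illustration
example = OutcomeStatement(
    situation="reviewing last quarter's support tickets",
    task="group tickets by recurring complaint",
    expected_outcome="spot the most common points of friction quickly",
)
print(example.as_statement())
```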
Yet merely identifying the jobs to be done isn’t the only way ODI differs from other prioritization frameworks. So is how these jobs are categorized and analyzed.
The Axes of Jobs to Be Done
Ulwick next suggests that for proper prioritization, a grid should be used, with the X axis representing the relative importance of a given offering, and the Y axis corresponding to customer satisfaction with existing solutions.
Here, Ulwick’s approach could be compared to Kano analysis, which also plots satisfaction on the vertical axis, yet tracks functionality on the horizontal. Kano allows that a given feature could be implemented so poorly that it actively dissatisfies users, a distinction you won’t find in ODI.
Once an offering has been released, Ulwick suggests gathering ample data from users, asking them to rate both the importance of the job and their satisfaction with the current solution, then plotting the results on the graph. To this point, ODI has offered only a slight departure from other models, but now it diverges more significantly.
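How those individual ratings become a single pair of coordinates is up to the practitioner; one convention commonly associated with ODI is to score importance and satisfaction as the share of respondents rating an item 4 or 5 on a five-point scale, expressed on a 10-point scale. The sketch below assumes that convention and uses made-up responses.

```python
def top_box_score(ratings: list[int]) -> float:
    """Share of respondents rating 4 or 5 on a 5-point scale, scaled to 0-10.
    This top-two-box convention is assumed here, not mandated by the book."""
    if not ratings:
        return 0.0
    top = sum(1 for r in ratings if r >= 4)
    return 10.0 * top / len(ratings)

# Hypothetical survey responses for a single job to be done
importance_ratings = [5, 4, 5, 3, 4, 5, 2, 4]
satisfaction_ratings = [2, 3, 1, 2, 3, 2, 4, 2]

importance = top_box_score(importance_ratings)      # 7.5
satisfaction = top_box_score(satisfaction_ratings)  # 1.25
print(f"importance={importance:.1f}, satisfaction={satisfaction:.2f}")
```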
Finding the Sweet Spot with ODI
Once appropriate data has been gathered and plotted (and for what it’s worth, Ulwick suggests a sample size ranging from 150 to 600 respondents, significantly more than other approaches), practitioners will find their results fitting within three distinct triangles on the graph.
The upper-left triangle represents users who – on the whole – are more satisfied with existing solutions than the importance they place on a given feature would warrant. In ODI-speak, this market has been “overserved,” and a new offering of this feature will likely become lost in the sea of competition.
A better opportunity exists just to the right, where customers are relatively (but not completely) satisfied with existing solutions. This segment is considered to be “appropriately served.”
The true opportunity lives in the final triangle to the far right. Here, the pattern is clear: a market that places great importance on an offering yet is – on the whole – dissatisfied with the available options.
This, according to Ulwick, represents the opportunity to respond to an underserved market.
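In code, the logic of the three regions might be approximated roughly as follows. The cut-off separating “appropriately served” from “underserved” is an assumption made for illustration; on Ulwick’s chart these regions are drawn as bands on the plot rather than defined by fixed thresholds.

```python
def classify_segment(importance: float, satisfaction: float) -> str:
    """Rough segmentation of a plotted job (scores on a 0-10 scale).
    The numeric margin below is illustrative only."""
    if satisfaction >= importance:
        return "overserved"            # satisfaction outpaces importance
    if importance - satisfaction <= 2:
        return "appropriately served"  # assumed margin, not from the book
    return "underserved"               # high importance, low satisfaction

print(classify_segment(7.5, 1.25))  # -> "underserved"
```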
Ranking Jobs to Be Done
While the ODI graph allows for a quick visual reference, product owners rarely (if ever) need to plot a single feature. More often, they compare several. Thus, plotting all of them can render analysis difficult.
To allow for swift feature ranking, ODI relies upon an “opportunity algorithm,” a simple formula expressed as follows:
Importance + (Importance - Satisfaction) = Opportunity
Written in this manner, added weight is given to the importance of a specific offering relative to customer satisfaction with existing solutions, resulting in an “opportunity score.”
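As a minimal sketch, the calculation might look like the following. Practitioners commonly floor the (Importance - Satisfaction) term at zero so that a job’s opportunity score never falls below its importance; that variant is assumed in the code below.

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance + (Importance - Satisfaction), with the gap
    floored at zero as is commonly done in practice (an assumption here)."""
    gap = max(importance - satisfaction, 0.0)
    return importance + gap

print(opportunity_score(7.5, 1.25))  # -> 13.75
```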
To evaluate the results, follow these guidelines:
- Opportunity scores greater than 15 represent extreme areas of opportunity that should not be ignored.
- Opportunity scores between 12 and 15 are “low-hanging fruit” ripe for improvement.
- Opportunity scores between 10 and 12 are worthy of consideration, especially when discovered in a broad market.
- Opportunity scores below 10 are viewed as unattractive in most markets, offering diminishing returns.
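Translated directly into code, those bands might look like the following sketch; how a score landing exactly on a boundary is handled is an assumption, since the guidelines above leave it open.

```python
def opportunity_band(score: float) -> str:
    """Map an opportunity score to the bands described above.
    Boundary handling (exactly 10, 12, or 15) is an assumption."""
    if score > 15:
        return "extreme opportunity - should not be ignored"
    if score >= 12:
        return "low-hanging fruit - ripe for improvement"
    if score >= 10:
        return "worth considering, especially in a broad market"
    return "unattractive in most markets"

print(opportunity_band(13.75))  # -> "low-hanging fruit - ripe for improvement"
```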
For those companies seeking to prioritize many features at a time, the “opportunity score,” similar to RICE prioritization, offers an objective approach to ranking all options.
Should Your Company Utilize ODI?
Though Ulwick received his first of twelve patents for his process more than two decades ago, ODI’s framework continues to provide significant value to product owners worldwide. While originally embraced by companies with physical products, the approach is just as useful for those in software development.
Moreover, the framework scales up better than many tactical feature prioritization techniques, and can be applied equally well to entirely new product offerings and to simple new features on existing solutions.
At the same time, practitioners would be wise to consider the larger datasets of feedback that Ulwick recommends, as obtaining hundreds of survey results can be a challenge in many organizations, as can structuring those surveys to obtain the most reliable data.
That said, the greatest value for software developers might be ODI’s focus on “jobs to be done,” as framing customer concerns in this manner is directly applicable to software development. When these jobs to be done are paired with the granular approach proposed in Ulwick’s opportunity algorithm, product owners will find themselves with an objective process that’s clearly superior to less rigorous approaches.