During my time at Greenling and Organic, part of my job was to help the teams come up with a prioritization framework for product development. First step: talk to the customer. Know what they're really looking for. And don't forget about segmenting your customers (MECE: mutually exclusive, collectively exhaustive). Understanding what product or product features your customer needs (vs. wants) will only come from 1) studying your analytics like crazy or 2) the simpler option: talking to your customers. What drives their behavior? Why are they using your product? Are they using everything your product has to offer? These are just a few things to consider, but the end result should be a clear set of customer segments.
As an example, consider Greenling customers. Three major drivers of use came down to:
- Saving time / skipping the hassle of the grocery store,
- Health benefits of local, organic, pesticide-free food, and
- Cutting-edge consumers (want the best product available)
Each of these consumer types had very specific needs of the site, and we had to make sure whatever functionality we rolled out addressed each segment in a clear way. The next challenge we faced was determining how to map customer needs against business priorities. The simplest way to frame business needs is the typical customer journey:
For simplicity's sake, it's cleaner to think about this in three areas:
- Customer Acquisition - How do we acquire our customers?
- Customer Engagement - How do we get them to engage with us and drive revenue?
- Customer Retention - How do we lower our churn rate?
Ok, so you've now got your main customer needs mapped against your business needs. Now comes the fun part: testing each product feature against a battery of questions. Short of mapping functional dependencies, there's no easy way to objectively say "we need to build this feature before that one." Having dealt with this problem enough times, I crafted a way to make this decision more objective in nature.
Here's how you do this:
On your X-axis: Customer Segment focus points
On your Y-axis: Business Initiatives
Understandably, different businesses will prioritize different needs. In our case, we added additional weight to customer engagement, as our focus was driving usability and increasing customer lifetime value. This is why you see a feature score of 125 for the Engagement row: summing each cell value in the row (25) and multiplying by the assigned multiplier (5, on a scale from 1-5) gives the total value of 125 for that row.
Once you've scored the product feature, you've got a set of objective metrics. The feature above scored 310 out of a possible 500. In isolation, 310 means nothing. But against a feature that scores lower or higher, you now know why this feature has a higher development priority than other product features.
Visually, the above looks like this:
Do this for every product feature, print out the score cards, and plaster them up in your conference room. Anytime anyone has a question about why a feature is being built, you'll have a clear, objective answer.
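The row-and-column arithmetic above can be sketched in a few lines of Python. Everything here is a hypothetical example: the segment names, cell scores, and weights are illustrative placeholders, not the numbers from the actual scorecard.

```python
# A minimal sketch of the scoring matrix: columns are customer segments,
# rows are business initiatives. All names and numbers are hypothetical.

segments = ["time-saver", "health-focused", "cutting-edge"]

# One row per business initiative: a 1-5 score for how well the feature
# serves each customer segment, plus a 1-5 business weight for the row.
feature_matrix = {
    "Acquisition": {"scores": [3, 4, 2], "weight": 3},
    "Engagement":  {"scores": [5, 4, 5], "weight": 5},
    "Retention":   {"scores": [2, 3, 3], "weight": 2},
}

def score_feature(matrix, max_cell=5):
    """Sum each row's cells, multiply by the row weight, and total it up."""
    total, max_total = 0, 0
    for initiative, row in matrix.items():
        row_score = sum(row["scores"]) * row["weight"]
        max_total += max_cell * len(row["scores"]) * row["weight"]
        print(f"{initiative}: {sum(row['scores'])} x {row['weight']} = {row_score}")
        total += row_score
    return total, max_total

print("Columns:", ", ".join(segments))
total, max_total = score_feature(feature_matrix)
print(f"Feature score: {total} / {max_total}")
```

Running this for each candidate feature and comparing the totals gives the same apples-to-apples ranking the printed scorecards provide.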
In terms of calculating the score for each cell, this is really where collaboration helps. I've always been a fan of collaborative development, and when possible I try to create a workshop-type environment with stakeholders across all operating divisions. Holistic development (to me) means leveraging perspectives from everyone who touches the product and everyone who's closest to the customer - i.e. marketing, operations, customer service, engineering. Leveraging leadership from each group is generally fine (assuming the organization has a healthy level of communication from the ground up). For a template copy of the above, click below!
I've been developing versions of this methodology over the past five years and it works great from a dashboard perspective. Some things to consider when using this:
Everything is always important.
- Be as analytical about scoring and weighting as you can. Remember that nothing is important when everything is important. Keeping the ranking process as structured as possible lets you keep every comparison apples-to-apples. Each feature added to the release gets put through x rows * y columns worth of questions. And the more features you have, the more of a grind this process will be (more espresso breaks).
Paper collects dust.
- You can clone the attached spreadsheet and build an expanded sheet to log the answers to each question. This will help you remember why you scored a feature a particular way. If you'd like to add to the spreadsheet, go for it! I'd love to take a look (GitHub for spreadsheets, anyone?). The reality is, the longer you leave this up without acting on it, the more you're going to forget the context anyway. Bring in as many key stakeholders as you need (in as few visits as possible) when workshopping this, but make sure everyone always understands why they're doing it. The more people who buy in to what's happening, the better this process gets.
Measure your results.
- After all's said and done, now you get to measure the effectiveness of your product features. The calculations are representations of what your team believes is important, not exact figures. Use these results as a guide. If you're working agile and releasing in sprints, you might quickly see unexpected traction on a lower-scored feature. Don't ignore the shiny data; readjust prioritization when it makes sense to do so.