Random Dynamic Resources Ltd


Using MaxDiff to Prioritize Features at Google

The following was written by Chris Chapman, Quantitative User Experience Researcher, Chromebooks, and Eric Bahna, Product Manager, Android Auto, as published in the book Applied MaxDiff: A Practitioner’s Guide to Best-Worst Scaling (Sawtooth Software).

Introduction
At Google, MaxDiff has been used by dozens of teams in hundreds of projects. Common uses are to prioritize users’ interest in features, to assess the frequency of use cases, to measure the appeal of content, and to rate potential messages. We have taught an internal class to over 130 researchers, product managers, designers, and engineers who now use MaxDiff to prioritize users’ needs. Here we describe a less common application: prioritizing engineering work.

Prioritizing Feature Requests
A crucial activity in a technology firm is prioritizing feature requests (FRs). An FR may come from a customer, from an executive, or, often, from the engineering team itself. The product management (PM) team prioritizes FRs by importance and required effort, aiming to deliver the set of features with maximal value within a budget for cost and effort.

This does not mean one should deliver the most important FR (#1) first and then do FR #2 only if capacity remains. Instead, we want to maximize total value relative to effort (the “knapsack problem”). It may happen that FR #1 is very costly, whereas we could deliver FRs #3–#6 with lower total effort and higher total value to users than #1 alone.
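As a sketch of this tradeoff, a brute-force 0/1 knapsack over a tiny backlog shows how a bundle of cheap features can beat the single most valuable one. The feature names, value scores, and effort figures below are all hypothetical:

```python
from itertools import combinations

# Hypothetical feature requests: name -> (value from MaxDiff, effort in sprint-weeks).
features = {
    "FR1": (40, 10),
    "FR3": (15, 2),
    "FR4": (14, 2),
    "FR5": (13, 3),
    "FR6": (12, 2),
}

def best_bundle(features, budget):
    """Exhaustively solve the 0/1 knapsack: maximize total value within the effort budget."""
    best = (0, ())
    names = list(features)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            effort = sum(features[f][1] for f in combo)
            value = sum(features[f][0] for f in combo)
            if effort <= budget and value > best[0]:
                best = (value, combo)
    return best

value, bundle = best_bundle(features, budget=10)
print(value, sorted(bundle))  # 54 ['FR3', 'FR4', 'FR5', 'FR6'] -- beats FR1 alone (40)
```

With a budget of 10, FR1 alone fits and is worth 40, but FR3–FR6 together cost 9 and are worth 54, so the bundle wins.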

We hope you are thinking, “MaxDiff is perfect! Instead of saying all FRs are important, it forces a tradeoff. The results show the value of each FR. You can divide that value by the effort and stack-rank the result to deliver maximum value.” That is right, but there are two twists. First, we may not be able to ask customers, either because they are difficult to reach or because the features are confidential. We solve that (imperfectly) by asking PM, Sales, and Support team members to assess the FRs on behalf of their customers. That reveals the second twist: team members may disagree systematically because their roles give them differing insight into users’ needs.

Assessing Differences
We address differences by highlighting the disagreement and discussing it. Traditionally this is done in a large prioritization meeting. Unfortunately, the results of such a discussion may be dominated by the “HiPPO,” the highest paid person’s opinion (Kohavi & Kaushik, 2006). This is where MaxDiff is immensely valuable: we can use data instead of opinion to compare assessments across team members’ roles, such as PM vs. Sales.

Exhibit 12.1 – Team’s MaxDiff Rank

Exhibit 12.1 shows a simulated example in which 20 feature requests have been prioritized in a team meeting and also assessed separately in MaxDiff surveys answered by PMs and Sales engineers. The diagonal shows the current importance ranking in the engineering backlog (the result of the meeting), the average PM preference from MaxDiff is plotted as squares, and the average Sales preference as triangles. Reading across each row of the plot, we immediately see the areas of agreement and disagreement.

We see, for example, modest disagreement for item #1, which somehow ended up first on the current priority list even though neither PMs nor Sales believe it is most important. Further down are areas of larger disagreement: item #3 is very important to Sales (2nd place) but low for PM (13th place), while item #12 is near the bottom for Sales (17th place) but in second place for PM, differing sharply from the agreed backlog rank.
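The same comparison can be made numerically rather than visually. A minimal sketch (the ranks below are hypothetical, loosely echoing the items discussed above) flags items whose PM and Sales ranks differ by more than a threshold:

```python
# Hypothetical per-role ranks (1 = most important) for a few feature requests.
pm_rank    = {"FR1": 3, "FR3": 13, "FR12": 2}
sales_rank = {"FR1": 4, "FR3": 2,  "FR12": 17}

def disagreements(rank_a, rank_b, threshold=5):
    """Return (gap, item) pairs where the two roles' ranks differ by more than threshold."""
    return sorted(
        (abs(rank_a[item] - rank_b[item]), item)
        for item in rank_a
        if abs(rank_a[item] - rank_b[item]) > threshold
    )

print(disagreements(pm_rank, sales_rank))  # [(11, 'FR3'), (15, 'FR12')]
```

Items #3 and #12 are flagged for discussion, while #1 (a gap of only one rank) is not.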

These differences are used to reassess backlog priorities. For instance, item #12 might be important to PM but not to Sales because it would attract new customers rather than serve existing ones. Through such discussion, item #12 might be moved up due to its strategic importance, and item #2 might be moved up to #1. MaxDiff data let us have these conversations with less opinion, focusing on the areas where additional information and judgment are needed.

Considerations
We have seen two problems in this approach.  First, team members are busy and may need incentives to answer a survey.  When we demonstrate that answers are used to change the product roadmap, participation goes up and respondents express enthusiasm about the process.  Second, not every team member may have insight into every feature.  In this case, we use constructed MaxDiff to select items that are relevant for each respondent (Bahna and Chapman, 2018).
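The constructed-list idea can be sketched as a simple filter applied before generating each respondent’s MaxDiff sets. The screener data and item names here are hypothetical, and this is only the selection step, not the method’s full design:

```python
all_items = ["FR1", "FR2", "FR3", "FR4", "FR5"]

# Hypothetical screener responses: which items each respondent knows well enough to judge.
familiar = {
    "resp_1": {"FR1", "FR3", "FR4"},
    "resp_2": {"FR2", "FR4", "FR5"},
}

def constructed_list(resp_id):
    """Keep only the items this respondent can meaningfully trade off."""
    return [item for item in all_items if item in familiar[resp_id]]

print(constructed_list("resp_1"))  # ['FR1', 'FR3', 'FR4']
```

Each respondent’s MaxDiff sets are then drawn only from their constructed list, so no one is forced to rank features they know nothing about.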

At Google, MaxDiff has been valuable to many teams.  We find the level of interest in the method to be rising steadily, both for assessment with customers and assessment among team members.  We hope you find it as useful as we have.

For More Information on MaxDiff
MaxDiff is an advanced survey research technique for prioritizing and weighting the preference or importance of a list of items. Respondents typically see 3 to 5 items in each set and choose the “best” and “worst” items. The resulting scores can be scaled to sum to 100%.
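One common way to put MaxDiff results on that 0–100 scale is to exponentiate the logit-scaled utilities and normalize. This is a sketch; the raw utilities below are made up:

```python
import math

# Hypothetical logit-scaled MaxDiff utilities for four items.
utilities = {"A": 1.2, "B": 0.4, "C": -0.3, "D": -1.3}

# Exponentiate and normalize so the ratio-scaled scores sum to 100.
total = sum(math.exp(u) for u in utilities.values())
scores = {item: 100 * math.exp(u) / total for item, u in utilities.items()}

print({item: round(s, 1) for item, s in scores.items()})
```

The transformed scores preserve the ordering of the utilities while being interpretable as shares of preference.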

  • Introductory video on MaxDiff: https://youtu.be/Uj5QE9mp3NE
  • Introductory white paper on MaxDiff: https://www.sawtoothsoftware.com/download/techpap/How-Good-Is-Best-Worst-Scaling-2018.pdf

References

Bahna, E., and Chapman, C. (2018). Constructed, Augmented MaxDiff. In Sawtooth Software (ed.), Proceedings of the 2018 Sawtooth Software Conference. Orlando, FL, March 2018.
Kohavi, R., and Kaushik, A. (2006). “RE: Hippo?” At: https://exp-platform.com/Documents/HiPPOOrigin.txt. For context, see https://exp-platform.com/hippo/

 


July 15, 2020 By art.flanagan
