Assumption Mapping in Discovery Research
In September 2021, Mary gave a presentation on “Assumption Mapping in Discovery Research” at QRCA’s UX Flex Summit. Here’s a transcript of that presentation.
I’m going to start by sharing a scenario that might be familiar to some of you. It’s certainly something I’ve encountered a lot: a team knows, or has been told, that they need research. So they come to you as a researcher with the request, “Please do some research for us.” Or maybe a general statement like, “We want some customer feedback.” And importantly, there’s no specific question. There’s no learning goal. There’s no specific focus. It’s just this generic desire for research. Many in-house teams know they need something called research, but often don’t know how to get started or how research can add value, and we have to help them. But where do you start?
Here’s what I do. First of all, I thank them. I’ve definitely had colleagues tell me that they’re annoyed at still having to explain research after so many years. But I look at it as a chance to help a team that’s come asking for my help. So thanking them is step number one. Step two is to probe the assumptions the team has around customers’ needs and the value that will be delivered by your solution or product. And step three is to create a research plan to validate those assumptions.
In this talk I’m going to cover step two primarily and touch briefly on step three. Really, this is a much longer workshop condensed into 20 minutes, so it’s going to be high level. But I am going to cover the knowledge landscape and where assumptions are in play within that landscape. I’m going to introduce an animal menagerie of reasons why voicing assumptions is so hard. We’re going to cover some techniques for eliciting assumptions. And then finally touch on some considerations for creating a UX research plan based on reducing the uncertainty and the risks in those assumptions.
WHAT ARE ASSUMPTIONS?
So what are assumptions? An assumption is something that is accepted as true or certain to happen, without proof. Sometimes it’s just defined as a premise that you take for granted. But importantly, it’s a source of uncertainty, and therefore risk, in your project or your product. So that’s why assumptions are important, and to understand where they live, it’s helpful to think about the knowledge landscape.
KNOWLEDGE LANDSCAPE
One way to think about it is in three general buckets: things that are known for sure, things you think you know but have some uncertainty around, and things you don’t know.
Think You Know is actually a gradient that runs from things that you are pretty certain about to things that you’re very skeptical about. It also includes things that you think might be true for a subset, but not all, of people or situations, where the boundaries between when it’s true and when it’s not may be fuzzy.

And then that Don’t Know category is often pretty big. Those are the true unknowns, unexplored territory, where there’s no theory. It also includes those things where there’s so much contradictory information that you can’t tell what’s going on.
So this landscape is not necessarily to scale. But the point is that there are three buckets, and UX research can help provide clarity in all three of these areas: shrinking the unknowns, refining what you think you know, and potentially uncovering things to look out for in the known area.
So I want to update this model a little bit and add some nuance. The first one is within the area of Don’t Know. For the purposes of your project, there’s a slice of some size that’s relevant to the project at hand. So that’s the part that you want to focus on, and then leave mysteries of the universe still out there as unknowns.
For the things in that relevant part, and in the Think You Know bucket, people are pretty open to new learnings; they acknowledge some amount of uncertainty, and you can usually get them to open up about those things. Getting them documented is fairly straightforward, and definitely valuable. And I’ll talk about some techniques for that. But there is one other nuance in the known area.
You might be tempted to move on from the known slice, and with limited research resources, you shouldn’t spend a lot of time rehashing what is already known. But hidden inside, there’s a slice that is believed to be known, but is actually wrong. And those are the blind spots that the team has that can really impact a project.
Mark Twain has a quote that puts it very well: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” So acknowledged unknowns and blind spots should both charter research, but the path to uncovering those assumptions is very different. We’re going to talk about how to access assumptions in both of those areas. But before I get there, I’m going to talk about four reasons why voicing assumptions is so hard.
WHY VOICING ASSUMPTIONS IS HARD
The first one is that, just by definition, assumptions are something you take for granted. Maybe it’s all you know; maybe you assume that what you see is all there is. Asking people to voice their assumptions is like asking a fish to talk about the water. If that’s all you know, then you don’t have a language around it. So the takeaway here is, you shouldn’t expect people to be able to answer the direct question, “What are your assumptions?” It’s just hard by definition.
The second reason why voicing assumptions is hard is that we have this human drive to fit in. When the whole room is coalescing around one idea, it can be very threatening to be the lone voice of dissent. And that drive for conformity is real, especially in low-trust environments.
The third reason is that people have a strong tendency to comply with authority figures. In the corporate world, this shows up as following the Highest Paid Person’s Opinion, or HiPPO. And if your executive leader is going in one direction, it can be difficult, and maybe career-limiting, to be contradictory. Agreeableness is generally a good trait, but when it crosses into pandering, or into suppressing dissenting opinions or expressions of uncertainty, it can really threaten the project’s success.
The last reason it’s difficult to voice assumptions is overconfidence. Admitting you don’t know something can be perceived as a sign of weakness. The paradox here is that confidence is required to get something started and to keep it moving. But overconfidence is problematic. Projecting confidence can be a leadership or communication technique. But confident assertions don’t mean that leaders are not worried about something. So, as the researcher, you shouldn’t confuse that presentation style with certainty. So that’s four things to be aware of.
ELICITING ASSUMPTIONS
I want to move into some techniques for eliciting assumptions. First, you could do something called an assumptions workshop. If you do a Google search you’ll see that it is recommended in some lean communities, but often with the naive approach of asking that direct question, “What are your assumptions?”, which isn’t effective. Bringing people together can be good, especially because it increases diversity. But there are better questions to ask.
Another possibility is doing one-on-one conversations. That may seem less efficient, but there’s no peer pressure for conformance, and there’s no need for that public posturing of confidence. So it can be an effective way to get at what people are actually thinking, and then document the input anonymously.
The third one here is the 1-2-4-All technique, which can be done either in a workshop or be led by the user researcher. In the workshop, you would ask people to write individually first, then pair them up and have them share with their partner. Then, depending on how many people you have, put them in quads and do the same thing. And finally, do a share-out to the entire group.
Now, if you were doing it as the user experience researcher, you would document your reading of the key assumptions, your assessment of the things that would have a material impact if they’re wrong, in an email. Then you would send that around, ask people to critique it, gather their responses, review it with a few folks, and revise and redistribute.
The advantage of this technique, whether you do it in a workshop or over email, is that it’s good for introverts, it gives people time to reflect, rehearse, and refine their thinking, and it also bypasses the HiPPO, or cascading-opinion, problem that can occur sometimes.
The next two are related in that they both assume a fixed future state and then work backwards to the present. Fixing the future state and working backwards is cognitively easier than looking forward into an open-ended future. The first one is called the Pre-mortem, which is a post-mortem held in advance, when the project starts. The assumption there is that things went wrong and the project has failed. So, with that assumption, you then ask: What went wrong? How did this end in disaster? And which of our current assumptions contributed to the project not working out?
The next one is kind of the opposite in that it fixes a successful future. It’s called Remember the Future. You ask people to imagine that it’s a number of years in the future, and customers have been using the product successfully. And you ask the question: What will the system have done to make our customers successful? So those are the paired techniques of fixing the future.
And then the last one is the All-Knowing User, where you imagine that you have an all-knowing, helpful, insightful user just outside the door who will truthfully answer any question you throw at them. So what questions would you ask that all-knowing user?
PROMPTS TO ELICIT ASSUMPTIONS
So those are some contexts in which you can elicit assumptions, but how do you get at assumptions if you shouldn’t ask that direct question? Here are some better prompts. These are some basic ones. If you’re in a situation where you’re just starting out, it is really important to nail down who the product is for and what problem you’re solving. That’s the foundation you should really crystallize, and there are often many assumptions built in there. (Note: If the answer you get to the “who is it for?” question is “Everybody,” move beyond that and ask people to refine it a little bit.)
And then once you’ve got those basics and you want to refine a little more, here are some additional questions. “What worries you about the project?” is especially good if you’re doing one-on-one interviews, along with some version of “What’s keeping you up at night?” And the last one is, “What data can I bring that would change your mind?” That one really probes the strength of conviction people have and how entrenched they are in their beliefs.
MAPPING ASSUMPTIONS
So what do you do with all that? Hopefully, some combination of those techniques generated a host of assumptions. Eventually, you want to get them onto a two-by-two matrix. I use one that’s impact versus evidence, where the horizontal axis is the impact if we’re wrong, and the vertical axis is the strength of the evidence we already have.
So I said you want to end up here, but I don’t recommend starting here, especially in a workshop setting. Anytime you’re working on a two-by-two, it’s much more effective to focus on one dimension first, and then layer in the other dimension. In this case, that means starting with the horizontal impact dimension, and then, without changing the horizontal positions, moving things vertically to indicate the strength of evidence.
MAPPING EXAMPLES
So I’m going to imagine that your work generated this list of animal assumptions. You’ll notice some of them are real, and some of them are hypothetical: a friendly unicorn, a mosquito, a grizzly bear, and Bigfoot. If that was what I was working with and we started on the impact dimension, this is the order I would put them in – unicorns are generally friendly, especially to little girls, so that goes in the leftmost position, and Bigfoot, who is this big, scary unknown, goes in the rightmost position. And then if we overlay the strength of evidence, in this hypothetical example, we would move the mosquito and the bear up, and the unicorn and Bigfoot down.

So if this were our real project, we would be investigating Bigfoot pretty seriously because of the significance of his or her impact and the lack of evidence we have about his or her existence. On the other hand, we’d spend very little time on mosquitoes because we already know they exist, and, annoying as they may be, they’re fairly minor in terms of their impact.
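If it helps to see that sorting logic spelled out, here’s a minimal sketch in Python. The 1-to-5 scores and the midpoint threshold are invented for this illustration; in practice the placements come from the team’s judgment, not a formula.

```python
# Illustrative only: hypothetical 1-5 workshop ratings.
# impact = cost if the assumption is wrong;
# evidence = strength of what we already know.
assumptions = {
    "friendly unicorn": {"impact": 1, "evidence": 2},
    "mosquito":         {"impact": 2, "evidence": 5},
    "grizzly bear":     {"impact": 4, "evidence": 5},
    "Bigfoot":          {"impact": 5, "evidence": 1},
}

THRESHOLD = 3  # midpoint of the 1-5 scale

for name, scores in assumptions.items():
    high_impact = scores["impact"] >= THRESHOLD
    weak_evidence = scores["evidence"] < THRESHOLD
    if high_impact and weak_evidence:
        quadrant = "research first (high impact, weak evidence)"
    elif high_impact:
        quadrant = "monitor (high impact, strong evidence)"
    elif weak_evidence:
        quadrant = "research if time allows (low impact, weak evidence)"
    else:
        quadrant = "leave alone (low impact, strong evidence)"
    print(f"{name}: {quadrant}")
```

Running that puts Bigfoot in the research-first quadrant and the mosquito in the leave-alone quadrant, which matches where we just placed them on the map.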
This is a less fun, but more realistic, example from an upgrade manager project that I worked on. Our Bigfoot, in that bottom right, was the assumption that the root cause of customer dissatisfaction was breaking customizations. And what was less important to research, our mosquito in the upper left-hand corner, was the number of users we had. We knew that we had around 20 million users. But if the exact number were 22,000,905, it wouldn’t have made a material difference to our approach.
And then the bottom left corner, our unicorn, was regarding existing migrations: we knew a tool already existed, but there was uncertainty around how well it worked. And in the upper right corner, that’s the bear: the assumption that customers wanted to use two versions contemporaneously. It would have had a significant impact on our project if it were wrong, but we also had a lot of evidence that it was true, so it was not a good candidate for research.
USING THE MAP and PRIORITIZING

So what do you do? I alluded to this already: with the two-by-two, you really focus on that bottom right quadrant, things that have a significant impact with weak evidence. And you can scooch a little bit into the bottom left quadrant, because research can also bolster weak evidence.
Focusing on that bottom right quadrant does narrow your focus, but there may still be a lot of assumptions in there. So how do you further prioritize? There’s not a one-size-fits-all answer to that, but I do have some things for consideration.
The first one is momentum. It’s quite a legitimate strategy to do the easiest research first, especially if the team is new to research and you want to jump-start a focus on research. Doing the easiest first can help build that momentum.
The second one is ROI: considering the cost of being wrong versus the cost of doing the research itself. We all know that research comes at a cost of both time and dollars. So you really want to make sure that the value of the information is worth the cost of gathering it. (I’ll sketch this trade-off after these considerations.)
The third is entrenchment. What strength of evidence do you need to change people’s minds? If people are deeply entrenched, it takes a lot of effort to dig them out of that trench, so it may not be a good place to start with your research.
The fourth is the tolerance for imperfect information. In my real-world example, 20 million users was not exactly correct, but it was close enough for the team to move forward.
And then the last one is around absorption and timing. If the project train is already moving, sometimes imperfect information in a few weeks can be much better than “perfect” information in a few months. So consider what the team can absorb and on what timeframe.
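To make that ROI consideration a bit more concrete, here’s a rough back-of-the-envelope sketch. The expected-value framing and every number in it are hypothetical illustrations, not figures from the talk.

```python
# Illustrative expected-value check for researching one assumption.
# All figures are hypothetical.
cost_if_wrong = 500_000   # e.g., rework if the assumption turns out false
prob_wrong = 0.3          # the team's honest guess that it's wrong
research_cost = 25_000    # time and dollars to run the study

# A crude ceiling on the value of the information: the downside the
# research could help us avoid, weighed against what it costs to gather.
expected_loss_avoided = cost_if_wrong * prob_wrong

if expected_loss_avoided > research_cost:
    print(f"Worth researching: expected loss avoided "
          f"(${expected_loss_avoided:,.0f}) exceeds research cost "
          f"(${research_cost:,.0f}).")
else:
    print("Probably not worth it at this price.")
```

None of these numbers will ever be precise, but even rough estimates make the “is the information worth the cost?” conversation much easier to have.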
CONCLUSION and FURTHER READING
So that’s the whirlwind tour of assumption mapping. We talked about the knowledge landscape and where assumptions live, the animals that make voicing assumptions hard, and six techniques for eliciting assumptions. And finally, we covered crafting a research plan from a two-by-two matrix.
So with that, I’m gonna leave you with these resources:
- Gamestorming – Gray, Brown & Macanufo
- Innovation Games – Luke Hohmann
- Testing Business Ideas – Bland & Osterwalder
- Assumption Mapping Template on Mural
- A Stakeholder Interview Checklist – Kim Goodwin
- Assumptions and Assumptions: How to Track Them in the UX Design Process – NNG
- Research Strategy community (forming) – Chris Geison
Both Gamestorming and Innovation Games have great techniques for workshopping, and then there are some other articles and additional resources. There’s an assumption mapping template on Mural, and Kim Goodwin has a checklist of good questions to ask, along with some other resources. And Chris Geison is forming a community of practice to continue conversations like this and has some related workshop ideas.
Thank you.
My utmost thanks to Sean Murphy of SKMurphy for assistance with this presentation. If you would like to learn more about this topic, or like a repeat presentation of this material, please contact info@practical-insights.com