
How to Gather User Insight on Your MVP

Feb 4, 2022 2:55:03 PM

User feedback and insight give you the data you need to make decisions about your product. If no one’s really interested in what you’re offering, there’s little point spending the work-hours and dollars to build a production version. If what you think of as a peripheral feature is the only part anyone wants to talk about, maybe you’re not in the business you thought you were.

User insight also gives you the information you need to make the right decisions about how to build your product. Experienced developers will be able to match use cases and customer needs to development tools and structures, but they can’t do that if they don’t know what your users want.

In short, user insight is growth fuel like no other. In this post, we’ll talk about why it matters in detail, how to get it, and what to watch out for when you’re collecting it.

Why customer insight into your MVP matters

MVPs are Minimum Viable Products. They're built fast, often seeking to get to market before potential competitors, attract initial customers and validate their core ideas as early as possible, or convince investors to buy in. Because of this, MVP development often focuses on building as quickly as possible, even if the result is a tangled mess of code created on a Frankenstack of readily available technologies. Some businesses build an MVP before anyone at the company knows how to code. Production isn't figured out, let alone future development, new features, support, customer success, or any other future needs. Code may be spread across multiple platforms, uncommented and unreviewed.

In addition, the MVP is often built almost blind. Even when the aim of the process isn't to ascertain whether there's a market for the product, there's often very little information about how it will be received. You can see proof of this in the number of tech products that went to market only for the market to tell them their real product was something else: Notion was envisaged as a no-code development platform, and Groupon was planned as a nonprofit, launching as little more than a WordPress website and a few PDFs.


There’s a range of MVP strategies — high-fidelity builds that accurately mirror the fully-developed product, MacGyvered string-and-paper low-fidelity prototypes, and many other approaches. But they’re all launched before their creators are sure where the target is. They need to be steered, and they need customer feedback to do that.

There's a point where an MVP has been validated and a business is ready to start pouring money into development, marketing, sales, and support. It is also the point where decisions have to be made, some of them with decades-long repercussions. Like most software companies, we spend a lot of time with people who made decisions about databases, codebases, and other technologies: decisions they're now either outgrowing or trying to rectify.

Customer feedback lets businesses make better decisions about everything from the order in which they develop features to how they structure their development process, and even the stack they build with. This is how you make decisions you'll still be celebrating, and building on, down the line.

It also lets businesses figure out if their product has a market at all. The feedback from the first few customers is often about a product that bears little resemblance to the fully-fledged version. But in terms of the service it delivers to customers, it should scratch the same itch. Customer feedback on MVPs lets you figure out if you should be building a solution to this problem. ‘How do I find the customers first,’ asks Noah Kagan, ‘before I really even build anything?’

How should you gather user information?

There are two basic approaches to gathering user information: asking users, and watching them. Both have their problems as well as their benefits. We'll start with the first.

User interviews

One of the simplest ways to find out how someone feels about something is to ask them. If you want to know what your users think of your product, you could conduct user interviews. The benefit is that you get considered, in-depth answers. The drawbacks are that you’re optimizing for depth, not breadth. You’ll find out everything about what ten or fifty people think, not a couple of basic facts about how hundreds or thousands of users feel. You can’t generate statistical data this way.

At the same time, interviews are a good choice when you're talking to your ideal customer profile (ICP), or to your most profitable and desirable customers. 'With interviews, the depth of the understanding is more important than the quantity,' says Étienne Garbugli. 'Don't shy away from face-to-face interviews,' Garbugli counsels: 'relationships aren't built through surveys.'

Reviews and surveys

Users can leave reviews on third-party review sites, and you can also ask for feedback directly, through open-ended review requests and surveys. The biggest problem with surveys is that people don't really want to fill them out and often do so carelessly. The biggest problem with reviews is that they tend to attract two self-selecting groups: those with an ax to grind and those who love your product as it is. The users you really need to hear from are those who like the product fine, but not well enough to leave a review, or to pay for it. (This is why it's common advice to read the 3- and 4-star Yelp reviews, not the raves and rants of the 5- and 1-star ones.)

In-app feedback

In-app feedback can be an effective way of getting statistically usable data about features or aspects of your app. You can solicit it with customer feedback widgets, or short in-app surveys. These tend to get high response volumes, but they’re often a little vague. One of the most useful types of feedback is context-specific, gathering data on how customers relate to a specific stage of a process or a specific feature right when they’re working with it.
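To make context-specific feedback usable, it helps to tag each response with the feature or step the user was in when they gave it. The sketch below shows one minimal way to do that on the analysis side; the data shape, field names, and scores are all illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict

# Hypothetical in-app feedback events, each tagged with the feature the user
# was working with when the widget appeared (scores on a 1-5 scale).
feedback_events = [
    {"feature": "export", "score": 2, "comment": "Took too long"},
    {"feature": "export", "score": 3, "comment": ""},
    {"feature": "onboarding", "score": 5, "comment": "Very smooth"},
]

def average_score_by_feature(events):
    """Group scores by the feature they were attached to and average them."""
    by_feature = defaultdict(list)
    for event in events:
        by_feature[event["feature"]].append(event["score"])
    return {feature: sum(scores) / len(scores) for feature, scores in by_feature.items()}

print(average_score_by_feature(feedback_events))
# {'export': 2.5, 'onboarding': 5.0}
```

Because every response carries its context, a low average immediately points at a specific stage of the product rather than at the app as a whole.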

User testing

User testing is one of the most common approaches to gathering user data, because it’s one of the most effective and versatile. You can do it even when you have little more than an idea: user testing can start with pencil and paper if you do it right, and if you have the opportunity it definitely should.

User testing works best when the user sample is representative of your actual or desired users, and tests are impartial so users aren’t encouraged to give the ‘right’ answer. Testing sessions should be documented and/or recorded to be reviewed later, and as far as possible the format should stay the same. As time goes on and you user-test new features, you can build a data set of user tests that can be used to assess and plan future testing and product decisions.
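Keeping the format the same is easier if every session is recorded in one consistent shape. Below is a minimal sketch of such a record; the field names and example data are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TestSession:
    """One user-testing session, kept in a consistent shape so sessions
    recorded months apart can still be compared side by side."""
    participant_id: str
    session_date: date
    tasks: list        # tasks the participant was asked to complete
    completed: list    # which of those tasks they finished unaided
    notes: str = ""
    recording_url: str = ""

    def completion_rate(self) -> float:
        """Fraction of assigned tasks the participant completed unaided."""
        return len(self.completed) / len(self.tasks) if self.tasks else 0.0

session = TestSession(
    participant_id="p-014",
    session_date=date(2022, 2, 4),
    tasks=["sign up", "create project", "invite teammate"],
    completed=["sign up", "create project"],
)
print(round(session.completion_rate(), 2))  # 0.67
```

With records like this accumulating over time, you can compare completion rates for the same tasks across releases instead of relying on impressions.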

User forums

User forums add a new dimension to the efficacy of reviews and surveys. Users can give their feedback, but they can also upvote and comment on one another’s feedback. In theory, the most upvoted reviews can be assumed to contain the most universally applicable feedback.
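Surfacing the most-endorsed feedback is then just a matter of ranking posts by votes. A minimal sketch, assuming a simple post structure with a title and an upvote count:

```python
# Hypothetical forum posts; the data shape is an assumption for illustration.
posts = [
    {"title": "Add dark mode", "upvotes": 42},
    {"title": "Fix export crash", "upvotes": 118},
    {"title": "More keyboard shortcuts", "upvotes": 7},
]

def top_feedback(posts, n=2):
    """Return the titles of the n most-upvoted posts, highest first."""
    ranked = sorted(posts, key=lambda post: post["upvotes"], reverse=True)
    return [post["title"] for post in ranked[:n]]

print(top_feedback(posts))  # ['Fix export crash', 'Add dark mode']
```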

Why all these can be misleading

People who know they are being watched behave differently. They are less likely to display socially undesirable reactions such as anger or frustration, or to simply give up, and more likely to engage positively with whatever material they're offered.

Interviews are social situations, and social rules apply. Surveys and reviews are subject to similar rules, which drive us all toward normative behaviors — actions we think other people want. Under the pressure to present the best possible version of themselves, to put it bluntly, people lie; worse, they dissemble, muddying the waters.

Taking a step back, which people are we looking at here? The users who show up to user tests or post in user forums care more about your product than most people do. They’re more likely to have positive feedback, more likely to have already experienced value from the product, and more motivated and concerned — a far cry from the average user. (Of course, there are also some respondents who feel the opposite…)

How do you reach the average user?

Ideally, you want an unvarnished view of how people really interact with your product's features, and for that there is no substitute for watching them. Even when people know they're being watched, they soon forget and return to focusing on their own goals, and you can build rich, deep, broad data sets by recording and analyzing user sessions.


In-app analytics tools like Kissmetrics, Amplitude, FullStory, and Localytics let you watch exactly what users do. They give you answers to questions you'd never have thought to ask, and clarify user behavior in sometimes surprising ways.


In particular, they let you focus on your goals when designing your analytics and analyzing the data. For example, most games seek to maximize revenue, so analytics typically aims to find behaviors that predict in-app purchases, or actions that help grow the user base. B2B applications will have very different goals, but they can track related behaviors using in-app analytics too.

The shortcoming of this approach is that you don’t get the depth of coherent feedback from an individual that interviews give you. But it is the only approach that gives you a true over-the-shoulder view of what users do, allowing you to troubleshoot UI and structure, identify drop-off points and stumbling blocks, and spot red flags. For example, many SaaS companies are concerned with reducing churn — lost subscribers. Analytics can be used to identify behaviors that commonly precede cancellation and then target those users to retain them.
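A simple version of that churn analysis is to ask, for each tracked behavior, what fraction of the users who performed it later cancelled. The sketch below shows the idea; the event names and data shape are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical user records: whether each user churned, and which tracked
# events appeared in their session history.
users = [
    {"id": 1, "churned": True,  "events": {"opened_billing_page", "removed_teammate"}},
    {"id": 2, "churned": True,  "events": {"opened_billing_page"}},
    {"id": 3, "churned": False, "events": {"created_report", "invited_teammate"}},
    {"id": 4, "churned": False, "events": {"created_report", "opened_billing_page"}},
]

def churn_rate_by_event(users):
    """For each event, the fraction of users who performed it and later churned."""
    counts = defaultdict(lambda: [0, 0])  # event -> [churned, total]
    for user in users:
        for event in user["events"]:
            counts[event][1] += 1
            if user["churned"]:
                counts[event][0] += 1
    return {event: churned / total for event, (churned, total) in counts.items()}

rates = churn_rate_by_event(users)
print(rates["opened_billing_page"])  # 2 of the 3 users who opened billing churned
```

Events whose churn rate is far above the overall rate are candidates for a retention intervention, though at real scale you'd want enough users behind each number for it to be statistically meaningful.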

Open or closed?

Let’s talk about open and closed questions, and about the feedback they generate.

Quantitative feedback concerns how much or how many. You get it in response to closed questions, and that's useful: clear-cut answers tell you what needs to be dialed up, dialed back, or dialed in. Examples might be 'how many times a week do you log in?' or 'which part of the product is most valuable to you?' These are questions with a limited set of possible answers.

Qualitative feedback concerns how good or how bad. You get it when you ask open-ended questions, allowing respondents to express feelings and thoughts that can't necessarily be handled like a column of figures. Examples might be 'how do you think the product could suit your needs better?' or 'what features do you think are missing?' You might get anything back, from a couple of words to an essay, and you can't predict in advance what the answers will refer to or consist of.
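The practical difference shows up when you process the responses: closed answers can be tallied directly, while open answers have to be collected for human (or later automated) review. A minimal sketch, with a hypothetical survey shape assumed for illustration:

```python
from collections import Counter

# Hypothetical survey responses mixing one closed question (logins per week)
# with one open question (missing features).
responses = [
    {"logins_per_week": 5, "missing_features": "Better search would help a lot."},
    {"logins_per_week": 1, "missing_features": "Honestly not sure yet."},
    {"logins_per_week": 5, "missing_features": "Search, and a dark mode."},
]

# Quantitative: closed answers tally into a distribution you can act on.
login_counts = Counter(r["logins_per_week"] for r in responses)

# Qualitative: open answers are gathered for review, not averaged.
open_answers = [r["missing_features"] for r in responses]

print(login_counts.most_common(1))  # [(5, 2)]
```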

Ideally, you want both, but qualitative feedback is particularly important because it can give you the answers to questions you didn't think to ask.

Key takeaways


  • User feedback on MVPs is the basis for every decision from ‘should we build this at all?’ to ‘which database should we use?’ You can’t create successful products without it.
  • Don’t be satisfied with a single avenue of exploration. Look for in-depth commentary via customer interviews, breadth via surveys, and a true over-the-shoulder view from analytics.
  • Remember to account for users’ desire to present themselves favorably and tell you what they think you want to hear. Trust, but verify.
  • There comes a time when you're no longer collecting user feedback about an MVP, but you should still be collecting user feedback; that process never stops. The sooner you start, and the more organized and serious you are about it, the better the data set you'll have for future decisions.


Written by Brian Geary

Brian is a true believer in the Agile process. He often assists the development process by performing the product owner role. In addition to his technical background, he is an experienced account manager with a background in design and marketing.
