Introduction
You're bringing customers to your landing page, but they're not converting. You know you're missing out. Your team is small, resources are limited, and you're trying to build everything at once whilst struggling with fundraising. Sound familiar?
Here's the uncomfortable truth: 95% of companies collect user feedback, but only two-thirds actually do anything effective with it. Even more startling? Products that use qualitative user research are twice as likely to achieve product-market fit, and companies that prioritise customer feedback see 60% higher profit margins than those that don't.
The problem isn't that founders aren't collecting feedback—it's that they're using the wrong methods at the wrong times, and they're terrified of what they might hear.
In our 20+ years working with tech founders and VCs, we've seen brilliant teams make the same critical mistakes: collecting shallow data from thousands of customers whilst missing the deep insights from ten. Building features customers said they wanted, only to watch those features gather dust. Running surveys that confirm their biases rather than challenge their assumptions.
This isn't about collecting more feedback. It's about collecting the right feedback, the right way, at the right time.

Why Negative Feedback Is Gold (And Why You're Avoiding It)
Let's address the elephant in the room: you're afraid of negative feedback.
We get it. One founder recently told us: "It's soul-destroying to hear what customers actually think about your product." Whether you're a huge corporation or just starting out, human nature pushes us to avoid anything negative. We feel criticised. We feel like we're not good enough.
But here's what's actually soul-destroying: building in the dark. Shipping features that don't move the needle. Watching churn rates climb whilst you focus on positive NPS scores and pat yourself on the back.
The corporate world makes this worse. When bonuses are tied to positive metrics, there's a perverse incentive to avoid looking at the negative. We've seen teams deliberately exclude negative feedback from meetings because they're afraid of people pointing fingers and assigning blame.
Controversially, we recommend ignoring positive feedback altogether.
Those customers who are happy and content with what you're delivering? They're not telling you how to improve. They're not showing you where your growth is. The negative feedback—the complaints, the friction points, the abandoned journeys—that's where all the magic is.
If your feedback doesn't make you feel a little bit uncomfortable, it's probably not working.
The 9 User Feedback Methods: From Worst to Best
We've ranked nine of the most common feedback methods based on two decades in the field. Some might surprise you. Others will confirm what you've suspected but been too afraid to admit.
9. Surveys (The Lazy Trap)
The promise: Scalable, cheap data collection that's easy to analyse.
The reality: Shallow, biased results that confirm what you already believe.
Here's a shocking statistic: 80% of SaaS teams still rely on surveys as their number one feedback method. It's controversial to put surveys at the bottom, but the way they're normally done? They're practically useless.
Surveys typically work like this: an email questionnaire or an online form with multiple-choice and occasional open-ended questions. Why do they fail? Because they yield shallow, biased results. You're asking customers to predict their future behaviour, which is a terrible strategy.
The fundamental problem with surveys:
They tell you what is happening, but they rarely tell you why. You'll learn that 60% of customers find your checkout confusing, but you won't understand which part confuses them, why it's confusing, or what would make it clearer.
Even worse? People are terrible at predicting their own behaviour. They'll tell you they'd definitely use a feature, then never touch it. They'll claim they abandoned because of price, when the real issue was trust in your brand.
Survey questions are usually heavily biased towards what you already believe. It's quite difficult to discover anything genuinely new with surveys—you mostly just confirm what you set out to confirm, and you rarely uncover new options or new possibilities.
When surveys work: They're a confirmation tool rather than a discovery tool. Designed carefully for discovery, they can be really good—but that's rarely how they're run.
When they don't: When you're trying to understand complex behaviours, or when you need to know why customers are doing something.
Quality over quantity is everything. It's better to have deep conversations with 10 customers than to collect 2,000 shallow survey responses.
Verdict: Low effort to set up, but low efficiency in delivering actionable results.

8. Net Promoter Score / NPS (The Single Number Delusion)
The promise: One simple question that predicts customer loyalty and growth.
The reality: A meaningless number that tells you nothing about how to improve.
You know those annoying pop-ups that appear the moment you land on a website, asking you to rate your experience before you've even used it? That's NPS—and everyone hates it.
Net Promoter Score asks: "How likely are you to recommend us?" Then it gives you an optional comment box. But here's the fundamental problem: it reduces a complex user experience to one meaningless number.
It really doesn't explain why somebody gave that number. And most critically, it doesn't tell you which part of the journey is actually being rated. The whole journey might be fine, but maybe just one page was bad. Or you got the pop-up before you'd even started using the product.
What NPS doesn't tell you:
- Why they would or wouldn't recommend you
- What's blocking them from converting
- Which features are actually valuable
- Where your product experience breaks down
- What specific problems need fixing
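The score itself is trivial to compute—which is exactly the problem. The standard formula (promoters score 9–10, detractors 0–6) happily collapses wildly different response distributions into the same number. A minimal sketch:

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings: % promoters - % detractors.

    Promoters score 9-10, detractors 0-6; passives (7-8) only dilute
    the denominator. Result ranges from -100 to 100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Two very different customer bases, identical score:
print(nps([10, 9, 9, 10, 8, 7, 8, 7, 5, 3]))  # mostly content -> 20
print(nps([10] * 6 + [0] * 4))                # deeply polarised -> 20
```

Same number, completely different stories—and nothing in either one tells you what to fix.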
We've seen countless founders celebrate high NPS scores whilst their churn rate tells a completely different story. Senior teams focus on the positive NPS responses whilst deliberately ignoring the negative ones that would actually help them improve their roadmap.
When it works: As a high-level health check alongside other metrics. It's used across the board because people in the C-suite like numbers—it provides some security on how they're delivering.
When it doesn't: When you're trying to figure out what to build next, improve conversion rates, or understand why customers aren't using the features you've already built.
Verdict: Not difficult to set up, but the efficiency is really low, and you will not get any meaningful information from it.
7. Social Media Monitoring (The Vanity Metric)
The promise: Real-time, unfiltered feedback at scale from customers talking freely about your product.
The reality: Noise without context from people who may not even be your customers.
Social media monitoring means tracking brand mentions on Twitter/X, LinkedIn, and Reddit using social listening tools. It shows trends, but it lacks context, and you're getting opinions from people who may not even be your customers.
Think about it: when was the last time you complained about a brand on social media? Were you actually using their product properly? Were you even their target customer? Or did you just hear something and re-share it?
The challenge with social media feedback:
The way social media is fundamentally set up is around engagement. And what gets engagement? The extreme. The shocking. The attention-grabbing. That makes it a poor place to find out what your customers need, or how you can improve your product.
You might complain about a brand online and still use it because you actually like it—you just stumbled across one experience that bugged you. Or you might hate a brand you've never even used.
Social media gives you bite-sized, reduced-context reactions. It might be good to get a vibe and sense around news cycles, but it doesn't tell you about:
- Product-market fit
- Whether your product is easy to use
- How to improve the customer experience
- Which features are actually valuable
- Where your onboarding breaks down
We've seen big clients with huge screens constantly monitoring all social media—tracking sentiment, recent tweets, and recent comments. Interesting? Yes. Useful for informing the actual product? Not really.
When it works: For catching major PR crises, gauging general sentiment about brand changes, or if you're in e-commerce selling products where social influence matters.
When it doesn't: When you need to understand actual usability, improve conversion, fix your funnel, or make product decisions.
Verdict: Medium to high effort to set up (unless you use automated tools, then it's low). But efficiency is not great. If you have other options, don't make it your main strategy.

6. In-Platform Feedback (The Interruption Strategy)
The promise: Easy, frictionless feedback collection right where customers are using your product.
The reality: Passive collection that interrupts your customer experience and yields limited insights.
In-platform feedback means you either have a pop-up in your platform or app, or you have a place within the experience where customers can flag feedback. And yes, everyone absolutely loves pop-ups—nothing better than that.
When done well, it can be fantastic:
One example: an app where users could shake their phone, and it would take a screenshot of where they were in the app. They could draw on the screen, highlighting what they didn't like, and write underneath what problems they had. This was fantastic for quickly fixing bugs and understanding user frustrations.
When done poorly (which is most of the time):
Scenario 1: The random pop-up. You land on a website, and before you've even used the platform, a pop-up asks for feedback. How can you give meaningful feedback when you haven't experienced anything yet?
Scenario 2: The journey interrupter. You're booking a ticket, going step by step, racing to get it before someone else takes the seat—and suddenly a pop-up asking for feedback appears. This is the moment you want to throw your phone out of the window.
The fundamental problem:
This is passive feedback collection. You're sitting there waiting for people to give you feedback. And guess what? People hate doing this. Nobody wants to spend time giving feedback unless something goes really badly wrong.
We've spoken to founders who say: "We have this whole tab on our app saying 'Give us feedback,' and we have thousands of users, but nobody ever gives us feedback." That's exactly the problem—it's passive.
The mobile mess:
Be very careful if your main platform is on desktop, but some customers also use mobile web. We've seen platforms with:
- A pop-up on top
- A massive chat icon at the bottom
- A floating chat button
- A "give us feedback" floating item
- Another floating element
You couldn't see what you were looking for because it was overlapping everything.
When it works: After someone completes a journey or task. "Did this work? Give feedback." That's fine—it's part of the completed experience, not interrupting what they're here to do.
When it doesn't: When it interrupts the customer journey, when it's passive (waiting for people to come to you), or when you're trying to understand deeper motivations and behaviours.
Verdict: Quite easy to integrate (one line of code). Efficiency is so-so—it depends on how it's done. It's not our favourite way of collecting user feedback.
5. Focus Groups (The Loudest Voice Wins)
The promise: Rich, qualitative feedback from multiple customers at once—sounds efficient and fun.
The reality: Biased opinions dominated by group dynamics, hierarchy, and the loudest voices in the room.
Focus groups mean gathering everyone in a room for a slightly guided discussion to get people's views and opinions. They're very sociable, with nice treats and expensed food to share with everyone.
They work great in Hollywood for TV shows, and in marketing for figuring out if an idea, concept, or branding perception is good. But for digital products? Not so much.
The problem with group dynamics:
You've got all the psychology playing out. If people know each other, or if you're in an organisation with a hierarchy, not everyone is going to be as honest. And even without the hierarchy, it could just be the loudest voice that wins, dominating the viewpoint for everyone else.
Real-world example:
We worked with a big corporation that wanted us to talk to everyone at the same time—the head of the team plus five to seven team members. We explained it wasn't the best approach, but we still did it to show them the difference.
When we were in the room with the whole team, nobody else was talking—just the head of the team. He was answering all the questions, maybe sometimes asking team members small things, but he was basically running the whole show.
When we ran one-on-one sessions with each person, we got a completely different picture of what needed to be delivered. When we presented the results, the head of the team was shocked: "They never told me this. I didn't know about this."
The pattern problem:
In user research, you want to search for patterns. You want to see the same problem repeated by several people. This is very easily done when you do it one-on-one. It's very difficult in focus groups because:
- People talk over each other
- You don't get the same quality of information from each person
- You might only get answers from three people
- A stronger personality can completely take over the conversation
- You miss out on data quantity and won't see the patterns
Exception to the rule:
Focus groups really work for one thing: building products for kids. Kids very often do things together and play with the same toys or games. So if you're building a product for kids, go ahead and do focus groups.
When it works: Early-stage concept validation, brainstorming new directions, marketing and branding perception research, or products for children.
When it doesn't: When you need accurate insights about actual behaviour, when you're trying to improve an existing product, or when you need to identify patterns across different user experiences.
Verdict: Medium to low efficiency. Unless you're building for kids, we don't suggest using them.

4. Reviews and Ratings (The Surface-Level Scanner)
The promise: Direct customer feedback from App Store reviews, marketplace ratings, and Trustpilot comments.
The reality: A great starting point for bigger companies with volume, but surface-level without enough data.
Reviews and ratings mean collecting user feedback from App Store reviews, marketplace ratings, maybe Trustpilot, and then analysing what the overall situation is with your application.
When it works well:
It's a great starting point, especially for bigger companies that have a bigger audience and more volume. With enough data, you can get a feel and a vibe of what challenges people are talking about.
The competitive intelligence angle:
Here's a brilliant use case: if you have a competitor doing really well in the market and nailing it, look at what people are saying about their service. Why do they love it? What's good about it? This can bring out themes you might want to consider if you're having challenges on the other side.
If you're building something in FinTech and you know who your competitors are, you can get so much information from what's working or not working for them.
The limitations:
Reviews are often quite concise unless it's a really big problem and someone properly lets rip with a few paragraphs. They're good for identifying themes that people are talking about, but you're not necessarily looking for the good reviews. You want the bad stuff—that's where you understand what to fix.
The data volume requirement:
The caveat is you really need a lot of data. You need to have a lot of comments. It's not going to fly with just two or three comments. It provides kicking-off points for areas to dig into, but it's supporting information rather than the full picture.
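When you do have volume, theme-spotting doesn't need fancy tooling to start. Here's a minimal sketch of keyword-based theme counting over exported review text—the theme names and keywords are illustrative and would need tuning to your own product:

```python
from collections import Counter

# Illustrative themes; replace the keywords with ones from your own domain.
THEMES = {
    "checkout": ["checkout", "payment", "card"],
    "onboarding": ["signup", "sign up", "register"],
    "performance": ["slow", "lag", "crash", "freeze"],
}

def count_themes(reviews):
    """Count how many reviews mention each theme at least once."""
    counts = Counter()
    for review in reviews:
        text = review.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

reviews = [
    "App keeps crashing at checkout",
    "Signup was painless but the app is slow",
    "Love it, but my payment failed twice",
]
print(count_themes(reviews).most_common())
```

This only surfaces themes worth digging into—the why still has to come from deeper methods further up this list.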
When it works: As a starting point when you've got enough volume, for competitive intelligence, for identifying recurring themes that need deeper investigation.
When it doesn't: When you don't have enough review volume, when you need to understand why something is a problem (not just that it is a problem), or when you need actionable insights for specific product improvements.
Verdict: We're getting warmer—this is number four, so it's not too bad. You can get some good data from this, but only if you have sufficient volume.
3. Customer Support Tickets (The Pain Detector)
The promise: Direct feedback about specific problems from committed customers who are already using your platform.
The reality: When used properly, this is an absolute goldmine of actionable insights.
Customer support tickets mean analysing recurring support issues and complaints for patterns. If you've got recurring issues on your platform and people are asking for support and raising tickets, it's a great indicator: here's an area we need to fix, quick and sharp.
Why it's better than most methods:
This is direct feedback about a specific area in your experience. It's from committed customers who are already using the platform. They're providing real challenges around actual problems. It's meaty information that you can act on.
The challenge:
You need to have some kind of system for collecting tickets. This is quite often forgotten about and not really considered as a source of insights.
Real-world examples:
Example 1: The bathroom disaster. We worked with a brand creating an application to support customers through the process of creating a new bathroom. When we sat with their customer support team, the insights were incredible.
The process involved sizing up the room, ordering all the stuff, getting tradespeople to fit it—a whole series of steps. What was happening? One item would be missing, which would delay the job. Because tradespeople weren't directly related to the business, they'd have to be rebooked even further out. Entire bathroom orders were being delayed because they were missing yellow maintenance bags worth 60p.
The customer support person said: "Yeah, this happens a lot." Nobody else in the organisation knew about it. With that understanding: "Oh shit, okay, let's just order some more bags manually, outside of the order." A couple of happy customers, problem solved.
Example 2: T-Mobile's transformation. John Legere, who became CEO of T-Mobile US in 2012, would dial into customer complaint calls a couple of times a week—some say with a glass of wine in the evening instead of watching TV. He would just listen to what people were complaining about.
The results were absolutely insane. Revenue doubled during his tenure, the stock went through the roof, and the overall customer experience of T-Mobile increased exponentially.
Who would have thought? You listen to your customers, and it helps your business.
Why customer support is forgotten:
Customer support people are often completely forgotten in companies, but they could be an absolute source of gold. Calling is very expensive for companies, so everybody wants to stay away from that. But you can set it up easily with Zendesk to handle tickets, then read those tickets. It's an amazing source of information for what needs to be fixed.
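Once tickets are being collected, the pattern-spotting that surfaced the bathroom problem can start as something very simple: count recurring categories and flag anything that keeps coming back. A minimal sketch—the `category` field and the threshold are assumptions for illustration, not any particular helpdesk's schema:

```python
from collections import Counter

def recurring_issues(tickets, min_count=3):
    """Return ticket categories that recur at least `min_count` times.

    `tickets` is a list of dicts with a `category` field, e.g. tagged
    by your support team as each ticket is closed.
    """
    counts = Counter(t["category"] for t in tickets)
    return [(cat, n) for cat, n in counts.most_common() if n >= min_count]

tickets = [
    {"category": "missing item"}, {"category": "missing item"},
    {"category": "refund"}, {"category": "login"},
    {"category": "missing item"}, {"category": "missing item"},
]
print(recurring_issues(tickets))  # -> [('missing item', 4)]
```

Anything that clears the threshold is your "60p yellow bag": a small, specific, fixable problem hiding in plain sight.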
When it works: When you have a system for collecting tickets, when you analyse them for patterns, when you actually act on what you discover. It's high efficiency and quality because it's actionable—there are things you can fix.
When it doesn't: When you don't have a ticket system, when support issues aren't being tracked or analysed, or when support is siloed away from product teams.
Verdict: Medium effort to set up (depends on your system), but the efficiency of the data you get is very, very high. Very good quality, potentially the most actionable insights outside of our top two methods.
2 & 1 (Joint): In-Person Interviews AND Usability Testing
These are the gold standard. They're joint number one because they're two sides of the same coin.
In-person interviews help you improve product-market fit.
Usability testing helps you improve the actual ease of use of your platform.
But when you run usability testing, you can also discover things that could impact product-market fit. And when you do in-person interviews, you can uncover usability issues. They work together beautifully.

2. In-Person Interviews (The Insight Generator)
The promise: Deep understanding of customer motivations, pain points, and decision-making.
The reality: When done correctly, this is where breakthrough insights live. This is one of the absolute best ways to collect user feedback.
In-person interviews (or video calls) are one-on-one conversations with open-ended questions and follow-ups. This is where you really get deep insights into user behaviour, motivation, the context of the world they're living in, how they're using your tool, when they're going to use it, and their beliefs around it all.
It's a really magical thing. You're entering into people's worlds and really understanding what's happening. One of us ended up speaking to 37 vets over in the States—fascinating conversations that revealed the real insights.
The fundamental question: How many people is enough?
This comes up a lot. People worry: "If I only interview a couple of people, is that information actually valuable?"
This is the difference between qualitative and quantitative data:
Quantitative data: Analytics, tracking on the website, page visits, where people get to in a journey—what is happening.
Qualitative data: The intangible stuff that's really hard to quantify—the insights that tell you why things are happening.
Research from the Nielsen Norman Group shows five is the magic number. When you go above that, you start to just repeat and rehear the themes you've already heard. Five is where you get the patterns.
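The number behind that claim comes from the Nielsen/Landauer model: if each user reveals, on average, a fraction L of the usability problems (commonly cited as L ≈ 0.31), then n users reveal 1 − (1 − L)^n of them. A quick sketch of the diminishing returns:

```python
def problems_found(n_users, per_user_rate=0.31):
    """Expected share of usability problems found by n users.

    Nielsen/Landauer model: P(n) = 1 - (1 - L)^n, with the commonly
    cited average L = 0.31 per user. Your own L will vary by product.
    """
    return 1 - (1 - per_user_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(n, round(problems_found(n), 2))
# Five users already uncover roughly 84%; each user after that adds less.
```

By user five you've found most of what there is to find; the better use of extra budget is running another round after you've fixed things.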
What about multiple user types?
If you have a platform with several types of people—several personas or ICPs—should you interview five of each?
It depends on where you are with the product and its maturity. If you're early days, just kicking off, and there's not a huge amount of clarity on those different target personas, you could be overdoing it with 15 interviews.
Start with five, but make sure you get a selection of all those types in there. That will probably be enough to discover actionable insights and improve the product and experience. Then you can start digging into the detail as you mature.
User feedback is not a one-off event. This is an ongoing process. You need to keep speaking to people constantly. As your platform grows, as your proposition evolves, as your offering changes, you should constantly be doing these interviews and tracking what's going on. Are you still meeting their needs?
Phone or video?
Always choose video or in-person when possible. Here's why:
You're not just collecting what people say—you're observing body language, facial expressions, and emotional reactions. Sometimes people will say one thing, but their face tells you the complete opposite. This is incredibly valuable data.
Some people are really comfortable talking; others are not. Some people express themselves very emotionally. It's hard to pick that up even over a screen, but you're not going to get it at all if you're just doing it over the phone.
The most common mistake:
Leading questions that confirm what you already believe.
We recently worked with a fantastic founder from America. After we finished, she said: "Now I understand what you meant by saying we can get any answers we want if we lead our customers."
Amen to that.
You can bias the results in countless ways:
- Asking about future behaviour instead of past behaviour ("Would you use this?" vs. "Tell me about the last time you...")
- Proposing solutions instead of understanding problems
- Seeking validation instead of challenging assumptions
- Talking more than listening
When it works: When you need to understand customer motivations, improve product-market fit, validate your ICP, decide what to build next, and understand the "why" behind customer behaviour.
When it doesn't: When you need to understand how people actually use your product in real-time (that's where usability testing comes in).
Verdict: High effort (organising one-on-ones, knowing what questions to ask), but absolutely one of the best ways to collect user feedback. This is where the magic is.

1. Usability Testing (The Reality Check)
The promise: Watch real customers use your product in real-time to identify friction, confusion, and unspoken needs.
The reality: Combined with in-person interviews, this is the ultimate feedback method. This is the other side of the coin.
Usability testing means we either have a product or a prototype that is interactive, and we have a customer who interacts with it as if we are not there, as if they were just using it on a normal day.
Our task—and this is critical—is to run the session as a fly on the wall. We ask them to perform specific tasks, but we don't tell them how to get there.
The golden rule:
You cannot say "just push this button, scroll down, take a left, use that bit." The moment you guide them, you've destroyed the value of the test. This is really easy to fall into—"It's really easy, you just do this, do that, follow that"—but that completely defeats the purpose.
What usability testing reveals:
This reveals the gap between how customers actually behave versus how they say they behave. It's a fantastic way to understand:
- Whether navigation makes sense
- If button labels are clear
- If the order of screens is logical
- Whether headers and descriptions make sense
- Where friction points and confusion exist
- Unspoken user needs that emerge as they interact
The Hollywood disaster:
One of Hollywood's biggest platform launches failed spectacularly because they relied heavily on focus groups during development. The feedback seemed unanimously positive in group settings. But when customers actually tried to use the platform independently? The experience fell apart.
Watching someone use your product reveals truths that asking them about it never will.
Even preparation reveals problems:
Even just preparing for usability testing sessions can reveal problems: "Oh yeah, our customer journey is 20 pages long. Why? No wonder our customers aren't getting through it."
The setup:
The majority of companies don't do usability testing because it's quite difficult. You need to know:
- What questions to ask
- How to prepare the platform
- How to prepare the script
- In what order
- What to test
It's a lot of things you need to know. But it identifies friction points, confusion, and unspoken user needs that people actually reveal as they're using the product.
What makes it powerful:
You're observing actual behaviour in real-time. Not what they say they do. Not what they think they would do. What they actually do when faced with your product.
When it works: For improving conversion, reducing churn at specific steps, understanding why customers aren't discovering features, optimising the overall user experience, validating that what you've built actually works the way you think it does.
When it doesn't: When you need to understand motivations or decision-making context (that's what interviews are for), or when you don't have a working product or prototype yet.
Verdict: High effort (requires proper setup, script preparation, knowing how to observe without leading), but absolutely one of the two best ways to collect user feedback. Combined with in-person interviews, this is the gold standard.

The Gold Standard Combination
Use in-person interviews to understand what to build and why.
Use usability testing to ensure what you've built actually works the way you think it does.
Together, these two methods give you:
- Deep understanding of customer needs and motivations
- Real-time observation of actual behaviour
- Identification of friction points and confusion
- Validation of your product decisions
- Actionable insights for immediate improvement
This is how you move from "doing a decent job" to "doing a great job."
Your Feedback Collection Checklist
Here's your practical framework for collecting feedback that actually drives improvement:
Before You Start
☐ Define your research goals
- What specific decisions are you trying to inform?
- What don't you know that you need to know?
- How will you act on what you learn?
☐ Identify your ideal participants
- Who is your actual ICP?
- Are you talking to the right people or just available people?
- Do they represent your target market?
☐ Choose the right method
- Need to understand motivations and pain points? → In-person interviews
- Need to identify usability issues? → Usability testing
- Need to find recurring pain points? → Customer support tickets
- Need to measure impact of changes? → Reviews and ratings (with volume)
- Need to validate hypotheses? → Surveys (after deeper research)
During Collection
☐ Ask the right questions
- Focus on past behaviour, not future intentions
- Ask "tell me about the last time..." not "would you..."
- Listen more than you talk
- Don't lead or propose solutions
☐ Observe behaviour
- Watch what people do, not just what they say
- Pay attention to body language and facial expressions
- Note where they hesitate or seem confused
- Let awkward silences happen—they often precede insights
☐ Dig deeper
- Ask "why" at least five times
- Explore the context around behaviours
- Understand the problem, not just the symptom
After Collection
☐ Analyse objectively
- Look for patterns across multiple customers
- Don't cherry-pick responses that confirm your beliefs
- Pay special attention to negative feedback
- Identify the recurring friction points
☐ Prioritise ruthlessly
- Which issues affect the most customers?
- Which problems are blocking conversion or causing churn?
- What's the impact versus effort?
☐ Act on insights
- Feedback without action is expensive procrastination
- Test changes with the same customers who raised issues
- Measure the impact of your improvements
- Iterate based on results
What To Do Right Now
Here's your challenge this week: sit down with one of your customers and watch them use your product for 30 minutes.
Don't help. Don't explain. Just observe.
See what happens. See if they stumble. See what confuses them. See what they skip over. See where they hesitate.
We guarantee you'll discover something that will change how you build your product.
Because here's the truth: you're probably not lacking feedback. You're lacking the right feedback, collected the right way, with the willingness to act on what makes you uncomfortable.
Final Thoughts: The Uncomfortable Truth
If your customer feedback isn't making you a little bit uncomfortable, you're doing it wrong.
The positive responses? The NPS promoters? The people who love everything you've built? They're not showing you where to grow.
Growth lives in the negative feedback you're avoiding. In the customers who abandoned your onboarding. In the friction points you haven't fixed because you're too busy building new features. In the confusion you'd see if you just watched someone actually use what you've built.
Yes, it's soul-destroying to hear what customers really think about your product. But you know what's more soul-destroying? Building in the dark. Burning through your limited resources on features that don't move the needle. Struggling with fundraising because your metrics are stuck.
The founders who are "doing a great job" instead of just a "decent job"? They're the ones who've embraced the gold standard methods—in-person interviews and usability testing. They're the ones who've stopped avoiding negative feedback. They're the ones who've stopped trying to build everything at once and started building the right things.
They're not guessing what customers want. They're watching. They're listening. They're learning.
Stop guessing. Start watching. Start listening. Start building great tech.



