You're bringing customers to your landing page, analyzing dashboards, tracking engagement metrics. Your AI tools are showing patterns everywhere. But here's the uncomfortable truth: if you haven't spoken to a real customer in 30 days, you might be building on completely faulty assumptions.
Everyone's talking about AI for customer insights. Some companies are even using "synthetic users"—AI-powered personas that mimic real people—to replace actual customer conversations. It sounds efficient. It promises scale. But there's one critical thing it can't do, and that's what we're exploring today.
The Promise and Peril of Synthetic Users
Companies are now using large language models to conduct user research without any actual people involved. You can pay for platforms that provide synthetic users matching your target audience—running women, portfolio managers, enterprise buyers, whatever your ideal customer profile looks like.
The concept isn't entirely without merit. These AI personas can identify standard usability challenges, check mobile-first patterns, and spot obvious button placement issues. For basic usability testing, they're genuinely helpful.
But here's where it falls apart.

The Mannequin Problem
Synthetic users resemble people, but they don't feel like people. Imagine you're building a trading platform for portfolio managers who work with specific instruments in particular markets. Sure, an AI platform might have generic information about portfolio managers. But does it understand the day-to-day context of your specific users? The particular business challenges they face? The workflow nuances that make or break adoption?
The answer is almost certainly no.
This matters because human behaviour is complex and context-dependent. Synthetic users flatten it into a single dimension. They give you answers that sound convincing—sometimes startlingly so—even when they're completely wrong.
Where AI Fails Spectacularly
AI can process massive amounts of data and spot patterns humans would miss. That's genuinely powerful. But when it comes to understanding why customers behave the way they do, AI has five critical blind spots:
1. Usage Context
AI sees: "Users spend three minutes on the dashboard daily."
Reality could be: They're confused and clicking randomly, trying to figure out where to go next.
For AI, this looks like engagement. In reality, your customers are frustrated as hell. The quantitative data says one thing. The qualitative truth—the why—tells a completely different story.
2. Real-World Factors
AI sees: "Churn increased 23% this quarter."
Reality could be: Your biggest competitor just launched a free tier, or customers are price shopping across the market.
AI can tell you what's happening. It cannot explain the competitive dynamics, market shifts, or external pressures affecting your business. Unless its training data is very recent (and even then, there's always a lag), these insights will be incomplete.

3. Emotional Triggers
AI sees: "Support tickets mention 'frustrated' 47% more."
Reality could be: It's not about the bug itself. It's about customers feeling unheard when they report it three times with no response.
This distinction changes everything. Fix the bug? That's one solution. Fix the feeling of being ignored? That's an entirely different (and arguably more important) strategic decision. AI struggles to understand these nuanced emotional contexts unless someone explicitly writes "I've asked you three times now to fix this" in the ticket.
4. Workflow Integration
AI sees: "Feature adoption is very low."
Reality could be: The feature works brilliantly, but it doesn't fit into your customers' existing workflows.
This is the difference between abandoning a feature entirely versus repositioning it within the customer journey. One decision wastes development time. The other creates value. AI alone can't distinguish between the two.
5. Inaccuracy and Hallucination
Here's the uncomfortable stat: 28-50% of AI outputs contain hallucinations, fabricated information presented as fact, often in ways that sound completely convincing.
When ChatGPT first emerged, Building Great Tech tested it on research analysis. We took data we'd already analyzed manually and spent time writing prompts to replicate our results. The outcome? Around 20-30% made-up information, even with careful prompting.
Today, we still repeat instructions three or four times in the same prompt: "Don't make up information. Run the analysis again. Verify your findings." Because AI fills gaps whether or not it has accurate information to fill them with.
The Human Conversation AI Can Never Replace
Consider this example from an NN Group study. An interviewer asked both AI and a human the same question: "Have you finished all the lessons in the course?"
AI's response: "Yes, I have finished all of them and found them very useful, especially the parts about choosing the right clients."
Straightforward. Positive. Actionable? Hardly.
Human's response: "Well, I started okay with three lessons, but later on I got distracted. I had holidays booked, and then when I was back, I forgot what I had already done. But for some reason, I had no access anymore to those three lessons, so eventually I had to just carry on with other lessons without the basics of the first three sessions. Overall, I felt it was not great."
Look at the richness of that response. The human brought in real-world context: holidays, forgetting progress, access issues, incomplete learning journeys, overall frustration.
AI will never say it went on holiday. It won't forget what it learned last week. Those contextual factors—the messy, human reality of how people actually use products—are invisible to AI.
But they're essential for building products customers love.

Three Customer Insight Mistakes Killing Your Growth
When companies over-rely on AI analysis, they make three predictable mistakes:
Mistake 1: Optimizing for Patterns Without Validation
What companies do: AI identifies a pattern, so they immediately build for it.
Better approach: Talk to customers. Understand if this pattern actually works for them and aligns with your business direction.
Yes, AI spotted something. That doesn't mean it's the right thing to prioritize. Without human validation, you're essentially building blind—making expensive bets on assumptions that haven't been tested against reality.
Mistake 2: Confusing Correlation with Causation
What companies do: "Users who do X are 40% more likely to upgrade."
Better approach: Understand why that behaviour is happening and whether it actually drives upgrades.
Correlation isn't causation. AI can spot the pattern. Only human investigation reveals whether there's a genuine causal relationship worth building around.
Mistake 3: Neglecting Qualitative Context
What companies do: Rely mostly on quantitative AI analysis.
Better approach: Combine quantitative patterns with primary qualitative customer conversations.
As George Bull puts it: "Data tells you the what. Conversations tell you the why."
The numbers show behaviour. The conversations reveal motivation, frustration, context, and opportunity. You need both.

The Hybrid Approach That Actually Works
Smart companies aren't choosing between AI and human intelligence. They're combining both in a strategic three-step process:
Step 1: Use AI for Scale
AI excels at processing huge amounts of data that human brains can't analyze efficiently. Use it to:
- Identify patterns and anomalies across thousands of data points
- Flag urgent issues automatically
- Track sentiment and trends over time
- Understand what's happening on your platform
This is where AI genuinely shines. It gives you the landscape view—the big picture of customer behaviour at scale.
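To make the "flag urgent issues automatically" idea concrete, here is a minimal sketch of the kind of anomaly flagging such tooling performs under the hood. The metric name, sample data, and threshold are illustrative assumptions, not a prescribed setup:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Flag readings whose z-score exceeds the threshold.

    values: an ordered series of metric readings (e.g. daily signups).
    Returns (index, value) pairs that deviate sharply from the average.
    """
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Illustrative data: a sudden spike on day 7 of otherwise steady signups.
daily_signups = [120, 118, 125, 122, 119, 121, 310]
print(flag_anomalies(daily_signups))  # → [(6, 310)]
```

Note what this sketch can and cannot do: it will surface the spike, but it has no idea whether day 7 was a viral post, a bot attack, or a billing glitch. That "why" is exactly the human layer the next step adds.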

Step 2: Add Human Context
Before making any product decisions based on AI insights, add the crucial layer AI cannot provide: human understanding.
- Talk directly to customers, especially when they're showing concerning behaviour patterns
- Focus on understanding why something is happening
- Investigate the competitive landscape
- Understand customer context—where they're coming from, what they're trying to accomplish
Remember: you can't automate empathy. You need human touch to understand human context.
One team shared how they used AI insights to optimize a signup form. The AI flagged issues; they made changes immediately, and conversion dropped dramatically. Why? They didn't understand what was actually working in the original experience. They reverted the changes, but the lesson was clear: acting on AI insights alone is dangerous.
Step 3: Validate With Real Behaviour
Before going live with any changes:
- Test your findings with actual users
- Watch what customers do, not just what they say
- Measure business impact—profitability and revenue—not just engagement metrics
- Iterate based on real outcomes and genuine user needs
If you're building for users you've never spoken to, you're building blind. It's that simple.

Your 30-Day Reality Check
Here's your challenge for this week. Look at your product insights process and honestly assess how much you rely on AI:
Red flags:
- If 80%+ of your insights come from AI dashboards alone, you're probably missing critical context
- If you haven't spoken to a real customer in 30 days, you're operating on assumptions
- If you built a feature based solely on AI insights, test it now—it might be completely wrong
- If your team trusts dashboards more than direct customer quotes, it's time to rebalance
The winning formula:
Smart companies validate, observe, and listen. They use AI to flag problems, then talk to users and track outcomes before taking action.
AI is powerful—enormously so. It can spot patterns, predict churn, and track sentiment at scale. But it's incomplete. It cannot understand context, emotion, or user intent.
Synthetic users mimic feedback but miss the nuances that differentiate your product from competitors'. Without human validation, you risk building on faulty assumptions that waste development time and miss market opportunities.

The Bottom Line
AI can tell you something is happening. It cannot tell you why—at least not as of today.
The hybrid model wins: AI helps you scale insights, but human judgment turns that data into strategic decisions that drive profit and growth.
Companies winning in product-led growth aren't choosing between machines and people. They're combining both.
AI does a decent job of analyzing data. Building products customers love requires more: conversations, validation, and a genuine understanding of the people you're building for.
If you're using AI alone, you're only seeing half the picture. And in a market where conversion rates and retention metrics determine survival, half the picture isn't enough.


