Somewhere in the last decade, 'data-driven' became the talisman that every product person put on their LinkedIn profile, nestled between 'customer-obsessed' and 'results-oriented'. It sounds great. It sounds rigorous. It sounds like you make decisions based on evidence rather than guesswork, which - in fairness - is a perfectly sensible ambition. The problem is that for a lot of product teams, 'data-driven' has quietly become a way to abdicate the responsibility of actually making a decision. Instead of using data to inform their judgement, they've outsourced their judgement to data entirely, and the results are often worse than if they'd just trusted their gut in the first place. Data is a tool. An incredibly useful one. But it's a tool nonetheless - not a replacement for thinking.
Let's start with the most obvious issue, which is also the one most commonly ignored: you never have 100% of the data you need to make a perfect decision. You might have 40%, or 60% if you're lucky, but there will always be gaps - things you can't measure, things you haven't thought to measure, and things that are actively misleading.
And the data you do have? It's probably not as reliable as you think. Analytics tools have bugs, tracking gets implemented inconsistently, and users behave differently depending on which browser they're using or whether they're on a dodgy mobile connection. I've been on teams that made major product decisions based on funnel data that turned out to be skewed by a tracking script that wasn't firing in the right order. Nobody questioned it, because the numbers looked plausible and confirmed what they already suspected.
Which brings us to the second problem. Data can tell you what is happening, but it very rarely tells you why. You can see from your analytics that 60% of users are dropping out at step three of your onboarding journey. That's useful information. But it doesn't tell you whether they're confused, bored, distracted or just don't need the feature you're pushing them to configure. The what without the why is only half the picture, and making decisions based on half a picture is how you end up optimising the wrong thing.
Finally, getting good data takes time. I've lost count of the number of times my team has been trying to decide which direction to go in, and wished we had more data to base the decision on. Sometimes we've even added new tracking and waited weeks for a strong enough signal. But while you're waiting for perfect data, the world moves on. I've watched competitors take market share because they moved quickly on an acceptable decision whilst we were still gathering evidence for the best one. An okay decision made now will almost always beat a perfect decision made too late.
Correlation isn't causation
Even when you do have data, there's a surprisingly common trap that catches out smart people all the time - confusing correlation with causation. There's a brilliant TED talk by Ionica Smeets that opens with the claim that ice cream is one of the leading causes of drowning. And she's got the numbers to prove it - plot ice cream sales against drowning rates and you'll see a clear correlation.
Of course, the real explanation is the weather. When it's hot outside, more people buy ice cream and more people go swimming. It's not the ice cream causing the drownings. The example is deliberately absurd, but the logical error behind it - the assumption that because two things happen in tandem, one must be causing the other - is one of the most common mistakes in data-driven decision-making.
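If you want to feel how easily a lurking variable manufactures a correlation, it only takes a few lines to simulate. Here's a minimal sketch in Python - the numbers and variable names are invented purely for illustration - where temperature drives both ice cream sales and the number of swimmers, and the two end up strongly correlated despite neither causing the other:

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(42)

# Simulate a year of daily data. Temperature is the confounder:
# it drives both outcomes, which never influence each other.
temps = [random.uniform(5, 35) for _ in range(365)]
ice_cream_sales = [t * 10 + random.gauss(0, 20) for t in temps]
swimmers = [t * 4 + random.gauss(0, 10) for t in temps]

corr = statistics.correlation(ice_cream_sales, swimmers)
print(f"correlation(ice cream, swimmers) = {corr:.2f}")  # ~0.9
```

Compare only days with similar temperatures and the relationship all but disappears - which is exactly the kind of check a dashboard will never prompt you to run.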
Kodak is a perfect illustration of how confirmation bias compounds the problem. In the 1990s, their internal data showed that film sales were still enormously profitable and that consumer adoption of digital cameras was slow. Both things were true, but the leadership interpreted those data points as evidence that the transition to digital would be gradual, giving them plenty of time to adapt. The data wasn't wrong - their interpretation was. They saw what they wanted to see, because the alternative meant cannibalising the most profitable part of their business. By the time they acknowledged the reality, Canon, Nikon and everyone else had eaten their lunch.
Data can be spun any way you like. Whether it's someone else skewing information to suit their own agenda, or you unconsciously cherry-picking the numbers that make your preferred option look sensible, the risk is always there. You have to be brutally honest with yourself about whether the data is actually saying what you think it's saying, or whether you're just looking for permission to do what you were going to do anyway.
The offline fallacy
I saw this exact problem first-hand in a previous product leadership role. We sold software to the construction and rail infrastructure supply chain - companies with site teams working in remote locations, often in tunnels and other internet black-spots. Over 90% of our customers told us that our mobile app working fully offline was a key factor in choosing our product over the competition. The sales team heard this constantly. The leadership team's assumption was clear - we needed to invest heavily in making our offline capabilities smarter, quicker and more capable.
Something about it didn't feel right, but the data seemed overwhelming. Ninety percent is a big number, after all. So we went looking for a second opinion. When we pulled up our actual product usage metrics, it turned out that less than 1% of real-world usage occurred with no internet connection at all. The other 99%+ had some form of connectivity - mobile network or wifi.
When we dug further and actually spoke to the people on site (the qualitative bit that dashboards can't give you), we discovered something the data never could have told us: many principal contractors had banned the use of mobile devices in tunnels altogether, for safety reasons - distractions, bright screen glare in dark spaces, and so on.
The argument that work in tunnels meant they needed bulletproof offline capability was nonsense. It was just an assumption that economic buyers, far removed from the realities of the work site, had made - and that our sales team had believed - because it sounded right.
Neither dataset was wrong, but each one told a completely different story. It was only by combining the two, and adding the human context that neither could provide on its own, that we arrived at a decision that actually made sense. We had enough offline capability to satisfy buyers during the sales process, but stopped short of investing in bells and whistles that 99% of site teams would never actually benefit from.
The small business dilemma
There's another dimension to this that rarely gets discussed in the product community, because most product thought-leadership comes from people working at very large companies with millions of users. At that scale, data-driven decision-making works reasonably well. You've got statistically significant sample sizes, you can run A/B tests with confidence, and your funnel metrics are based on enough volume to mean something.
But when you're a scale-up with 150 customers, the rules are completely different. Five customers churning in the same quarter can look like a terrifying trend when it's actually just a coincidence - or one bad account manager. Your data set is too small for most of the techniques that Silicon Valley product influencers evangelise. Running an A/B test with 200 users is like tossing a coin three times and concluding it's biased because you got two heads.
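To put numbers on that, here's a minimal sketch (Python, with made-up figures) that simulates thousands of A/B tests where both variants convert at an identical 10% and each arm gets just 100 users. A sizeable share of runs still shows what looks like a meaningful winner - pure noise:

```python
import random

random.seed(7)

BASE_RATE = 0.10      # both variants convert at exactly 10%
USERS_PER_ARM = 100   # a '200-user test'
RUNS = 10_000

def conversion_rate(n_users: int, rate: float) -> float:
    """Simulate n_users visits and return the observed conversion rate."""
    return sum(random.random() < rate for _ in range(n_users)) / n_users

false_winners = 0
for _ in range(RUNS):
    a = conversion_rate(USERS_PER_ARM, BASE_RATE)
    b = conversion_rate(USERS_PER_ARM, BASE_RATE)
    # Flag runs where one arm looks >= 5 percentage points 'better' -
    # a 50% relative uplift that does not actually exist.
    if abs(a - b) >= 0.05:
        false_winners += 1

print(f"{false_winners / RUNS:.0%} of identical-variant tests "
      f"showed a >=5pp gap")  # typically somewhere around 25-30%
```

With millions of users, a five-point gap arising by chance alone would be vanishingly rare; with 200, it happens in roughly a quarter of tests. That's the honest baseline to hold in your head before declaring a 'winning' variant.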
If you're operating at this scale (and most B2B SaaS companies in the UK are), then you need to be honest about the limitations of your data. Use it, absolutely - but acknowledge that the signal is weak, the sample is small and the margin for error is enormous. Go in with your eyes wide open rather than pretending you're running experiments at Google scale.
The human element
My position on all of this is pretty simple: be data-informed, not data-dependent. Use as much data as you can get your hands on, and make the effort to collect more where you can. But treat it as just one input alongside your instincts, your domain knowledge, your empathy for the customer and - most importantly - your conversations with real people. Quantitative data tells you what is happening. Qualitative data tells you why. You need both, and the people who are best at product decisions are the ones who can hold both in their head at the same time and weigh them against each other.
The current fashion for metrics-driven product management has convinced a lot of people that dashboards and analytics tools can make decisions better than humans can. They can't. They can inform decisions, surface patterns and highlight things you might otherwise miss. But the actual decision - the weighing up of incomplete information, the reading of context, the instinct that something doesn't feel right even when the numbers say it should - that's a human skill. It always has been, and it will be for a long time yet.
Make it your business to talk to your customers regularly. Not because a framework told you to, but because every conversation sharpens the instincts that no dashboard will ever give you. The best product decisions you'll ever make will be informed by data, but they'll be made by you.