The Reasonability of the Unreasonability of the Cost of Unmoderated User Testing

Covet is starting to move into the stage where we need to validate some of the assumptions we've been making. Why? A few reasons, which the rest of this post works through. The short version: building the wrong thing is the most expensive mistake we can make, and the only way to find out whether we're building the wrong thing is to put it in front of real users.

Okay, so we've acknowledged we want to do this. Our costs so far haven't been huge, so this must be pretty cheap too — right?

Not quite.

Userbrain
Pay-as-you-go at $45 per tester session, with tester incentives included. Ten sessions come to $450 flat, with no subscription needed.
PlaybookUX
Pay-as-you-go at $65 per participant for unmoderated tests, no subscription required. Ten sessions come to $650.
Trymata
The lightest plan that includes 10 panel testers is the Team tier at $399/month.
Maze
The Pro plan at $99/month covers screen-recorded unmoderated testing, but panel participant credits are purchased separately at an additional per-credit cost that isn't publicly listed.
Lyssna ⚠️
Starts at $75/month, but does not support screen recording for unmoderated sessions — a significant drawback for this use case specifically.
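To make the comparison concrete, here's a quick back-of-the-envelope sketch using the list prices above. Treat the numbers as a snapshot rather than gospel: pricing changes often, Maze is excluded because its panel-credit cost isn't public, and Lyssna is excluded because it lacks screen recording for this use case.

```python
# Rough cost of 10 unmoderated sessions, using the list prices above.
# Maze is omitted (panel credits priced separately, not public) and
# Lyssna is omitted (no screen recording for unmoderated sessions).
plans = {
    "Userbrain (pay-as-you-go)": 45 * 10,   # $45 per tester session
    "PlaybookUX (pay-as-you-go)": 65 * 10,  # $65 per participant
    "Trymata (Team tier, 1 month)": 399,    # 10 panel testers included
}

for name, cost in sorted(plans.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost} total, ${cost / 10:.0f} per session")
```

Trymata's monthly fee, amortised over the 10 included testers, edges out even the cheapest pay-as-you-go option.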

So, to validate the concept with 10 users, the cheapest readily available option lands at $399 for a month of Trymata, with Userbrain's subscription-free pay-as-you-go just behind at $450. Which is a lot — but let me make the case for why it isn't.

The Hidden Costs of Building the Wrong Thing

One of the things we're really bad at as people is pricing our own time (unless you're a contractor, in which case — well done, you're the exception... sort of).

Research in behavioural economics suggests that spending money can activate the same neural pathways associated with physical pain — particularly in the insula, the part of the brain that processes both bodily discomfort and the anticipation of loss. It's why parting with cash feels disproportionately bad relative to the rational utility of what you're getting. And it's why so many people instinctively reach for "I'll just do it myself" as the default, even when that trade-off makes no economic sense.

So quite often, people would rather spend a month working on something inefficiently than spend £500 to do it properly. But if your notional annual salary is £60,000, that month of your time has cost you £5,000. Suddenly that extra £500 starts to look a lot more reasonable.
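For anyone who wants to sanity-check that arithmetic, here's the same trade-off as a tiny sketch. The figures are the illustrative ones from the paragraph above, not real numbers from Covet:

```python
# Opportunity cost of DIY vs. paying for tooling.
# Illustrative figures only: a notional £60k salary, one month of effort.
annual_salary = 60_000   # £, notional
months_spent_diy = 1
tool_cost = 500          # £, the "just pay for it" option

diy_cost = annual_salary / 12 * months_spent_diy  # £5,000
print(f"DIY opportunity cost: £{diy_cost:,.0f}")
print(f"Paid-tool cost:       £{tool_cost:,.0f}")
print(f"Premium paid for doing it 'free': £{diy_cost - tool_cost:,.0f}")
```

Doing it "for free" costs £4,500 more than paying for it, before you even count the output gap.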

And here's the thing — one of the big assumptions we're making is that we're moving in the right direction.

The sunk cost fallacy is even more dangerous for founders than for salaried employees, because we have a vested interest in the success of the outcome. When you've already invested months of your life and a non-trivial amount of money into an idea, the psychological pull to keep going — regardless of what the evidence says — is enormous. It feels like perseverance. It can actually be avoidance.

The big distinction between playing with software in the garage and having a successful business that builds software is whether people are willing to use it, and willing to pay for it. Nothing more, nothing less.

By not validating the concept with real users, we risk building software only for ourselves — constructed on top of a model of the world that feels true but may not be. The even more dangerous outcome is that some of our assumptions happen to be correct purely by chance. Because then we put more faith in unfounded instincts, mistake luck for insight, and go further down the rabbit hole with more confidence than is warranted.

Tell Me What You Want, What You Really Really Want

So how do we sort this out? We need to reframe research as a necessary precondition for the work we're doing — not an optional extra at the end.

Don't see it as spending £500 to talk to people. See it as spending £500 to have the knowledge and understanding to stop yourself spending £50,000 on something that will not work.

One of the genuinely exciting things about user research is that it surfaces feedback in directions you didn't anticipate. You go in expecting to validate one thing and come out having learned something entirely different — about a market segment you hadn't considered, a use case you'd dismissed, or a friction point you'd completely normalised. People are really bad at articulating what they want in the abstract, but they're very good at telling you — immediately, viscerally — when something doesn't click or feel right.

That signal is gold. Use it. Chase it.

One more thing worth keeping in mind: just because people will happily use your product when it's put in front of them doesn't necessarily mean they'll pay for it. Willingness to engage and willingness to pay are two very different things, and collapsing that distinction is one of the most common early-stage mistakes. That's a whole different conversation — but it's worth flagging now.

Coveting a Solution

What are we doing to get around the cost problem? Contrary to what I said before, there are cheaper ways of running this kind of research.

I recently received a letter from the Government — from the DWP (Department for Work and Pensions), specifically — explaining that they offer gift cards in exchange for participation in user research. It's a model that's been quietly standard in academic and public sector research for years, and for good reason.

One of the core problems with soliciting feedback from friends and colleagues is that they're compromised by proximity — they want to be nice, they want you to succeed, and that shapes what they tell you. But strangers have no such obligation. They won't give you their time unprompted, because they have no reason to. A small financial incentive resets that equation: it acknowledges that their time has value, removes the social awkwardness of asking for a favour, and tends to produce far more candid responses as a result.

So, I'm giving away 5 Amazon gift cards of £10 each to anyone who emails me at david@covet.digital with the subject line "User Research Participation".

Because I want to build the right things that actually fix your problem — not the things I think fix your problem.

There's a difference, and right now I need your help to understand it.

David A'Hearne

About the author

David A'Hearne

ML Engineer and founder with 13 years building production systems, the last two focused on applied ML. I've shipped NLP scoring pipelines, RAG systems for clinical code retrieval, and LLM-integrated APIs with production MLOps tooling. Currently completing a BSc in Mathematics alongside a Cambridge Data Science Career Accelerator. I'm also building Covet, a hiring platform.


// what we're building

Covet is fixing the exact problem you just read about.

Structured mutual qualification: candidates and companies matched on fit, not keyword density. No spray-and-pray. No ghosting. Rejections that actually tell you why.
