Three things Quirk's Chicago told us about the future of product testing

Quirk's Chicago just wrapped. Two days, four rooms running simultaneously, a lot of coffee, and enough conversations to keep me thinking for weeks. Rather than walk you through every session (there were a lot), here are the three genuinely useful things for anyone who works in product testing or consumer insights.

Tasneem Dalal
24 Apr, 2026

1. Your data might be less reliable than you think

Okay, starting with the uncomfortable one. The CloudResearch session on AI fraud in survey research said something the industry has been tiptoeing around for a while now: AI agents can pass almost all standard attention checks, and completing a survey via a bot costs almost nothing. Like, cents. The financial incentive for bad actors is significant and growing, and if you're running online panels without actively checking for it, there's a real chance some of your data isn't what you think it is.

Not a reason to panic. But a reason to ask your research partners some pointed questions. The good news is that these agents leave detectable behavioral signatures and the tools to catch them exist. The question is whether your setup is using them.
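For the curious, here's a rough sketch of what "behavioral signatures" can mean in practice. This is purely illustrative, not CloudResearch's actual method; real fraud detection layers in many more signals (device fingerprints, paste events, mouse telemetry), and every threshold and field name below is an assumption I made up for the example.

```python
from statistics import mean, stdev

# Illustrative screening heuristics only; thresholds are assumptions,
# not any vendor's actual detection logic.
MIN_SECONDS_PER_QUESTION = 2.0   # faster than this is rarely human
MAX_STRAIGHTLINE_SHARE = 0.9     # >90% identical grid answers looks automated
MIN_TIMING_STDEV = 0.5           # near-constant per-question timings look scripted

def flag_respondent(timings_sec, grid_answers):
    """Return a list of reasons this respondent looks automated (empty = clean).

    timings_sec  -- seconds spent on each question
    grid_answers -- responses to a rating grid, e.g. [4, 4, 4, 4, 4]
    """
    reasons = []

    # Signature 1: completing far faster than a human plausibly could.
    if mean(timings_sec) < MIN_SECONDS_PER_QUESTION:
        reasons.append("implausibly fast completion")

    # Signature 2: per-question timings that barely vary (humans are noisy).
    if len(timings_sec) > 1 and stdev(timings_sec) < MIN_TIMING_STDEV:
        reasons.append("near-uniform question timings")

    # Signature 3: straightlining, i.e. picking the same answer down a grid.
    if grid_answers:
        top_share = max(grid_answers.count(v) for v in set(grid_answers)) / len(grid_answers)
        if top_share > MAX_STRAIGHTLINE_SHARE:
            reasons.append("straightlined grid answers")

    return reasons

# Example: a respondent racing through at ~1.1s per question while straightlining.
print(flag_respondent([1.1, 1.0, 1.2, 1.1], [5, 5, 5, 5, 5]))
```

Even crude checks like these are a useful litmus test: if your panel provider can't tell you what they do along these lines, that's your pointed question.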

For anyone in product testing, this matters more than it might seem. Decisions about reformulations, new variants, and pricing thresholds all rest on data quality. Bad inputs, bad outputs. It really is that simple.

2. Insights that don't travel don't count

This one came from Maersk and GetWhy. It's the kind of point that sounds obvious once someone says it out loud, but somehow still isn't how most teams operate. The problem isn't generating strong insight. It never was. The problem is making sure it goes somewhere after the readout.

Research gets presented once. A handful of stakeholders are in the room. Then it lives in a deck on a drive nobody checks, and six months later the next team starts from scratch because nobody could find the last study. Intelligence doesn't compound. It just... expires.

This is close to home for us at Product Hub. When product testing runs as a series of disconnected studies, you lose the ability to build on what you already know. A benchmark from last year should be informing your decision today. A test in Germany should be easy to compare with a test in the US. That only happens when the system is designed for it from the start, not bolted on later.

The Maersk session framed it as a storytelling problem. I'd push back slightly and say it's a structural one. Better formats help. But what really changes things is building research infrastructure where knowledge doesn't have an expiry date.

3. The researchers who will thrive are already doing something different

The panel on rethinking the insights function asked a question a lot of people are quietly sitting with right now. As research gets faster, more scalable, and easier to run, what is the insights function for?

Honest answer: "running studies" is becoming a smaller part of the value. The teams making real impact are the ones who can interpret what data means for the business, get that interpretation in front of the right people at the right moment, and build systems that make good research repeatable instead of heroic. That's a very different job description than it was five years ago.

Pepper Miller's session on trust in research made a related point, and she made it with zero hedging. Neutrality, she said, isn't the same as rigor. Researchers who avoid taking a position to protect their credibility often end up producing findings that are technically correct… but strategically useless. The best researchers explain what the data really means; they don't just deliver it and leave you to figure it out yourself.

Both sessions were pointing at the same shift. The value is moving upstream. That requires different skills, different mindsets, and different tools.


The rest of what was in the air

Those were the big three for me, but there was plenty more. Here's the quick round-up.

Chili's and YouGov - one of those case studies that reminds you that brand tracking done well is actually exciting. Three years of brand resurgence, Gen Z going back to a chain restaurant, and a team that connects daily consumer signals to real marketing decisions in real time. Really well executed and genuinely fun to hear about.

Ferrara (Nerds, Trolli, SweetTarts) - they now test advertising ideas before committing serious production money. Simple concept, enormous practical value, and the kind of thing that makes you wonder why more CPG teams don't do it this way.

Synthetic personas came up constantly. Toluna, Knit, T-Mobile, and several others. The conversation has matured from "is this legitimate?" to "here's how we're actively using it in live pipelines." The teams doing it well are clear on one thing: it's a tool for moving faster between the moments that require genuine human insight, not a replacement for those moments. Validation, validation, validation.

One conversation that didn't happen on any official agenda but came up enough times to count: whether AI adoption is quietly hollowing out the learning pathways that develop good junior researchers over time. Nobody had a tidy answer. But it's a question the industry needs to keep asking.

What we brought to Chicago

Simone and I presented "From Projects to Programs: Fixing the Hidden Flaws in Modern Product Testing" on Day 2. It's one of my favorite talks to give because I genuinely believe every word of it!

The argument is simple. Most product testing still works the way it did ten years ago. Studies run in isolation, managed manually, never connected to each other or to what came before. Insights expire. The next project starts from scratch. And most teams have just... accepted that as normal.

What we showed was what it looks like when that changes. Standardized methodologies that still flex for local markets. Multimarket testing that learns from itself. Benchmarks that build over time rather than reset with every new study. Every project makes the next one smarter. That's Product Hub, and the conversations we had afterwards confirmed the value of this approach. People already feel this problem and are looking for a way out of it.

If that resonates, it's probably worth a conversation!

See you in DC!