Okay, hear me out. I just let an AI bot take a full online survey. On purpose. And honestly? It might be one of the best things I’ve done for survey testing in a while.
We’re all (rightfully) freaking out about bots ruining our data. But instead of swatting them away like flies, what if we harnessed their superpowers for good, not evil?
The Problem with Testing Surveys the Old Way
You know how survey testing usually works? You build the thing, send it to a few coworkers (who half-read it), maybe test it yourself a couple times, and pray nothing breaks when it goes live.
Cool cool cool. Except:
- Humans miss stuff
- Skip logic breaks in ways nobody notices until launch
- And no one wants to take a 15-minute mobile survey more than once (if ever)
Enter: Manus.
Meet the Bot That Actually Wants to Take Your Survey
Manus is an AI agent trained to handle many different kinds of tasks, and you can interact with it WHILE it works. I've used it for plenty of business tasks, but then I was testing a survey we'd programmed, and I thought, why not?
I gave Manus a persona that would get it past the screener and told it to take the survey like a real respondent. It did. No eye rolls. No fake data. Just a focused, obedient little machine that clicks through everything you throw at it.
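For the curious, here's roughly what that setup looks like. This is a minimal sketch of the kind of persona brief you'd paste into an agent's chat; the persona details, screener criteria, and rules are all made up for illustration, not Manus-specific syntax.

```python
# A sketch of the persona brief I hand the agent before it starts clicking.
# Everything here (the persona, the rules) is an illustrative placeholder.

persona = {
    "age": 42,
    "role": "IT decision-maker at a mid-size company",
    "location": "Chicago, IL",
    "recent_behavior": "bought cloud backup software in the last 6 months",
}

brief = f"""
You are test-taking an online survey as a realistic respondent.

Persona:
- Age: {persona["age"]}
- Role: {persona["role"]}
- Location: {persona["location"]}
- Relevant behavior: {persona["recent_behavior"]}

Rules:
1. Answer the screener consistently with this persona.
2. Read every question fully before answering; don't straight-line.
3. Write open-ends in one or two natural sentences.
4. Flag anything broken: dead ends, skip-logic errors, typos.
5. At the end, report your estimated completion time.
"""

print(brief)  # paste into the agent's chat window
```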
I watched it take a full survey from start to finish. It even gave me an estimate of the survey's length, which was fairly accurate (give or take a minute or two). Natively (without any specialized programming), Manus can:
- Stress-test logic flows
- Time the survey as if a human were taking it
- Take on a persona to pass screening criteria
- Fix that one typo that would’ve haunted me forever
I'll admit I haven't asked it to go beyond the basics yet, but I can imagine training it to do a lot more:
- Spot confusing questions
- Predict dropout rates
- Suggest an appropriate incentive amount for respondents
- Identify areas to improve respondent engagement
Not to imitate a late-night infomercial, but wait, there's more...
What If a Bot Could Tell You When Your Survey is Boring?
Let’s be real (yes, I borrowed my daughter's current favorite phrase): most surveys aren’t competing with other surveys. They’re competing with TikTok, dating apps, mobile games, and email doomscrolling. You have to earn attention now.
So I’m thinking about a way to turn bots like Manus into survey sensors: tools that detect boredom before your respondents feel it.
I'm calling it a “Boring Score,” but probably ChatGPT or Manus could come up with a better name.
I'd want it to help me predict likely engagement (and disengagement) with the survey instrument (rough sketch below).
Because if an AI is getting bored, you better believe your humans are too.
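To make that concrete, here's a toy sketch of what a first-pass Boring Score could look like, assuming the bot logs question type, length, and time spent as it goes. Every signal and weight below is a placeholder I made up, not a validated model.

```python
# A toy first pass at a "Boring Score" -- the signals and weights are
# placeholders, not a validated model. Assumes the testing agent can
# log, per question: type, word count, and seconds spent.

from dataclasses import dataclass

@dataclass
class Question:
    qtype: str        # "grid", "open_end", "single_choice", ...
    words: int        # word count of question text + answer options
    seconds: float    # how long the agent spent on it

def boring_score(questions: list[Question]) -> float:
    """Return 0 (engaging) to 100 (snooze-fest). Heuristic only."""
    score = 0.0
    prev_type = None
    for q in questions:
        if q.qtype == "grid":
            score += 8            # grids are classic fatigue drivers
        if q.words > 60:
            score += 5            # walls of text
        if q.qtype == prev_type:
            score += 3            # monotony: same format back-to-back
        if q.seconds > 45:
            score += 4            # the agent itself slowed down here
        prev_type = q.qtype
    # Length penalty: every question past ~20 adds to the score
    score += max(0, len(questions) - 20) * 2
    return min(score, 100.0)

# Example: a short survey with back-to-back grids
survey = [
    Question("grid", 75, 50),
    Question("grid", 80, 62),
    Question("open_end", 20, 30),
]
print(boring_score(survey))  # -> 37.0
```

The interesting part isn't the weights (those are guesses); it's that an agent taking the survey can collect these signals for free, on every single test run.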
Want to See It in Action?
I filmed Manus taking a survey in real time. Watch it here → https://youtu.be/bXexSEXZ0xg
Why This Matters
What if there aren't nearly as many bots taking surveys as we think? What if it's really bored humans? Our crappy data isn't fraud; it's fatigue. And that's why I think we desperately need a new vision for survey engagement. Maybe AI bots aren't the enemy after all; maybe they can lead the way.
I’d love your thoughts:
💬 Would you use bots to test your surveys?
💬 Want a copy of the “Boring Score” rubric we’re building?
💬 Already doing this? Tell me how!