Generative AI doesn’t have expertise; it has “common practices.” But what happens when common practice isn’t best practice?
In my Applied Marketing Research course, we experimented with using Generative AI to assist in creating survey questions. What do you think happened?
The model produced a survey filled with common survey design mistakes. Why? Because AI doesn’t truly know best practices; it reflects what’s already out there. And since the internet is full of poorly constructed surveys, that’s exactly what AI learns and reproduces. A perfect example of why, in many fields, you don’t just need a body of knowledge, you need an expert to guide you. And GenAI is not that expert.
Some of the most common survey design mistakes I see out there:
1. Poorly structured response scales (e.g., incorrect use of Likert scales)
2. Leading or biased questions (“Do you like…?” instead of neutral phrasing)
3. Double-barreled questions (asking about two things at once, e.g., “How would you rate our product’s quality and price?”)
4. Incomplete response options (too few options, failing to include “none,” “other,” etc.)
5. Overlapping response categories (e.g., “18-25” and “25-30” instead of mutually exclusive ranges) and non-mutually exclusive options in a multiple-choice question (forcing respondents to pick just one option when several apply)
6. Vague response options (“sometimes,” “rarely,” “often,” “frequently” – these mean different things to different respondents)
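Mistake #5, at least, can be caught mechanically. As a minimal sketch (the `check_brackets` helper and the bracket values are my own illustration, not something from the course), a few lines of Python can flag overlapping or gapped numeric response categories:

```python
def check_brackets(brackets):
    """Check that integer (low, high) response brackets are mutually
    exclusive and have no gaps; return a list of problems found."""
    problems = []
    ordered = sorted(brackets)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 <= hi1:
            # Two brackets claim the same value, e.g. age 25 in "18-25" and "25-30"
            problems.append(f"overlap: {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            # Some respondents fall between brackets (assumes integer values)
            problems.append(f"gap: nothing covers {hi1 + 1}-{lo2 - 1}")
    return problems

# The flawed scale from the example above: "18-25" and "25-30" share age 25.
print(check_brackets([(18, 25), (25, 30), (31, 40)]))
# → ['overlap: 18-25 and 25-30']

# A mutually exclusive, gap-free alternative:
print(check_brackets([(18, 24), (25, 34), (35, 44)]))
# → []
```

The point isn’t that you need code to write a survey; it’s that “mutually exclusive and exhaustive” is a precise, checkable property, and the models still get it wrong.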
In class, when we asked GenAI to generate survey questions, it didn’t just make some of these mistakes… it made ALL of the above in one survey! Why? Because AI doesn’t distinguish between what’s common and what’s correct.
This is a reminder that GenAI isn’t an expert in best practices; it’s a mirror reflecting how things have already been done. And when the status quo is flawed, AI simply amplifies those flaws.
Of course, this issue extends far beyond survey design. AI systems also reflect, and reinforce, societal biases found in decades of content. But that’s a conversation for another time.
Have you noticed AI making similar mistakes in your field?

