A summary of how AI, especially ChatGPT, was used in survey-based research—from problem exploration and assumption validation to synthesizing insights from both qualitative and quantitative data. The focus is on speeding up time-consuming stages like structuring questions, analyzing open-ended responses, and summarizing findings—without removing researcher control. The goal isn’t full automation, but rather making the research process lighter, more iterative, and easier to share across teams.
Surveys are often used to support exploration and validation—to understand user problems, test team assumptions, or gather reactions to a new concept or solution.
But the process can be lengthy: structuring questions, analyzing open-ended responses, and summarizing findings all take time.
In these situations, AI tools (like ChatGPT) can support specific moments: framing the context, drafting the question framework, processing open-text responses, and pulling together a first synthesis.
The point isn’t to automate everything, but to accelerate thinking, filter noise, and help us focus on areas that need human judgment.
The process typically doesn’t begin with a blank prompt, but with material that already exists: the project context, a previous framework or structure, and the concerns the team has raised.
Instead of rewriting everything from scratch, you feed AI that context and structure, then ask it to summarize or develop it further.
Prompt Pattern: "Here’s the project context: [paragraph]. Here’s a previous framework: [structure]. Help refine this for user problem exploration. Feel free to ask what’s missing."
AI is used to speed up the initial mapping—not to replace thinking, but to expand options for discussion.
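For teams that want this step to be repeatable rather than ad hoc, the same prompt pattern can also be sent through the API. A minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; the context and framework variables are stand-ins for your own material, and the original work was likely done directly in the ChatGPT interface rather than in code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

project_context = "..."     # paragraph describing the project (placeholder)
previous_framework = "..."  # earlier question structure or framework (placeholder)

# Same structure as the prompt pattern above, just built programmatically
prompt = (
    f"Here's the project context: {project_context}\n"
    f"Here's a previous framework: {previous_framework}\n"
    "Help refine this for user problem exploration. "
    "Feel free to ask what's missing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```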
Once concerns are gathered and context is set, you can ask AI to help create the question framework.
This is not about generating a final survey, but about getting a draft structure and candidate questions to react to.
At this stage, AI outputs usually still need curation: you refine the tone and content yourself, because AI lacks sensitivity to the specific context.
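One possible prompt pattern for this step, written in the same style as the earlier one (illustrative only, not the exact wording used):

Prompt Pattern: "Here are the concerns we want the survey to address: [list]. Draft a question framework grouped by topic, suggest answer formats, and flag anything that sounds leading or ambiguous."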
Once data is collected, especially open-text responses, don’t just dump everything into AI. First, skim through it yourself to get a feel for the data and note which patterns recur and what stands out.
Then ask AI to group the responses, label recurring themes, and summarize each group.
AI greatly improves efficiency here, but judgment remains yours. If the AI overgeneralizes, it’s your job to override it.
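When the response set is too large to paste in by hand, the same "skim first, then delegate" move can be scripted. A rough sketch, again assuming the OpenAI Python SDK and a placeholder model name; the theme list is something you draft yourself while skimming, not something the model invents:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Themes noted while skimming the raw responses yourself (placeholders)
themes = ["pricing confusion", "onboarding friction", "feature requests"]

def tag_batch(responses: list[str]) -> str:
    """Ask the model to map each open-text response to one of the
    pre-skimmed themes, or 'other' if none fits."""
    numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(responses))
    prompt = (
        "Categorize each survey response below into one of these themes: "
        + ", ".join(themes)
        + ", or 'other'. Answer with one line per response in the form "
        "'<number>: <theme> - <one-line summary>'.\n\n" + numbered
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Example: tag a small batch; in practice you would loop over chunks
print(tag_batch(["Too expensive for what it does", "Setup took me a whole day"]))
```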
At the synthesis stage, use AI for two things: structuring the findings and drafting the write-up.
But always review and sometimes reorder the output. Strong insights often emerge from your intuition after reading many responses.
The AI draft helps get findings in front of the team earlier and gives everyone something concrete to react to.
AI replaces some synthesis tools. For basic quantitative analysis (averages, distributions, anomaly detection, simple comparisons or regressions), AI is reliable enough to cover what would otherwise call for SPSS, R, or Python, especially when the goal isn’t complex computation but actionable synthesis.
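For a sense of what "basic quantitative analysis" means here, this is the kind of descriptive pass that can now be handed to AI directly instead of scripted by hand; a sketch in pandas, with a hypothetical export file and column names:

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export from the survey tool

# Averages and distributions
print(df["satisfaction_score"].mean())
print(df["satisfaction_score"].value_counts(normalize=True).sort_index())

# Simple comparison between segments
print(df.groupby("user_segment")["satisfaction_score"].mean())

# Crude anomaly check: scores more than three standard deviations from the mean
scores = df["satisfaction_score"]
print(df[(scores - scores.mean()).abs() > 3 * scores.std()])
```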
Qualitative analysis becomes more scalable. Hundreds or thousands of open-ended responses can be quickly processed, categorized, and summarized. This is especially valuable with large datasets where early insights are needed for cross-team discussions, not academic reports.
More people can engage with insights. Since AI outputs are easy to digest and share, insights aren’t exclusive to researchers. PMs, ops, and even CS teams can engage directly with findings without waiting for a polished report.
Research becomes more reflective, not just operational. Freed from technical busywork, researchers can focus on what the findings mean for product or policy decisions. This shifts the role from data processor to sensemaker.
Less mental load and clerical work. The hardest parts of research aren’t always complex—they’re repetitive and manual. AI helps filter, clean, or draft content that used to drain our energy. It’s not magic, but it frees up our minds for deeper tasks like cross-analysis or spotting correlations.
Faster path to team discussions. Because there’s an initial output, teams can start discussing even before full analysis is done. For example: AI detects 3 major themes—you can immediately share that with PMs for feedback. This makes the process more iterative, not "report-first."
The researcher’s role evolves—more filtering and guiding. AI can structure, classify, and rephrase, but it’s still up to us to validate what’s useful. In fact, AI allows us to focus more on parts that need context and judgment.
We can start from rougher inputs. With quicker iteration, there’s less need to wait for perfect input. Start with raw feedback, throw it into AI, get a structure, and bounce it back to the team. Not everything has to be polished before it begins.
Not every AI output needs to be used. Sometimes it’s too general or off-target. But even then, it helps by showing what doesn’t make sense—which can clarify your framing.
What matters: we stay in control. AI opens doors, but we steer the process. The question isn’t "what can AI do," but "what can it take off our plate so we can think more clearly."

Lathifah Halim
Researcher