
No, you should not offload synthesis of your generative design research to ChatGPT.

Generative research uses qualitative methods to learn about people in their context and to help us see the world through their eyes. The research produces a large quantity of data. While we learn a lot during our limited time doing the research, the synthesis that occurs afterward leads to a much deeper understanding of that data and, in turn, develops its implications and value. In a design or UXR practice, the output of that synthesis is an artifact: a readout, a presentation, a distilled summary. But the output is not the point of synthesis. The point of synthesis is for you and your team to make sense of the participants’ world, to generate a deeper, more thoughtful understanding of what people do and why they do it, and to build and refine your own perspectives about people and culture, and about your role (and your company's role) in shaping human experiences.

That deeper, more thoughtful understanding comes from connecting what you heard, saw, and learned to things you already know, perceive, and believe.

Sensemaking is a well-studied and well-acknowledged part of being a human in the built world, and it occurs through externalization, discussion, forced connection-making, and reflection. When you synthesize the data by making diagrams, or by reading each utterance, or by forcing judgment by claiming two things are alike, or by being pressed to rationalize that claim, or by moving similar utterances together, or by discussing and debating the meaning of what someone said, or by recreating an image of your research experience in your head, or by listening to the recordings over and over, or by trying to capture the depth of lots of data in the brevity of an observation, or by identifying the main implications of that observation and writing them as assertive insight statements, you are learning. The learning is the goal.

Take the most generous, positive view of ChatGPT. Pretend it actually is sentient and intelligent and can make that meaning by forming the same sort of connected fabric that would occur in your own mind. If that’s true, and you offload your synthesis to it, then it owns that knowledge, it has integrated it, and it has learned to view the world in a new light. In some perverse way, you've become the dumb utility, and the AI has become richer. You've done the gathering; it got to do the meaning-making. You just get to see its scraps, and all you get to do is integrate those pre-digested bits into your worldview and your PowerPoint presentation to management. Your sensemaking is nerfed in theory and nerfed in practice, to the point where the benefit (at least for you, the human) is entirely lost. You saved a lot of time. You lost nearly all the value.

The output of research is not the point of research. The knowledge produced in your own head, and shared in its richness with your whole team, is the point of research. The arduous synthesis is the job. The report at the end is extra. ChatGPT can make you that report. It can't form new connections in your head for you.

And why would you want it to? That's one of the richest parts of life. And it’s why a job in research is one of the most generous and privileged jobs one can have.

Want to read some more? Try “Looking for a job? Get off LinkedIn.”