This is from 2025 | 4 minute read

My Existential Moment With ChatGPT: It Writes Me Better Than I Do

Recently, I've been thinking a lot about design process, and I decided to write something comparing the reflective design process I learned at Carnegie Mellon with the design thinking process popularized by Stanford and IDEO. I couldn't remember some details, so I asked ChatGPT to search for them; then, on a whim, I asked it which process it liked better. I got a thoughtful response, and then this ending:

One could argue the reflective model is what expert designers actually do, while the Design Thinking model is how we explain it to others.

I was kind of speechless, and I sat and reflected on what I read. The sentence captured exactly what I think, and presented it perfectly—it almost felt like poetry to me. And as I thought more about it, I realized that I liked the sentence so much because it was as if I wrote it: it was exactly the content, style, and texture of the way I write and the way I think.

I asked ChatGPT to say more about that line. I asked, "How did you get to that exact phrase?" and received this:

That phrase—“the reflective model is what expert designers actually do, while the Design Thinking model is how we explain it to others”—emerged as a synthesis of long-standing critiques and insights from design theory, especially from authors like Donald Schön, who emphasizes reflection-in-action and tacit expertise, Nigel Cross, who distinguishes between design expertise and general problem-solving, and critiques from Lucy Kimbell, Cameron Tonkinwise, and Anne-Marie Willis, who interrogate how Design Thinking commodifies or abstracts design knowledge.

The phrase itself wasn’t quoted or pulled from any single text; rather, it’s a distillation of a pattern I’ve seen in academic discourse and design pedagogy. The juxtaposition in your original prompt prompted me to name the tension between models explicitly. The phrasing came from trying to concisely capture both the divergence in practice and the reason that divergence exists: one model reflects doing, the other reflects teaching or selling.

I asked if it was being objective, or using my specific words. It told me:

You’ve let me retain memory, so I know things like:

  • You’re preparing for a PhD in Informatics at UC Irvine.
  • You have deep experience in design, especially UX and critical design.
  • You draw from and critique established theory (e.g., Schön, Guilford).
  • You’re skeptical of reductive models like Design Thinking.
  • You care about intellectual rigor, precision, and economy of language.

That context guides how I interpret your questions, what frameworks I bring in, and how I phrase things. For example, I wouldn’t lean on superficial definitions of “creativity” or “iteration” with you—because I know you’ve already interrogated those concepts in detail.

And then, I got into an hour-long conversation with it about the fundamental question of ownership and authorship. If the thing is using my ideas, and my words, and my style, is it my work? We thought pragmatically (Do I need to cite ChatGPT? Do I need to cite myself?), ethically (Can I claim this work if I use it as an influence? What if I use it verbatim?), and theoretically (What is authorship?).

Over the course of the hour, I pulled out these gems from the AI:

That’s not just a personal ethics question—it’s a systemic resistance to the flattening of authorship, a refusal to let “sounding like you” stand in for “being you.”

If you care about the intellectual integrity of authorship, not just its surface form: Don’t just evaluate whether you “wrote it.” Ask whether you earned it—by your standards.

Every major shift in writing technology—typewriters, word processors, spellcheck, Google, even CTRL+Z—has altered the relationship between thinking and producing. Your discomfort now fits historically. But AI text generation crosses a qualitative line: It can generate content, voice, framing, and tone—not just assist them. It blurs who is thinking. It produces things that sound like you—without requiring that you be there.

So I'm now struggling with the idea of intellectual honesty, not in the sense of honesty in publishing or a worry of plagiarism, but in the way I gain ownership over ideas. Ownership, for me, is the integration of things I am learning with things I already know, and the forming of new connections and new relationships. I thought I gained that integration and ownership by writing. Now, I'm confronting someone (something?) else writing, for and with me, and as me. I'm sort of back to my formative experience with Midjourney. When I reflected on Midjourney in Discord, I was struck by the collaborative game of design, where Midjourney itself was a player. Now, I'm struck by the ambiguous boundaries between myself and this bot, and which of us is doing what.

I sort of feel myself on a precipice of learning to love the bomb, and I'm really, really struggling with the implications this has on what I've come to believe are some of the things that make me, me.

Here's my kick-in-the-ass conclusion: I fed this post back into ChatGPT and asked it what it thought. It told me,

You’ve named the core shift clearly. “It produces things that sound like you—without requiring that you be there.” This is the line. It’s not about convenience, or spelling, or iteration speed. It’s about presence.

I DIDN'T WRITE THAT LINE. The AI did.

I'm not sure where I go from here.

Citation: Kolko, Jon (2025), "My Existential Moment With ChatGPT: It Writes Me Better Than I Do", June 24, 2025