Paper summary - Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher, by Virginia Braun and Victoria Clarke

May 12, 2025 | 6 minute read


What I read

In this article, the authors describe their method of thematic analysis (TA) in more detail, focusing in particular on how other authors have been applying the method in ways that were not originally intended.

First, the authors describe their analysis of how the TA method has been used, examining 20 papers that cited their original work and claimed to follow the method in detail. These papers were primarily qualitative studies. The authors identified three main problems with how the papers applied TA. First, the papers tended to “swim unknowingly in the waters of positivism.” Next, they used top-down theming rather than bottom-up interpretation. Finally, the reviewed papers’ authors failed to “own” their perspectives.

The authors point out that many of the papers describe TA as a single method, when it is really a family of methods. The members of the family share traits, such as coding, meaning-making, switching between inductive and deductive reasoning, and a recognition of the methods’ flexibility. The “small q/Big Q” distinction is one way of exploring the differences within the family. Small q is a positivist approach; papers that take it focus on reliability, repeatability, and an almost scientific effort to remain objective. Big Q is interpretative and “reflexive”: it recognizes the role of the researcher and treats their perspective as relevant. Subjectivity is something to be valued, not avoided. This way of thinking rejects “the notion that coding can ever be accurate.”

A challenge emerges when researchers use both styles in a “mash-up”: acknowledging the perspective of the researcher, but then trying to keep it in check through forms of validation, in an attempt to ensure that the subjective interpretation was actually “correct.” As the authors put it, “Researchers cannot coherently be both a descriptive scientist and an interpretative artist.”

One of the biggest problems the authors noted in their examination of the papers was the use of positivist, top-down theming: using pre-defined categories and mapping data into them, rather than viewing the data, and the meaning-making that occurs on top of it, as the basis for themes that act as containers for stories. These pre-defined categories are typically simple and noun- or verb-based, producing a summary rather than an examination. The examination takes the form of narrative and story building, which is more valuable for interrogating the research data itself. The authors write that topic matching “makes no conceptual sense in reflexive TA” because the method is intended for rich storytelling, not summarization.

The authors also explain that researchers often fail to “own” their own perspective, and state overtly that this perspective shapes and changes the data. The researcher takes an “active” role in the creation of themes; themes are generated, not found. The approach is not theoretically neutral, and its assumptions should be stated explicitly in a paper. This calls into question positivist practices that have become common, such as member- or participant-checking and validation, since those assume there is something to be validated.

Finally, the authors provide a set of “snappy” heuristics for researchers. These include:

Thinking critically about small q and Big Q, and making a selection purposefully

Identifying a set of research values and ensuring that the method selected aligns with those values; avoiding the language of bias, and instead using, and acknowledging the value of, personal reflexivity

Understanding the difference between top-down theming and bottom-up interpretation and storytelling

Making the themes and their structure overt and explicitly stated in the written discussion

The authors indicate that they’ve written a great deal more about the topic of reflexive TA, and provide links and references to those materials.

What I learned and what I think

I love this, because it’s exactly the method, approach, and philosophical style of design research and synthesis in practice, at least when that research is done rigorously and the output is valuable. The primary contribution I see is the distinction between top-down categories and bottom-up storytelling, which is what we’ve always called our “red truck” matching problem: describing things as red trucks, versus exploring why the color and the object were mentioned, and what the truck comment means. At Narrative, and in our training, we’ve always discussed how the researcher’s perspective matters and “counts,” and that’s exactly what this paper is saying. Not only that, it calls out the same tendency toward a sort of pseudo-objectivity that has driven me absolutely crazy when it comes from our clients, who feel they need some guarantee of bias-free analysis. This happens in synthesis, and all the way back to participant selection, as if we could control for the subjective bias that occurs later.

I think the value of this, and the reason I’ve always described this as a better way (the only way?) of managing research synthesis, is that the whole point of doing research at all is to shape an empathetic understanding of the world, and the world isn’t objective. Ultimately, in design, my purpose is to make something new, and knowledge generation is the creation of something new, too. Science is about the natural world; the rest of the world has been designed, is artificial, and as the paper says, doesn’t hold answers that need to be found. It holds stories that need to be explored.

It’s interesting to me to see that the same problems I encounter in industry are present here, and I think it’s probably because research papers are held to the same objectivity-centered scrutiny as design work, and that scrutiny was created in a world that values logic and objective reasoning rather than exploration, storytelling, and abductive thinking. The way I’ve always mitigated this in my research output is to be charismatic and convincing in person, which isn’t much of a mitigation strategy when my deliverable is a written paper that lives on without me there to convince people it’s worthwhile.

Additionally, my experience with peer-reviewed work is that output without reference to an accepted, objective format will be rated as poor process and poor method. It’s slightly disappointing that this conversation is the same one I was having at CHI in ~2007, primarily led by Bill Buxton (and present in Jodi’s work), which suggests that while the method has advanced, in that it has been published and is respected in research circles, acceptance of the method in paper review is still broken.

All of this means that I need to develop written language that’s as convincing as I am in person. I don’t think that’s impossible. It’s what the authors are talking about in “owning” bias, and in finding intellectual and practical ways to show how that bias is valuable, not destructive.

This paper was a recommendation from Katie, specifically in response to my first proposal for research, which I’ll share here when it’s more developed. There is probably another paper, likely not publishable, in exploring these similarities: treating synthesis and interpretation as a shared element “in between” research and output, with the output itself being one of the places where academia and industry begin to differ.

Download Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher, by Virginia Braun and Victoria Clarke, here. If you are the author or publisher and don't want your paper shared, please contact me and I will remove it.

Want to read some more? Try Paper summary - Reprioritizing the Relationship Between HCI Research and Practice: Bubble-Up and Trickle-Down Effects, by Colin Gray, Erik Stolterman, and Martin Siegel.