Paper summary - Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern-based qualitative analytic approaches, by Virginia Braun and Victoria Clarke

May 18, 2025 | 7 minute read


What I read

In this paper, the authors compare thematic analysis to other methods used in qualitative research analysis, and argue that there is never a single “correct” method to use; instead, the choice between thematic analysis and any other method is contextual.

First, the authors describe an observed phenomenon of researchers seeking out the best, or ideal, method. Researchers are often urged to pursue an “off-the-shelf” methodology, such as grounded theory, interpretative phenomenological analysis, or discourse analysis, rather than thematic analysis. The authors disagree that any one of these methods is the best. Instead, they explain that what matters is that the “method used ‘fits’ the project’s purpose, that theoretical assumptions, research questions and methods are in alignment, and that the overall research design is coherent.” They then explain that the focus of the paper will be to compare reflexive thematic analysis to the other methods mentioned, specifically discussing how each method approaches analyzing patterns of meaning across data items.

The first method described is thematic analysis, which they note is not a single approach but a family of approaches. A common aspect of this family is that the data can be analyzed both inductively and deductively, and that the analysis strives to capture both semantic and latent meanings. They explain that, within the family, there are three types of approach.

Coding reliability approaches describe the efforts by researchers to remove bias through coding and cross-coding across multiple evaluators; the goal is agreement in the results.

Reflexive approaches defer theme development until after interpretation has occurred, and recognize that themes are generated by the researcher, not found within the data. Coding is unstructured, and because there is no goal of eliminating bias, there is no need for multiple coders looking for alignment or agreement. This reflexive approach, they explain, has phases of “familiarisation; coding; generating initial themes; reviewing and developing themes; refining, defining and naming themes; and writing up.”

Codebook approaches combine the values of both of the above, but use a codebook as a starting point for interpretation.

The authors describe the goal of reliability as grounded in a positivist set of values, in which replicability of the study findings is important. This orientation, dubbed “small q”, is contrasted with “Big Q”, or “fully qualitative”, research. It is considered fully qualitative because both the techniques used and the values of the researcher are qualitative.
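(An aside: the summary above doesn’t get into specific agreement statistics, but coding reliability work in this “small q” tradition often quantifies inter-coder agreement with a measure such as Cohen’s kappa. Here is a minimal sketch of that calculation, assuming two coders who each assign exactly one code label per excerpt; the coders and labels below are made up for illustration.)

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who each assign one code label per excerpt.

    Returns 1.0 for perfect agreement and 0.0 for chance-level agreement.
    """
    assert len(coder_a) == len(coder_b) and coder_a, "need two equal-length, non-empty codings"
    n = len(coder_a)

    # Observed agreement: fraction of excerpts both coders labelled identically.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(freq_a) | set(freq_b)
    )

    if p_expected == 1.0:  # both coders used a single identical label throughout
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two coders labelling ten interview excerpts.
coder_1 = ["barrier", "support", "support", "barrier", "support",
           "barrier", "barrier", "support", "barrier", "support"]
coder_2 = ["barrier", "support", "barrier", "barrier", "support",
           "barrier", "support", "support", "barrier", "support"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # -> kappa = 0.60
```

A kappa of 1.0 means perfect agreement and 0.0 means agreement no better than chance; coding reliability approaches treat a high kappa as evidence of an accurate, replicable analysis, which is exactly the positivist framing that reflexive approaches set aside.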

In the next four sections, the authors compare thematic analysis to the methods already mentioned. First, they focus on qualitative content analysis. They note that this method shares many qualities with thematic analysis, but differs in its use of codebook objectivity: multiple coders are used in order to find agreement or consistency within the analysis. This part of the method is intended to minimize subjectivity and improve accuracy. Otherwise, the two methods are very similar. A difference is, as the authors describe, “the focus on ‘themes’—what you're aiming to get to—rather than ‘content’—what you're working with.”

Next, the authors compare thematic analysis to interpretative phenomenological analysis, which has been commonly used in counseling and psychotherapy research. This approach focuses on the personal experience of participants, gathered through small-sample interviews. In addition to thematic exploration, it also emphasizes an “idiographic” interest in each participant’s interview, individually. Analysis is performed on each interview in isolation, and the things a participant says, as well as their conversational style, are given importance. Themes are identified within interviews; then, super-ordinate themes are identified across participants. Interpretative phenomenological analysis provides depth of engagement with any individual set of data, at the expense of an understanding of the whole.

The authors then compare thematic analysis to grounded theory. Grounded theory attempts to place (or ground) sociological theories in empirical evidence. Its roots are in positivism, looking for objective qualities of human behavior in the context of the world. Generally speaking, the “flavors” of grounded theory move from coding at an explicit line-by-line level to broader coding and comparative analysis. A difference between this approach and thematic analysis is in the segmentation of data, where small bits are coded at the expense of the larger whole. The defining feature of grounded theory is the parallel collection and analysis of data.

Finally, the authors compare thematic analysis to pattern-based discourse analysis. Discourse analysis is based on the idea that language is a social practice, and focuses on the analysis of conversations, often at a micro level; these analyses then lead to macro patterning of meaning. Other forms of discourse analysis are “discursive” in orientation; in one such approach, meaning is viewed as “socially constituted through linguistic and other signifying practices.”

The authors summarize the work as a comparison of thematic analysis to the other four methods. They recognize that they are not neutral in their analysis, as they are published authors of papers and books promoting thematic analysis. They describe the main goal as urging researchers to stop searching for a “perfect” method.

What I learned and what I think

Reflexive thematic analysis has a very large overlap with the way we’ve always interpreted research data with our clients. Specifically, the overlap is in allowing themes to bubble up from our interpretation of the data, and in recognizing that the researcher is biased and that the bias is valuable. The lens through which the data is viewed matters.

I like the proposed distinction between the meaning being in the data as compared to the meaning being in the researcher, although I don’t agree that meaning in the researcher is right while meaning in the data is wrong. The data is not neutral, in the same way that the researcher is not neutral. It is a reflection of the people from whom it was gathered and observed, and it does hold the meaning of those people’s behaviors or utterances (at least as much as a word can represent behavior, which becomes a pretty silly conversation quickly).

Put another way, we find meaning in data by searching for similarity of ideas; the similarity is proposed by us, through our interpretation of the data, but it emerges from the data in a conversation, too.

Maybe it doesn’t matter.

In fact, I’m not sure that a lot of the naming and arguments around the method matter, at least not in practice and not to me (that’s kind of a broad generalization about a whole and important academic field, so maybe I should further open my mind to the idea that it does matter). Instead, the biggest distinctions I see, the ones that do matter to me, are:

The idea of proving that the output generalizes, through positivist approaches, as compared to celebrating its uniqueness and not even trying to prove some sort of generalizability

The recognition of the researcher’s lens on the data, and the richness it provides

Interpretative frameworks being applied top-down, as compared to bottom-up

The level of detail of coding and analysis, from micro examinations of individual words or lines, to looking at broader utterance chunks.

I feel like in a lot of these conversations, the point and value of the research (looking at how people live, love, want, need, and desire) is lost in the attempt to nuance each individual approach. There’s almost a weird meta-meta quality to this, where a purely phenomenological argument for a method is still an argument, trying to use objective measures to claim one idea is better than another. Since our goal here is not prediction, but biased interpretation, maybe the details of the reflexive method don’t matter, as long as the method is used rigorously.

Want to read some more? Try Paper summary - Null effects of boot camps and short-format training for PhD students in life sciences, by David F. Feldon, et al.