
May 5, 2025 | 7 minute read
Paper summary - Reprioritizing the Relationship Between HCI Research and Practice: Bubble-Up and Trickle-Down Effects, by Colin Gray, Erik Stolterman, and Martin Siegel
What I read
In this article, the authors describe the relationship between practitioners and researchers in how they approach and think about design methods. They identify a disconnect, which they explain is partially shaped by a "bubble-up" and "trickle-down" interaction of experiences from both groups.
The authors begin by describing the ongoing discussion in academic contexts about the gap between research and practice. They indicate that the main contribution of this paper will be the bubble-up effect, which "describes the efforts of the practice community—and ideally the academic community as well—to refine and abstract… methods, tools, or concepts into theory", while the trickle-down effect is the "way adaptation of research and theory is commonly seen in design practice." They note that this relationship is neither strong nor positive, and that it's generally accepted that the way design is studied in academia is not how it is conducted in practice.
Next, the authors explain that their research focused on the academic/practitioner division as it related to design methods, and specifically on the methods of affinity diagramming and affordances. They briefly note that they “are well aware of the fact that these two methods are not necessarily methods in any strict sense.” They indicate that the selection of these methods was based on “their difference in apparent source.”
The authors briefly explain what affinity diagramming and affordances actually are, and the historical origins of each method's name, approach, and introduction into HCI. Then, they explain their data collection method: an interview study with practitioners. They interviewed 13 practitioners from 12 companies, the majority of whom had worked for 3–10 years. Eight had an HCI background, three an engineering background, and one was experienced in visual design. They then extracted a subset of five participants, whose interviews were used to drive the discussion in the paper.
The findings presented in the article describe where participants learned a method, how they conceive of the relationship between research and practice, how they use the methods in practice, and whether they know about the history of the method.
Participants learned methods through encounters in their work. Some participants felt the methods were "common sense," indicating that they felt they didn't learn them from anyone. Participants indicated that practitioners attend different conferences than academics, and that they don't use methods in the "proper" way because clients don't want to pay for the amount of time the methods might take. Methods that are used are selected because they are easy to explain and show, and because they feel right at the right time (or are "opportunistic"). Methods are also adapted to the unique context of use.
The authors spend the most time discussing the finding related to historical context, and found that interviewees lacked understanding of how the methods came to be. Again, some participants felt the methods were common sense. Others indicated that they cared about when to use a method rather than when it was created. Virtually none of the participants demonstrated an understanding of the historical context of either method.
Next, the authors describe the impact and relevance of their findings. They indicate three ideas were evident in the results: appropriation, abduction, and disseminating agents.
Appropriation describes how academic ideas are used by practitioners, and is often "radical" in nature: the methods are reshaped and appropriated in an "abductive sense." Disseminating agents are the channels through which academic research finds its way into practice. One of these agents is practitioners informing their own community of other practitioners, through norms. People and the internet are the main disseminating agents, and coworkers refine people's feelings about and knowledge of methods.
The authors arrive at their main finding and initial theoretical construct of trickle-down and bubble-up. They confirm that there is a "substantive disconnect" between academia and practice. Trickle-down provokes emotions in academia related to work being misunderstood and poorly co-opted, as methods lose fidelity and purity. Design judgment is the process by which methods are used or manipulated, and a master designer is "highly synthetic" in their use of many different methods; this design judgment is missing from the academia/practitioner models. The bubbling up the authors describe is the potential for academia to embrace practitioners' synthetic use of methods, which can be researched in order to further inform theory about practice. The summary of all this bubbling and trickling is a "cycle-around": the idea that one can inform the other, and the other can then inform the first.
The authors conclude by first recognizing that there is room for further study, driven primarily by a closer look at everyday practice. This closer look will add detail about when methods "degrade" from their pure form, and why they are adapted and evolved by practitioners. Looking at the methods as they are actually used will also change the way methods are taught, as pedagogy is intertwined with academic theory. Finally, the authors describe that the most important contribution of the research is that the relationship between research and practice "is not only a question of practice not using research… practitioners use numerous design methods in an opportunistic manner."
What I learned and what I think
The findings of this study seem very obvious to me, but that's probably because I've been a practitioner for 25 years, and that is also probably the point: if the paper was accepted at DIS, it's because the reviewers felt it was novel, and that the academic community really doesn't understand what I would consider pretty basic. I'm not sure how we arrived at a place where such a simple observation is new to such a broad group of very smart people, but I see it in the program at UCI that I'm teaching in right now: the grad students are learning methods in a very academic sense. No one in industry, for example, is doing a lit review, but the students are learning to do one as part of their coursework.
I'm surprised that the number of participants is so low, and that the generalizations made are so sweeping in their assertions. I believe the generalizations, but I didn't realize a reviewer would consider this rigorous with such a small sample. I'm also curious about how a sample of 13 becomes a sample of just five for the paper, and why it was necessary to remove such a large number of participants (and, therefore, richness of content). Maybe it would have made the paper too confusing, jumping back and forth between so many names.
I don't get the method selection. The authors gesture to the fact that, while an affinity diagram is a method, affordances are ideas or attributes, but then treat affordances as a method anyway. Affinity diagramming ("moving notes into groups") is definitely a widespread activity (although we could split hairs on what a method actually entails in order to "count," and whether this one is rich enough to be useful here). But there are definitely methods in widespread usage that would have been more interesting and informative to examine, although they wouldn't necessarily have changed the outcome. I hate personas, but that's perhaps the most used method I can think of across design disciplines.
I'm unclear about the trickle-down bubble-up thinking; is it aspirational (we should aim to have more things trickle and bubble), indicative of a problem (trickling and bubbling are passive), or both (trickling is slow and bad, bubbling-up is good because it indicates an opportunity for academia to study more real data)? I think it's showing the combination of a problem (the trickle) and an opportunity (study the bubble), but I'm not sure.
I found the findings related to disseminating agents to be accurate, but a little thin; the main finding is that people are the agents, as if the knowledge is passed along in a casual manner. That is definitely true, often through informal mentorship in the context of a real project. But how does that happen? And what about conferences, bootcamps, YouTube videos, "influencer" articles, and all of the other noise that people are sponging up? This is from 2014, and all of those were in-vogue approaches to knowledge sharing then, just as they are now.
Stylistically, I really enjoyed reading this. It uses plain language, and while it gestures to existing literature and cites 30 references, the content itself is given the most space and focus. It sort of practices what it preaches: that practitioners will find the work intimidating and devoid of immediacy if it's grounded in some of the traditions of academia (the "purity" of it). And it's nice to know that this simplicity in writing and structure is acceptable for a major conference: the paper wasn't rejected because it didn't cite enough theory, or because it didn't spend the majority of its pages substantiating the work (which would have minimized the contribution of the study itself).
Download Reprioritizing the Relationship Between HCI Research and Practice: Bubble-Up and Trickle-Down Effects, by Colin Gray, Erik Stolterman, and Martin Siegel, here. If you are the author or publisher and don't want your paper shared, please contact me and I will remove it.