Paper summary - Null effects of boot camps and short-format training for PhD students in life sciences, by David F. Feldon, et al

May 17, 2025 | 7 minute read


What I read

In this article, the authors compare PhD students in the life sciences who participate in bootcamp programs with those who do not. They conclude that participating in these programs provides students with no measurable benefits.

First, the authors describe the prevalence of bootcamp programs and intensives (ranging from 2 days to 2 weeks) for PhD students. These are used to help incoming students quickly learn the fundamentals of conducting research and to help them become more familiar with the scholarly community. The authors indicate that these programs have broad proponents, and illustrate how highly they are valued by citing the amount of money the NSF and NIH have provided for studying and supporting them ($27.8 million in 2016). They show that, while use of these programs has increased, there has been little research into their efficacy. A number of studies have explored the value of the programs as perceived by students, but not the actual outcomes, and this perceived value is typically inaccurate. The authors then cite several existing papers that have covered the same topic, but in less depth.

Next, the authors describe their research method, which was both broad and deep. They included 294 PhD students in the life sciences, split between those who participated in bootcamps (48 students) and those who did not (the remainder). They gathered survey data and graded student work at various points over two years.

Graded writing samples were the most objective measure of success and growth; these samples were gathered and graded before students entered the program, after year one, and after year two. Additionally, the authors tracked whether the students had published articles during the course of the study.

They also gathered data related to the “socialization” of the students. This data was gathered using content drawn from five established survey instruments.

They controlled for gender, underrepresented racial/ethnic minority status, international status, and undergraduate experience.

The authors then describe the results of the study. They summarize their findings, and state the main, overarching finding as follows:

Based on the results from first-year and second-year cross-sections, as well as gains over the course of the first and second years, we conclude that, despite prior studies reporting high levels of student satisfaction with boot camps and other short-format training (3, 5), participation in these activities by individuals in our sample is not associated with any quantifiable advantages in research skill development, scholarly productivity, or socialization in comparison to students who did not participate.

The authors discuss their findings, focusing on reconciling how students feel about bootcamp involvement with how they actually perform as a result of it. A first hypothesis is that students “conflate the intensity of the experience with its effectiveness.” This aligns with other studies that gathered only self-reported perceptions of results. A second hypothesis is that the way a bootcamp is structured leaves no room for students to improve in one skill before it becomes the prerequisite for learning the next. They also point out that, while faculty have reported that the bootcamps are valuable, most of that value lies in reducing teaching demands and providing time to interact with other faculty.

Finally (prior to describing their methodology in depth), the authors offer their views on the implications of the findings: given the importance of these results, further study should be performed. They conclude that “the current findings suggest that a more critical and methodologically diverse approach should be taken to determine the extent to which boot camps and other short-format instructional activities can contribute to vital training goals.”

Next, the authors describe their methodology in depth. They describe the data collection methods, which included web-based surveys and the collection of writing samples over two years. The surveys explored personal demographic data, self-efficacy, goal commitment, institutional commitment, scholarly socialization, mentorship, the academic and social climate, access to research infrastructure, and publication success. They also explain how they objectively measured the research skills compared across groups: expert raters followed a rubric and evaluated the writing samples across various research skills (related to the creation, distribution, and analysis of research studies). Finally, they describe the statistical analysis of the data itself.

What I learned and what I think

This study corresponds with what I think about the nature of design bootcamps, but there are key differences that make it inappropriate to map the findings directly from one to the other. First, these were PhD students. Most designers aren’t, and those attending bootcamps really aren’t. There’s a generally accepted view that design bootcamps successfully provide training for those who would find traditional academic programs out of reach (due to the investment of both time and money). Next, these bootcamps were on the way to more training, while design bootcamps are the only training. I don’t know how important this is, but it’s a material difference. Additionally, the subject matter is, obviously, entirely different. There may be a real difference in skill acquisition between design skills and academic research skills. It’s probably much easier to learn to make a dumb persona than to do a proper statistical analysis of data (although maybe that’s not fair to dumb personas). And there’s likely a real difference between the type of people who are interested in and accepted to do a PhD and the type of people who are interested in and accepted to do a bootcamp, which almost always takes anyone who applies; I can’t actually think of any that have an application process, as compared to a purchasing process.

But the broad guesses behind the conclusions seem very much transferable. One of those conclusions is that the intensity of a learning experience leads people to feel that they learned a lot. In my experience of actually teaching this form of compressed bootcamp education, this is completely true. At the end of a week, the idea of “surviving the week” seems more important than how much someone feels they learned, and I have seen the first overwhelm the second; I’ve always viewed the purpose of our bootcamps as providing a feel-good experience and a survey of content, and have observed little real learning emerging from the programs as I watch the participants during the experience. The other main conclusion is that real learning requires a this-before-that set of building blocks, with appropriate practice and study between them. This is true for design, where there are fundamentals (drawing, 2D design, 3D design, 4D design) that come before advanced skills. The fundamentals are skipped entirely in most of the bootcamps I’m aware of and have run. (I found the point about faculty pretty funny: that faculty think bootcamps are valuable because they have to teach less.)

I skipped writing up the details of the statistical analysis above, because it’s not really relevant to my thoughts about the paper and its findings, and probably isn’t something I’ll need to actually do in most of my work. But this is one to come back to if I ever need to do a Monte Carlo analysis.
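For my own future reference, here is a minimal sketch of what a Monte Carlo approach can look like in Python: a permutation-style resampling test for a difference in mean rubric scores between two groups. This is not the paper’s actual analysis; the scores here are made up for illustration, and only the 48-versus-246 group split mirrors the sample described above.

```python
# Minimal Monte Carlo sketch (illustrative only, not the paper's method):
# shuffle group labels many times to estimate how often a difference in
# mean rubric scores at least as large as the observed one arises by chance.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical rubric scores for bootcamp vs. non-bootcamp students.
bootcamp = rng.normal(loc=3.1, scale=0.8, size=48)
no_bootcamp = rng.normal(loc=3.0, scale=0.8, size=246)

observed_diff = bootcamp.mean() - no_bootcamp.mean()

pooled = np.concatenate([bootcamp, no_bootcamp])
n_iter = 10_000
count = 0
for _ in range(n_iter):
    rng.shuffle(pooled)  # randomly reassign scores to the two groups
    diff = pooled[:48].mean() - pooled[48:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

print(f"observed difference: {observed_diff:.3f}")
print(f"Monte Carlo p-value: {count / n_iter:.3f}")
```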

I want more of this type of study, but I don’t think there are very many papers that focus on the efficacy of bootcamps. I will keep searching. I may also contact the authors to see if they are aware of any future work on this, and will definitely follow their future work, too.

Want to read some more? Try Paper summary - Design Systems for Conversational UX, by Robert Moore, et al.