Paper summary - Camp or College, by Quinn Burke and Cinamon Bailey

May 21, 2025 | 7 minute read


What I read

In this article, the authors describe interview-based research with coding bootcamp attendees and computer science undergraduates, focused on the development of soft skills. They conclude that the two groups have different reasons for going back to school, but are inconclusive about the implications of the different learning styles for soft-skill development.

First, the authors describe a gap in the software engineering workforce: graduates lack the skills necessary to take the open jobs. The missing skills include problem-solving, collaboration, and leadership. A LinkedIn study is cited as showing these soft skills are more important than software development skills. Bootcamps are presented as a new solution to this problem: training programs focused on staffing these open roles. The bootcamps attract older learners, who have more industry experience.

The authors state that the goal of the study is to examine three research questions:

(1) Who are these programs attracting? (2) How do these students perceive their respective learning environments and their capacity to help them develop these requisite “soft” skill sets? and (3) What are the programmatic outcomes characteristic of each training ground?

The authors then reiterate and expand on the soft skills that software companies describe as important; these include the aforementioned problem-solving, leadership, and collaboration, as well as networking and adaptability. They describe their earlier research, which found that hiring managers treat a four-year degree as a requirement but consider bootcamp hires to have more hands-on experience.

Next, the authors explain their study approach, which centered on interviews with 50 students split between four-year colleges and coding bootcamps. Participants first completed a pre-survey about their perceptions of themselves as learners. Initial interviews then discussed the admissions process, the skills obtained, and the teaching methods used during their education. Finally, a post-interview survey was sent to the participants. The data from the various instruments was analyzed through a qualitative coding process, with the goal of “capturing data representing the participants’ meaning of events as opposed to interpretations based on the researchers’ perspectives.”

The authors then describe their findings.

First, they describe the profile of students entering the programs. The bootcamp participants were older, and the overwhelming majority were looking to “change careers, advance within their own company, or update their skill set to include computing and/or software development.” The vast majority also indicated they had previous technical knowledge and experience, and the majority already held a bachelor’s degree or higher.

University students had less work experience, though some had been exposed to computer science in high school. A few selected the program because they received financial aid; a few others chose it because they felt the degree carried industry “clout.”

The authors describe their second main area of findings: students’ perceptions of the different program environments. Some bootcamp respondents indicated that bootcamps provided them with a base of skills. The majority described an emphasis on projects and collaboration, as well as engagement with industry. Only a few indicated that the coursework was difficult. By comparison, the university students were not taking their coursework with a specific career outcome in mind. Most described taking introductory classes before moving to advanced ones. Half described learning soft skills.

The authors then describe the career outcomes of the programs. In the post-survey, 12 of the 22 bootcamp students responded, and 7 indicated they had obtained a job. Some described that the job they received wasn’t in software development, and one explained that the hiring company had had poor experiences with previous bootcamp graduates. Bootcamp attendees described entering the bootcamp with pre-existing soft skills; only five said the bootcamp helped them develop those skills.

Seven of the 28 college participants responded to the post-interview survey, with only two indicating they had received jobs.

The authors conclude that “These results indicate that bootcamp students more often perceive themselves as already personally and socially developed—perhaps unsurprising given their greater maturity and levels of experience.”

Finally, the authors summarize their findings and note the shortcomings of the research approach. Citing a Deloitte study, they describe a trend toward soft skills being needed in software engineering roles, while many report that undergraduate CS programs are not preparing students with these skills. Approximately half of the undergraduate students described learning the skills, and approximately half of the bootcamp students described already having them, yet employers are not necessarily seeing those skills demonstrated. The authors conclude that the study relied on self-reporting and a “modest” sample size, which limits the conclusions that can be drawn.

What I learned and what I think

This is the follow-up to my earlier reading of the authors’ work, and my reactions are similar to before, although I’ll walk back some (but not all) of my criticism of the use of funding for this type of study.

My biggest issue is the publishability of this work as it relates to knowledge generation. Simply put, I don’t see what knowledge was actually developed, either generally or relative to the stated research goals.

I consider a sample of fifty meaningful, not for drawing extrapolations, but for making useful observations that inform how these educational programs are showing up. But the methodology is so difficult to understand, and so inconsistent, that the findings end up indeterminate; the narrative is simply too confusing, or too thin.

I think that 50 people completed the pre-survey: 22 who had completed a bootcamp and 28 who had completed an undergraduate program. All 50 then completed an interview. Only 19 completed a post-survey, 12 from the bootcamp group and 7 from the undergraduate group. Twelve then participated in another interview, although the split is unclear. The document also references a focus group, which is not explained at all.

But what is the value, then, of saying something like “In the second set of interviews, all three CS graduates indicated that they were able to learn ‘how to learn’ in college”? Or “five students (23%) indicated that the bootcamp setting very much helped them develop soft skills, such as collaboration, effective problem-solving, time management, determination, and accountability”? Setting aside the confusing segmentation, it is useful to say “We spoke with some participants who…” and then explain their answers in depth. But that 23% did something: is that meaningful? Actionable? Interesting?

To reiterate, the authors stated their research questions as:

(1) Who are these programs attracting? (2) How do these students perceive their respective learning environments and their capacity to help them develop these requisite “soft” skill sets? and (3) What are the programmatic outcomes characteristic of each training ground?

This study provides a view of the first. I don’t know the types of people in any depth (not even one), but I do know some demographic information that generalizes within the sample itself. Having read the paper, I know very little about the second, and nothing at all about the third.

I don’t think it’s realistic or valuable to tie the presence or absence of meaningful knowledge production to the existence of a study. The whole point of research is to actually see what is happening, not to see only the meaningful or important or surprising things that are happening. I don’t fault the paper or study for not actually finding anything interesting or material. But I do fault them for trying to publish this, and I do fault the conferences that accepted this for supporting that publishing.

I’m a grouchy old man on this because there really is no great research on the efficacy of these programs at all. They are so important: they have fundamentally changed a centuries-old education model, provided room for women, people of color, and people in lower socioeconomic situations to afford an education that used to (and, in traditional contexts, still does) exclude them, and shifted the discussion away from “vocational education = bad” to something more real and powerful. But if these people aren’t actually learning anything, or hiring managers don’t want to hire them, or they soon get fired, or they bring soft skills and no hard skills, or they do a bad job once hired, or, or, or, we need to know that and need to change it. This type of work doesn’t do that, and publishing and funding are zero-sum, which means other valuable work didn’t get done as a result.

Want to read some more? Try Book chapter summary - An Introduction to Discourse Analysis, Second edition (chapter 3), by James Paul Gee.