October 22, 2025 | 3 minute read
Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI
by Philip E. Agre
Text Exploration
In this text, the author describes how his perspective on the purpose and capabilities of AI shifted from deterministic problem solving toward supporting the nuances and subtleties of everyday life. This, in turn, changed his view of how computation, as a field, pursues technological advancement, and eventually led to a critique of the obstructiveness of AI researchers, who were unwilling to hear arguments against the philosophical foundations of their work. The author argues that a critical perspective on AI and AI theory is not antagonistic and does not need to be supported by working code to “count”; instead, it should be a fundamental part of realizing new technical ideas.
The author begins by casting computational technology as a “kind of imperialism,” in which technology (and therefore the technology advocate) pushes into all parts of life. This follows from a view of the world as a single, closed technical system. The history of AI reflects the military’s perspective of a closed world, and so AI research and military development have been closely related, with academic researchers funded directly by the military. These researchers were not particularly constrained by the military partnership; they largely rejected behaviorism, and the central value of the emergent field became “technical formalization,” which the author refers to as mechanized intelligence. The only way to gain credibility in the field was to demonstrate mechanized intelligence in a working system; without such proof, arguments against it were dismissed.
The researchers building these mechanized systems created systems to mirror human intentionality, focusing on reasoning, planning, and problem solving. These were viewed as appropriate and mature areas of study, while the more subjective qualities of human life were non-starters (perhaps because systems could not be made to mirror them, so their advocates could never earn the credibility that only a working system conferred). Humans do far more than reason and solve problems, but to AI advocates, “the idea that human beings do not conduct their lives by means of planning, for example, is just short of unintelligible.”
Reflecting on his own experiences, the author describes how his focus, in life and in his work, turned toward the everyday: the concrete things a person actually does, as opposed to theoretical models of humans as logical beings. He studied “hassles,” the small frictions of daily life, and began to capture their nuance through rigorous self-ethnography and note-taking. This prompted his curiosity about the social sciences and phenomenology, and his views moved away from a “mentalist foundation” toward one centered on routine interactions with the world.
The author describes the challenges he faced in introducing these views into the established “intellectual conservatism” of AI research. This, he explains, was the hardest part: “maintaining constructive engagement with researchers whose substantive commitments I found wildly mistaken.” AI researchers were not interested in critical engagement with their work and dismissed it out of hand.
The author concludes that there must be a middle way for technical research to advance, one that balances the field’s “self-reinforcing conceptual schemata” against philosophies from outside it.
