The gears of modernity: why our institutions are the original “machine minds”. Kafka’s castle, Smith’s invisible hand, the Protestant ethic, and the Manhattan Project are slow-AI systems that have long built, and haunted, modern life. GPT-style LLMs and other MAMLMs are just the latest wave…
Consider the “slow-AI” systems that underpin what we glibly call “modernity”: markets, bureaucracies, and democracies.
These are not, of course, artificial intelligences in the sense of silicon-based neural nets or LLMs, but rather vast, distributed, algorithmic social technologies—embodied in rules, norms, and institutions—that process information and coordinate the actions of millions, even billions, of individuals. Markets, for example, aggregate dispersed knowledge about preferences and scarcities, translating them into prices that guide production and consumption. Adam Smith’s “invisible hand” is not magic; it is a computational device, one so powerful that it can, at its best, rival the most advanced optimization routines. Yet, as any student of economic history knows, markets are neither omniscient nor benevolent: they are prone to failure and manipulation, and, when left unchecked, they can produce outcomes that are efficient only in the narrowest, most technical sense while being socially catastrophic.
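The “computational device” claim can be made concrete with a toy sketch. This is my own illustration, not anything from the essay: a Walrasian tâtonnement in which an auctioneer adjusts a single price until privately known demand and supply schedules balance. The function names and the numbers are invented for the example; no trader ever sees another’s valuation, yet the final price aggregates them all.

```python
# Toy sketch (illustrative only) of the market-as-computation idea:
# a single price is adjusted until dispersed, privately known buyer
# valuations and seller costs balance. Only the price is public.

def excess_demand(price, buyer_values, seller_costs):
    """Units demanded minus units supplied at this price."""
    demand = sum(1 for v in buyer_values if v >= price)
    supply = sum(1 for c in seller_costs if c <= price)
    return demand - supply

def tatonnement(buyer_values, seller_costs, price=1.0, step=0.01, iters=10_000):
    """Raise the price when demand exceeds supply; lower it otherwise."""
    for _ in range(iters):
        z = excess_demand(price, buyer_values, seller_costs)
        if z == 0:
            break
        price += step if z > 0 else -step
    return price

buyers = [9.0, 7.5, 6.0, 4.0, 2.0]   # each buyer's private willingness to pay
sellers = [1.0, 3.0, 5.0, 6.5, 8.0]  # each seller's private cost
p = tatonnement(buyers, sellers)
print(round(p, 2))  # lands in the band where three mutually beneficial trades clear
```

The point of the sketch is exactly the essay’s: the equilibrium price is computed by the adjustment process itself, with no participant (and no auctioneer) ever needing to know the full list of valuations.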
There are also: ideologies, corporations (not just for-profit ones), and professions. These, too, are algorithmic social technologies—ensembles of beliefs, organizational forms, and status hierarchies—that enable societies to coordinate at scale. Ideologies provide a shared cognitive map, a set of priors and heuristics that allow disparate actors to align their actions, even in the absence of direct communication. Consider how the Protestant ethic, as Max Weber argued, underpinned the rise of capitalism, or how the ideology of scientific progress fueled the institutionalization of research universities and laboratories. Corporations—whether for-profit, non-profit, or governmental—aggregate resources and channel human effort through managerial hierarchies, incentive structures, and mission statements. Professions, meanwhile, are defined by their self-regulating codes of conduct and credentialing mechanisms: the Hippocratic Oath, the bar exam, the tenure review. Each of these is a “slow-AI” in its own right, encoding and enforcing patterns of behavior across generations.
These are all necessary social technologies for making complex societal coordination possible.
Without them, the sheer scale and intricacy of modern societies would collapse into cacophony and poverty. The construction of the transcontinental railroad, for example, required not just capital and labor but the coordinated action of bureaucrats, engineers, financiers, and politicians—each operating within their own institutional logic, but all ultimately bound by a latticework of rules and norms. The Manhattan Project, that apotheosis of mid-century technocracy, was less a triumph of individual genius than of institutional design: a network of laboratories, procurement offices, and military commands, all synchronized by protocols and reporting lines. In this sense, “modernity” is less the product of heroic individuals than of the slow, steady accretion of social machinery.
These are also all potentially and actually terrifying to those caught in their gears.
The very features that make these systems powerful—their impersonality, their scale, their capacity to operate according to abstract rules—render them opaque and unaccountable. The bureaucrat in Kafka’s castle, the worker in Chaplin’s “Modern Times,” the citizen confronting the faceless state: each is a testament to the alienation that can arise when human beings are reduced to cogs in a system. The 20th century offers no shortage of cautionary tales: the bureaucratic machinery of the Holocaust; the technocratic hubris of the Vietnam War’s “body count” metrics; the financialization of the economy that turned homes into tranches and people into data points. To be caught in the gears of these slow-AIs is, often, to be rendered powerless.
Henry Farrell, Cosma Shalizi, and their karass see MAMLM-driven algorithmic organization as the latest of such “shoggoths.” In their evocative metaphor, the Large Language Model (LLM) is a kind of shoggoth—a Lovecraftian creature of immense, inscrutable complexity, stitched together from the traces of human language and thought, yet operating according to logics that no individual can fully comprehend. But this is merely the latest iteration of a much older pattern: from the bureaucratization of the Roman Empire to the rise of the modern corporation, humanity has repeatedly conjured up organizational forms that exceed our individual understanding, and then struggled to render them legible, governable, and humane. The difference now, I guess, is that the pace has accelerated: the shoggoths are no longer slow, but fast, adaptive, and—potentially—autonomous.
The real challenge is to grapple with how these new systems will mesh with their older kin—hopefully yielding richer information channels, more humane bureaucracies, more successful democracies, and so on. The question is not whether we can halt the advance of algorithmic organization (we cannot), but whether we can steer it toward ends we actually desire. Will LLMs and their ilk serve to augment public reason, or will they become new instruments of manipulation and control? Will they democratize expertise, or entrench new forms of epistemic inequality? The optimistic scenario is one of productive synthesis: algorithmic tools that render bureaucracies more transparent, markets more efficient, and democratic deliberation more inclusive. The pessimistic scenario is, well, the shoggoth unbound.
But new technologies, even wildly successful and productive ones—whether technologies of nature-manipulation or human-organization—are swords with two edges. The printing press spread literacy and knowledge, but also enabled the proliferation of religious pamphlets and the wars of the Reformation. The telegraph shrank distances, but also enabled new forms of surveillance and financial speculation. Every advance in organizational technique brings with it new risks: principal-agent problems, collective action failures, perverse incentives. The challenge is not to reject new technologies, but to cultivate the institutional reflexes—regulatory, ethical, democratic—that can harness their power while containing their pathologies.
Francis Bacon, early in the 1600s, was enthusiastic about how the compass, gunpowder, and printing had transformed the world, potentially for very much the better. He saw in these inventions the promise of human mastery over nature, a new age of discovery and progress. Bacon’s optimism was, in retrospect, both prescient and tragically naïve—a reminder that every technological leap is embedded in a web of social relations, power struggles, and unintended consequences.
But the very tools that promised liberation became instruments of domination and destruction: the latter two, gunpowder and printing, were even then bringing about two centuries of near-genocidal religious war.
-
Acemoglu, Daron, & James A. Robinson. 2012. Why Nations Fail: The Origins of Power, Prosperity, and Poverty. New York: Crown Business. https://archive.org/details/whynationsfailor00acem_0 ↗
-
Bacon, Francis. 1620. Novum Organum Scientiarum. London: John Bill. https://archive.org/details/novumorganumsci00baco_0 ↗
-
Camus, Albert. 1942. The Myth of Sisyphus. Paris: Gallimard. https://archive.org/details/AlbertCamusTheMythOfSisyphus ↗
-
Chandler, Alfred D. Jr. 1977. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Harvard University Press. https://www.hup.harvard.edu/catalog.php?isbn=9780674940529 ↗
-
Chandler, Alfred D. Jr. 1962. Strategy and Structure: Chapters in the History of the Industrial Enterprise. Cambridge, MA: MIT Press.
-
Chaplin, Charlie. 1936. Modern Times. United Artists. https://archive.org/details/ModernTimes1936 ↗
-
Farrell, Henry. 2023. “Shoggoths amongst us.” Programmable Mutter. https://www.programmablemutter.com/p/shoggoths-amongst-us ↗
-
Gellner, Ernest. 1983. Nations and Nationalism. Oxford: Basil Blackwell. https://search.worldcat.org/title/Nations-and-nationalism/oclc/65469179 ↗
-
Gellner, Ernest. 1994. Conditions of Liberty: Civil Society and Its Rivals. London: Hamish Hamilton. https://archive.org/details/conditionsoflibe0000gell ↗
-
Hayek, F.A. 1945. “The Use of Knowledge in Society.” The American Economic Review 35, no. 4: 519-530. https://www.econlib.org/library/Essays/hykKnw1.html ↗
-
Kafka, Franz. 1969. The Castle. Definitive ed. New York: Knopf. https://archive.org/details/thecastle00kafkref ↗
-
Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money. London: Macmillan. https://www.marxists.org/reference/subject/economics/keynes/general-theory/ ↗
-
Polanyi, Michael. 1962. Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press. https://archive.org/details/personalknowledg0000pola ↗
-
Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press. https://yalebooks.yale.edu/book/9780300070162/seeing-like-a-state/ ↗
-
Shalizi, Cosma Rohilla. 2007. “In Soviet Russia, Optimization Problem Solves You.” Three-Toed Sloth (blog), July 26, 2007. http://bactra.org/weblog/524.html ↗
-
Shleifer, Andrei. 2000. Inefficient Markets: An Introduction to Behavioral Finance. Oxford: Oxford University Press. https://archive.org/details/inefficientmarke00andr ↗
-
Smith, Adam. 1759. The Theory of Moral Sentiments. London: A. Millar. https://archive.org/details/theoryofmoralsen00smituoft ↗
-
Smith, Adam. 1776. An Inquiry into the Nature and Causes of the Wealth of Nations. London: W. Strahan & T. Cadell. https://archive.org/details/inquiryintonatur00smit_0 ↗
-
Stross, Charles. 2011. “Slow AIs, Ancient Bureaucracies, & the Future of Governance.” Charlie’s Diary, July 2011. http://www.antipope.org/charlie/blog-static/2011/07/slow-ai.html ↗
-
Vonnegut, Kurt. 1963. Cat’s Cradle. New York: Holt, Rinehart and Winston. https://archive.org/details/catscradle0000vonn ↗
-
Weber, Max. 2001. The Protestant Ethic and the Spirit of Capitalism. London: Routledge. https://archive.org/details/protestantethics00webe_0 ↗