Artificial Intelligence, the Knowledge Problem, and Why Universities Will Become More Important, Not Less

Every technological breakthrough produces two kinds of reactions. The first is excitement—often exaggerated. The second is fear—almost always misplaced. Artificial intelligence is no exception. As large language models and memory-augmented systems advance, a familiar claim resurfaces: that university teaching and research will soon be automated away, and that professors will become obsolete. This argument misunderstands both artificial intelligence and universities. More importantly, it misunderstands knowledge itself.

From the perspective of the Austrian School—particularly Friedrich Hayek’s theory of dispersed knowledge and Jesús Huerta de Soto’s analysis of computers and economic calculation—the recent evolution of AI does not threaten the university. It sharpens its mission. AI increases the quantity, speed, and complexity of information. Universities exist to judge, discipline, and integrate that information into coherent knowledge.

Why “Attention Is All You Need” Changed Everything—and Why That Matters for Knowledge

The 2017 paper Attention Is All You Need marked a turning point not because it made machines “think,” but because it changed how information could be processed. By introducing the Transformer architecture and self-attention, it replaced sequential processing with global context awareness. Words no longer had to be remembered step by step; relationships could be evaluated simultaneously.

This mattered because it allowed models to scale. Parallelism made it possible to train on massive corpora, pretrain general linguistic competence, and fine-tune models for specific tasks. In practical terms, this architecture made tools like ChatGPT possible.
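For readers curious about the mechanics, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation introduced in Attention Is All You Need. It is an illustration rather than a full Transformer: the single attention head and the toy dimensions are simplifying assumptions made only for this example.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal scaled dot-product self-attention.

    X  : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : projection matrices of shape (d_model, d_k)
    Returns (seq_len, d_k) context-aware representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # every token scored against every other token at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                         # weighted mixture of all positions, no sequential pass

# Toy example: 4 tokens with 8-dimensional embeddings (dimensions are illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

The key line is the score matrix: every token is compared with every other token in a single matrix multiplication, which is what replaces step-by-step processing and makes training on massive corpora feasible.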

But from a Hayekian perspective, the deeper significance lies elsewhere. Self-attention does not eliminate the knowledge problem—it amplifies it. By enabling models to relate vast quantities of information, Transformers dramatically increase the number of possible interpretations, summaries, and synthetic outputs. They expand the informational environment rather than closing it.

This is precisely the condition Hayek described: more articulated knowledge does not imply more coordinated knowledge.

From Stateless Tools to Adaptive Instruments: The New Memory Turn in AI

Early large language models were powerful but forgetful. Each interaction was largely isolated. Context windows expanded, but memory remained shallow. Recent work—It’s All Connected, Titans: Learning to Memorize at Test Time, and Google’s Titans + MIRAS—attacks this limitation directly.

These contributions do not merely improve performance; they change the ontology of AI systems. Inference is no longer static. Attention weights adapt. Caches act as ephemeral memory. Dedicated memory modules allow selective retention. Retrieval systems integrate short-term context with long-term storage.
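To make the contrast with stateless tools concrete, the following is a deliberately simplified Python sketch, not the actual Titans or MIRAS machinery, of a system that pairs a small short-term context window with a persistent long-term store that it updates during use. The class name, its "importance" flag, and its retrieval rule are hypothetical illustrations of the idea.

```python
from collections import deque

class MemoryAugmentedAssistant:
    """Illustrative sketch: short-term context plus a persistent long-term store,
    updated at inference time. Not an implementation of Titans or MIRAS."""

    def __init__(self, context_size=5):
        self.context = deque(maxlen=context_size)  # ephemeral, cache-like short-term memory
        self.long_term = {}                        # selectively retained items across sessions

    def remember(self, key, value, important=False):
        self.context.append((key, value))          # everything enters the short-term window
        if important:                              # only salient items are retained long term
            self.long_term[key] = value

    def recall(self, key):
        # Retrieval integrates the short-term window with long-term storage
        for k, v in reversed(self.context):
            if k == key:
                return v
        return self.long_term.get(key)

# Usage: state persists across queries instead of being discarded after each answer
assistant = MemoryAugmentedAssistant()
assistant.remember("student_error", "confuses correlation with causation", important=True)
print(assistant.recall("student_error"))
```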

Together, these developments move AI from being a tool for answering questions to an instrument that can accumulate context, track trajectories, and adapt during use. AI begins to resemble a research assistant rather than a search engine.

Yet this shift does not resolve the knowledge problem. It intensifies it.

The Austrian Knowledge Problem Revisited in the Age of AI

Hayek’s core insight was not that information is scarce, but that relevant knowledge is dispersed, contextual, and often tacit. No central authority can aggregate it without distortion. This insight applies as forcefully to AI as it did to socialist planning.

Jesús Huerta de Soto makes this explicit in Socialism, Economic Calculation and Entrepreneurship. In Chapter 3, section 5 (“Why the development of computers makes the impossibility of socialism even more certain”), he argues that advances in computing do not solve the calculation problem. They worsen it.

His reasoning is precise. Entrepreneurial knowledge is tacit and situational. Computers can only process the information humans input. As computing power grows, the volume and richness of available information expand, making coordination more difficult rather than less. Huerta de Soto notes that even the most advanced computers cannot centralize this knowledge—and that by generating more data, they raise the premium on judgment.

This is the crucial point. Computers—and now AI—do not replace human decision-makers. They multiply the informational environment in which decisions must be made.

Why This Improves University Teaching

At its best, teaching has never been about information delivery. If it were, textbooks would have eliminated professors long ago.

Real teaching involves diagnosing misunderstanding, sequencing ideas, adapting explanations, and cultivating intellectual habits. Memory-augmented AI enhances these functions without replacing them. An AI system that remembers how a student consistently misunderstands marginal analysis or confuses correlation with causation can support personalized scaffolding. It can track progress, not just performance.

But it cannot decide what matters. It cannot judge when a misunderstanding is conceptually deep or merely procedural. It cannot bear responsibility for intellectual formation.

As AI becomes better at repetition and recall, bad teaching—mere recitation—becomes more visible. Good teaching becomes more valuable.

Why This Improves University Research

Research is path-dependent. It is shaped by rejected hypotheses, abandoned models, and tacit judgments about what is promising. Memory-enhanced AI can track this history, recall why certain approaches failed, and assist with literature synthesis across time.

In this sense, AI becomes a junior research collaborator: tireless, patient, and encyclopedic. But it is not an author. It cannot decide which assumptions are defensible, which identification strategies are sound, or which questions deserve attention.

Huerta de Soto’s argument applies directly here. AI produces more candidate explanations, more drafts, more conjectures. This does not automatically yield better research. It raises the value of human judgment, because the cost of error rises with informational abundance.

Why Professors Will Not Disappear

The claim that professors will vanish rests on three confusions.

First, it confuses content with formation. Universities do not exist merely to transmit information; they exist to form researchers and thinkers.

Second, it assumes intelligence is computation. But academic value lies in framing problems, defending interpretations, and bearing epistemic responsibility, none of which reduces to computation.

Third, it ignores history. Printing, libraries, calculators, and the internet were all supposed to end universities. Each time, universities adapted and became more specialized.

AI is deeper—but it follows the same pattern.

The Likely University of the AI Era

As AI absorbs memory, retrieval, and repetition, universities might shift toward smaller, more intensive seminars. Methodology, epistemology, and philosophy of science will matter more, not less. Professors will act as mentors, research directors, and gatekeepers of standards.

AI will handle scale. Humans will handle meaning.

This is not the end of the university. It is a return to its original mission—freed from mechanical overload and administrative bloat.

Conclusion: More Information, More Judgment

From the Austrian perspective, the development of AI confirms rather than refutes the knowledge problem. As Huerta de Soto argued, computers make the informational environment more complex. AI does the same—at a higher level.

The result is not the elimination of professors, but the elevation of their role. When information explodes, judgment becomes scarce. Universities exist to institutionalize that judgment.

Artificial intelligence does not replace the university. It reveals why the university is necessary.


References

Behrouz, A., et al. (2024). Titans: Learning to memorize at test time. arXiv preprint arXiv:2501.00663. https://arxiv.org/abs/2501.00663

Behrouz, A., et al. (2025). It’s all connected: A journey through test-time memorization, attentional bias, retention, and online optimization. arXiv preprint arXiv:2504.13173. https://arxiv.org/abs/2504.13173

Google Research. (2024). Titans + MIRAS: Helping AI have long-term memory. https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/

Hayek, F. A. (1945). The use of knowledge in society. American Economic Review, 35(4), 519–530.

Hayek, F. A. (1978). New studies in philosophy, politics, economics and the history of ideas. University of Chicago Press.

Huerta de Soto, J. (2010). Socialism, economic calculation and entrepreneurship (2nd ed.). Edward Elgar Publishing. See especially Chapter 3, Section 5: “Why the development of computers makes the impossibility of socialism even more certain.”

Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://arxiv.org/abs/1706.03762


Copyright Notice: This article is the intellectual property of its author. If you wish to reproduce or share it, you must clearly indicate the original source and provide proper attribution. Unauthorized copying or distribution without acknowledgment is strictly prohibited.


How to Cite this Article (APA 7th edition)

Wang, H. H. (2025, December 17). Artificial intelligence, the knowledge problem, and why universities will become more important, not less [Blog post]. William Hongsong Wang. https://williamhongsongwang.com/2025/12/17/artificial-intelligence-the-knowledge-problem-and-why-universities-will-become-more-important-not-less/