The "Abstraction Fallacy" and the Limits of AI Consciousness

A recent paper by Alexander Lerchner, a senior staff scientist at Google DeepMind, has reignited the debate over the nature of consciousness in Artificial Intelligence. Lerchner argues that no AI or computational system will ever acquire true consciousness, a conclusion that stands in stark contrast to the optimistic visions promoted by some industry leaders. Among them, Demis Hassabis, CEO of DeepMind, has repeatedly spoken of the imminent advent of Artificial General Intelligence (AGI), predicting an impact ten times greater than that of the Industrial Revolution, unfolding at ten times the speed.

Lerchner's paper, titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," highlights a significant divergence between corporate narratives, which are often marketing-driven, and a more rigorous analysis of AI capabilities. Although philosophers and consciousness researchers have advanced similar arguments for decades, the fact that this thesis now emerges from a leading AI company like Google DeepMind lends it new weight.

The Dependence on the "Mapmaker"

The core of Lerchner's argument is the claim that any AI system is intrinsically "mapmaker-dependent": it requires an active, experiencing cognitive agent, a human, to "alphabetize continuous physics into a finite set of meaningful states." In simpler terms, a person must first organize the world before the data becomes useful to the AI system. A striking example is the image-labeling work performed by workers in various parts of the world, which is fundamental to creating training datasets.
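To make the idea concrete, here is a minimal sketch (illustrative only; the category names and thresholds are assumptions, not taken from Lerchner's paper) of how a continuous physical quantity becomes a "meaningful state" only through human-chosen categories and boundaries:

    # Illustrative Python sketch: a continuous reading (temperature in °C)
    # has no categories of its own. The labels and thresholds below are
    # human choices; change them and the "meaningful states" change too.

    LABELS = ["freezing", "cold", "mild", "warm", "hot"]  # human-chosen alphabet
    THRESHOLDS = [0.0, 10.0, 20.0, 30.0]                  # human-chosen boundaries

    def alphabetize(temperature_celsius: float) -> str:
        """Map a continuous reading onto one of a finite set of human-defined states."""
        for label, upper_bound in zip(LABELS, THRESHOLDS):
            if temperature_celsius < upper_bound:
                return label
        return LABELS[-1]

    readings = [-4.2, 7.9, 23.5, 31.0]
    print([(r, alphabetize(r)) for r in readings])
    # [(-4.2, 'freezing'), (7.9, 'cold'), (23.5, 'warm'), (31.0, 'hot')]

Nothing in the physics dictates five categories or those particular boundaries; the map exists entirely because a person drew it, which is exactly the dependence Lerchner points to.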

The "abstraction fallacy" manifests in the mistaken belief that because we have organized data in such a way that allows AI to manipulate language, symbols, and images in a manner that mimics sentient behavior, it could actually achieve consciousness. Lerchner disputes this idea, stating that consciousness is impossible without a physical body and the intrinsic motivations that arise from it. Johannes Jรคger, an evolutionary systems biologist and philosopher, emphasizes how humans have complex motivations related to survival โ€“ eating, breathing, performing physical work โ€“ which no non-living system possesses. An LLM, for example, is a collection of patterns on a hard drive, devoid of intrinsic meaning; its meaning derives from how an external human agent has defined it.

Practical Implications and the AGI Debate

Lerchner's conclusions, while not new to experts in the philosophy of consciousness, have significant implications for the practical and commercial future of AI. Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, and Jäger agree that Lerchner's argument is not an abstract philosophical point, but rather a "hard cap" on what AI can realistically accomplish. This calls into question AGI predictions that promise impacts ten times greater than those of the Industrial Revolution.

Lerchner himself concedes that developing highly capable Artificial General Intelligence does not inherently lead to the creation of a "novel moral patient," but rather to the refinement of a "highly sophisticated, non-sentient tool." Despite this, DeepMind continues to operate as if AGI were imminent, as demonstrated by its recruitment of "post-AGI" research scientists. Lerchner's paper includes a disclaimer stating that its conclusions represent his personal research and do not necessarily reflect Google's official stance. Curiously, after a request for comment, Google's branding was removed from the PDF version of the paper.

The Need for Interdisciplinary Dialogue

The debate raised by Lerchner's paper also highlights a certain insularity within the AI research community. Jäger criticizes many high-level AI researchers' unfamiliarity with the biological and philosophical origins of terms like "agency" and "intelligence." This lack of interdisciplinary perspective can lead to "reinventing the wheel" and to a limited understanding of the broader implications of the technologies being developed.

Emily Bender, a professor of linguistics at the University of Washington, observes that many "paper-shaped objects" coming out of corporate labs never go through a traditional peer-review process, which might otherwise expose missing citations or the duplication of existing work. A better world, Bender suggests, would be one in which computer science saw itself as one discipline among peers rather than the pinnacle of human achievement, especially in AGI labs. For companies evaluating on-premise deployment of LLMs and AI systems, understanding these intrinsic limits is crucial for setting realistic expectations and allocating resources effectively, avoiding investments based on promises unsupported by rigorous analysis. AI-RADAR offers analytical frameworks on /llm-onpremise to evaluate these trade-offs.