DeepMind Paper: Why AI Can Simulate But Not Instantiate Consciousness

Published on April 20, 2026


The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

TL;DR: DeepMind researcher Alexander Lerchner argues that computational functionalism, the dominant framework holding that consciousness emerges from abstract causal structure regardless of physical substrate, rests on a category error. The paper draws a hard line between simulating experience and actually having it, and argues that no software architecture can cross that line.
