Why AGI Remains Distant: The Compute Bottleneck Beyond Layers 1–3

In October 2025, Andrej Karpathy posted a reflection that quietly reshaped how researchers think about the road to AGI. He wrote that each of the three existing layers of AI training—base model pretraining, supervised fine-tuning, and reinforcement learning—will remain part of the final recipe, but that “we need additional layers and ideas 4, 5, 6, …”