Abstract:
The idea that advances in the cognitive capacities of foundation models like LLMs will lead to a period of rapid, recursive self-improvement — an “intelligence explosion” or “technological singularity” — has recently come under sustained criticism from academic philosophers. I evaluate the extent to which this criticism undermines the argument for a singularity. I argue that, while the most extreme takeoff scenarios are highly improbable, the prospect that recursive self-improvement could produce safety-relevant changes in the capabilities of foundation models within a short span of time is credible and should be taken seriously in discussions of AI safety.