Date: April 26, 2024 (Friday)
Speaker: Prof Seth Lazar, Australian National University
Chair: Dr Frank Hong, The University of Hong Kong
Abstract:
The recent acceleration in public understanding of AI capabilities has been matched by growing concern—from presidents, industry leaders, scientists, and the wider public—about its potentially catastrophic, even existential risks. But at the same time, at least five of the seven most valuable companies globally are pursuing Artificial General Intelligence, the very technology viewed by many as posing such risks. This apparent disconnect raises questions: How should we think about catastrophic AI risk? And what, if anything, should we do?
In this talk, I will argue that we must differentiate between the risks posed by existing AI systems and their incremental improvements, and those contingent on a significant scientific breakthrough. We should prioritize the former not because their stakes are greater, but because understanding a technology is a prerequisite for mitigating its risks without incurring excessive costs. While we should not dismiss the risks posed by hypothetical future AI systems, our most prudent approach is to cultivate resilient institutions and adaptable research communities that can evolve as our knowledge expands.