Upcoming Event
Alignment
Abstract: The speaker will distinguish some different conceptions of alignment, exploring how each conception relates to safety and existential risk.
Does anything matter to an LLM?
Abstract: Central to the inquiry of cognitive science is the idea of agents for whom there is the possibility of both success and failure. If we think that LLMs are the sort of thing that falls within the domain of cognitive science, then we should also ask whether they are agents of this kind –
Aliens, Octopuses, and Robots, Again
Abstract: Since the 1960s, philosophers have seriously considered the possibility that psychological states could be multiply realizable not only in humans and other terrestrial creatures, but also in potential alien and artificial cognizers. For many it has become an accepted constraint on our theories of the nature of perception and cognition that they would
What remains of the singularity hypothesis?
Abstract: The idea that advances in the cognitive capacities of foundation models like LLMs will lead to a period of rapid, recursive self-improvement — an “intelligence explosion” or “technological singularity” — has recently come under sustained criticism by academic philosophers. I evaluate the extent to which this criticism successfully undermines the argument for a singularity,
Evaluating LLM Ethical Competence
Abstract: Existing approaches to evaluating LLM ethical competence place too much emphasis on the verdicts—of permissibility and impermissibility—that they render. But ethical competence doesn’t consist in one’s judgments conforming to those of a cohort of crowdworkers. It consists in being able to identify morally relevant features, prioritise among them, associate them with reasons and weave
Why ChatGPT Doesn’t Think: An Argument from Rationality (Co-authored with Zhihe Vincent Zhang, ANU)
Abstract: Can AI systems such as ChatGPT think? This paper presents an argument from rationality for the negative answer to this question. The argument is founded on two central ideas. The first is that if ChatGPT thinks, it is not rational, in the sense that it does not respond correctly to its evidence. The second
LLMs Can Never Be Ideally Rational
Abstract: LLMs have dramatically improved in capabilities in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various
Using ChatGPT to Improve Writing Skills
Date: May 3, 2024 (Friday)
Time: 14:00 – 16:00
Venue: CPD 2.42, HKU Centennial Campus
Speaker: Kathryn Goldstein, HKU
Registration: https://hku.au1.qualtrics.com/jfe/form/SV_cwDbZRfzjnQDbAq
This workshop will discuss plagiarism and ethical use of ChatGPT, but we'll also work on building up prompt engineering skills. Examples of specific skills students will learn, using prompt engineering: Use ChatGPT as a

