Upcoming Event

It probably doesn’t matter whether software can be made sentient

Abstract: Around 40% of philosophers believe that future AI systems will be conscious (Bourget and Chalmers 2023). Others have argued that the realization of phenomenal consciousness in conventional computer hardware is infeasible, nomologically impossible, or even metaphysically impossible (Block 2009; Godfrey-Smith 2016; Searle 1992; Shiller 2024; Tononi and Koch 2015). Whether or not the AI systems …

Desire homuncularism – Agency, ethical standing, and skin in the game

Abstract: Late in his career, Daniel Dennett changed his mind about minds. If his new view is correct, it has deep implications for both the moral standing of nascent AIs and for the existential danger they may pose. He suggests in his (2017) that genuine minds require a different kind of architecture than he previously …

Consciousness as a solution to a biological problem

Abstract: Much of the recent discourse on AI consciousness has centered on two questions: whether artificial consciousness is possible in principle, and whether current systems meet the requirements laid out by prominent theories. For many practical purposes, it matters more whether we are likely to see nontrivial numbers of conscious artificial minds as the technology matures. …

AI Zombies: Global Workspace Theory and Resource-Rational Analysis

Abstract: One of the more promising empirical theories of phenomenal consciousness is the Global Workspace Theory (GWT), which says that a mental representation becomes conscious when it enters a global workspace from which it can be broadcast to various specialized psychological modules [1-3]. There has been some discussion of GWT in connection with recent advancements …

The Fair Treatment of Unequal Claims: The Moral Importance of Injecting Randomness into Algorithmic Decision Systems

Abstract: One compelling approach treats value alignment as fundamentally a question of the fairness of the procedure for determining the principles that AI systems should align with (Gabriel 2020). This talk examines how a procedure should treat different stakeholders’ claims fairly, especially in multi-stakeholder cases where people have both …

A Matter of Principle? AI alignment as the Fair Treatment of Claims (co-authored with Geoffrey Keeling)

Abstract: The normative challenge of AI alignment centres upon what goals or values to encode in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer …

Evaluating AI Agents for Dangerous (Cognitive) Capabilities

Abstract: AI agents based on Large Language Models (LLMs) demonstrate human-level performance at some theory of mind (ToM) tasks (Kosinski 2024; Street et al. 2024). Here ToM is roughly the ability to predict and explain behaviour by attributing mental states to oneself and others. ToM capabilities matter for AI safety because, at least in humans, …
