Upcoming Event

April 26-27, 2025 – Workshop in Philosophy & AI at Peking University

  Peking University, Beijing Normal University, and The University of Hong Kong have jointly organised a workshop on Philosophy and AI, funded by the Research Funds for Excellent Interdisciplinary Research Groups at Beijing Normal University. Dates: April 26-27, 2025. Venue: Berggruen Institute China Center, No. 54 Yannan Garden, Peking University, Beijing. Host: Department of Philosophy […]


Artificial intelligence and destabilized moral concepts

Date: April 11, 2025 (Friday). Speaker: Professor Regina Rini, Associate Professor and Canada Research Chair in Philosophy of Moral and Social Cognition, Department of Philosophy, York University (PhD, New York University, 2011; B.A., Georgetown University, 2004). Research background in moral psychology, ethical theory, […]


What’s hidden inside predictively successful deep learning models?

Abstract: In recent work, I argued that there is an AI-specific no miracles argument (NMA) which is at least as plausible as the traditional NMA. In this talk, I will express two doubts about this claim, relative to the AI-specific NMA that I previously proposed. I will then develop a new AI-specific NMA in response, […]


Transparency of What?

Abstract: Despite its importance, the concept of algorithmic transparency has yet to be fully explicated. By asking what is transparent or opaque, we propose a comprehensive framework dividing transparency into four forms: use transparency, which discloses algorithm goals and uses; data transparency, which informs sources, processing, and data quality; model transparency, which explains how the […]


Digital Doppelgangers, Moral Deskilling, and the Fragmented Identity: A Confucian Critique

Abstract: Artificial intelligence (AI) systems are increasingly capable of learning from and mimicking individuals, as demonstrated by a fairly successful effort to replicate the attitudes and behaviors of individuals by generative AI with a 2-hour interview (see Park et al. 2024). This technical advancement has afforded the creation of increasingly indistinguishable (online, digital) doubles of […]


Recurrence, Rational Choice, and the Simulation Hypothesis (co-authored with Simon Goldstein)

Abstract: According to the doctrine of recurrence, we are reborn after our apparent death to live our life again. This paper develops a new doctrine of recurrence. We make three main claims. First, we argue that the simulation hypothesis increases the chance that we will recur. Second, we argue that the chance of recurrence affects […]


Algebraic and Geometrical Structures of Concepts

Abstract: Deep neural networks are known to construct internal representations to process and generalize information. Understanding the structure of these representations is crucial not only for improving machine learning models but also for aligning them with human cognitive representations—namely, the concepts we use in everyday reasoning and scientific inquiry. This study examines how mathematical frameworks […]


AI, Normality, and Oppressive Things

Abstract: While it is well known that AI systems might bring about unfair social impacts by influencing social schemas, much attention has been paid to instances where the content presented by AI systems explicitly demeans marginalized groups or reinforces problematic stereotypes. This paper urges critical scrutiny of instances that shape social schemas in subtler ways. Drawing from […]


Counterfactual Explanations in AI: A Tension between ‘Causality’ and ‘Plausibility’

Abstract: Making AI explainable has become an increasingly urgent demand, and one of the popular approaches is to generate so-called counterfactual explanations. In this talk, I attempt a philosophical critique of the key aspects of this approach, and raise an important challenge arising from a tension between two constraints on a good counterfactual explanation in […]

