Abstract:
Deep neural networks are known to construct internal representations in order to process and generalize information. Understanding the structure of these representations is crucial not only for improving machine learning models but also for aligning them with human cognitive representations, namely the concepts we use in everyday reasoning and scientific inquiry. This study examines how the mathematical frameworks used to analyze machine representations can inform philosophical theories of concepts. In particular, we explore the neo-Kantian view that concepts impose constraints on possibilities, shaping how information is structured and processed. These constraints manifest in algebraic and geometrical forms, corresponding to conceptual lattices and representational manifolds, respectively. Finally, we consider possible ways to bridge these algebraic and geometrical perspectives, aiming toward a more integrated understanding of conceptual structure in both artificial and human cognition.
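
As an illustrative sketch only (not part of the study itself), the algebraic side can be made concrete via Formal Concept Analysis, one standard formalization of conceptual lattices: a concept is a pair of an object set (extent) and the attribute set those objects share (intent), closed under mutual derivation, and all such pairs ordered by extent inclusion form a complete lattice. The toy object-attribute context and all names below are hypothetical.

```python
# A minimal Formal Concept Analysis sketch (hypothetical toy data).
# A formal concept is a pair (extent, intent) that is closed under
# the derivation operators between objects and attributes.
from itertools import chain, combinations

# Toy object-attribute context: which attributes each object has.
context = {
    "sparrow":  {"animal", "flies"},
    "penguin":  {"animal"},
    "airplane": {"flies"},
}
objects = set(context)
attributes = set().union(*context.values())

def intent(objs):
    """Attributes shared by every object in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

# Enumerate all formal concepts by closing every subset of objects.
concepts = set()
for objs in chain.from_iterable(
        combinations(sorted(objects), r) for r in range(len(objects) + 1)):
    ext = extent(intent(set(objs)))  # closure of the object set
    concepts.add((frozenset(ext), frozenset(intent(ext))))

# Ordered by extent inclusion, these concepts form a complete lattice.
for ext, itt in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "|", sorted(itt))
```

Running the sketch prints four concepts, from the most specific (the objects bearing every attribute) to the most general (all objects), exhibiting in miniature how concepts act as constraints carving up a space of possibilities.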
