Understanding robots. Contributions from cognitive neuroscience and philosophy
January 30, 2025 – Building U7, room 2104 (second floor), University of Milano-Bicocca, Milan
When we interact with technological artefacts such as AI systems and social robots, we often attribute mental states such as goals and desires to them. As the late Daniel Dennett famously argued, mental state attribution is in some cases a good strategy for explaining and predicting their behaviour. Recent empirical studies show that mental state attribution significantly shapes the way we interact with robots and AI systems, as well as our overall acceptance of these technologies. What happens in the human brain when we ‘humanise’ robots, and what kind of mind do we attribute to them? These questions will be discussed in the seminar with the help of a cognitive neuroscientist and a philosopher of science.

This seminar is organised by CISEPS (Center for Interdisciplinary Studies in Economics, Psychology and Social Sciences) and by the RobotiCSS Lab (Laboratory of Robotics for the Cognitive and Social Sciences) of the University of Milano-Bicocca.
14:00 | Investigating interactions with humanoids from a Social Cognitive Neuroscience perspective – Thierry Chaminade

For decades now, it has been claimed that humanoid robots are due to become part of our everyday lives. While progress is still being made on technical practicalities, design and affordability, it is largely taken for granted that these agents will be accepted as social partners, since their human shape and behaviour are considered sufficient to elicit the same mechanisms as social interactions with fellow humans. Yet one can doubt these premises. Not only is their artificial nature largely visible, which may be sufficient to elicit a design stance instead of an intentional stance in the framework of the late philosopher Daniel Dennett, but human history is also full of examples, such as slavery, in which some human groups dehumanized other human groups as a consequence of their distinctiveness. Investigating the neural bases of increasingly complex interactive behaviours, from the mere observation of emotional expressions to natural conversations, has allowed a better understanding of the cognitive phenomena that are preserved or, on the contrary, impaired when human-robot interaction is compared with human-human interaction. Together, the results not only provide answers regarding how humans react to artificial agents; they are also informative regarding the neural bases of complex human social behaviours.
15:00 | What kind of mind do we attribute to robots? – Edoardo Datteri

People often talk about (and to) robots and AI systems in mentalistic terms. This phenomenon is often referred to as the ‘attribution’ of mental states to robots and AI systems, and has been empirically investigated under this label in several cognitive science and neuroscience studies. One question that is rarely, if ever, explicitly addressed in these studies is what it means to ‘attribute’ mental states to AI systems or robots. In fact, there are different forms of mental state attribution. First, the attribution may be ‘superficial’ (consisting of the mere production of mentalistic verbal utterances) or it may be ‘deep’ (rooted in the subject’s thoughts about the robot). Second, there may be different ‘deep’ forms of mental state attribution, corresponding to different ontological stances taken by the subject towards the system. Specifically, people may adopt forms of psychological realism, eliminativism, reductionism, fictionalism, agnosticism, or instrumentalism about the mind of the robot with which they are interacting (where the instrumentalist option is inherent in the so-called intentional stance discussed by Dennett and often referred to in the empirical literature on human-machine interaction). These alternatives illustrate different ways of ‘humanising’ or ‘dehumanising’ robots and AI systems from a psychological point of view, and can usefully be taken into account when studying how people perceive and understand technological artefacts and computational systems.