Extensive research has been conducted on the attribution of intentionality, mind, and emotions to robots. However, the majority of present-day studies do not provide a detailed analysis of the explanation structures that underlie interactions between humans and robots. We believe that a finer-grained understanding of how people interpret and explain the behaviour of robots is essential to unravelling the dynamics of human-robot interaction.


About the project:


The goal of the HERB project is to elucidate how people explain the behaviour of the robots they interact with. HERBs (Human Explanations of Robotic Behaviour) will be analysed along three dimensions: the characteristics of the robotic behaviour to be explained (the explanandum), the set of theoretical assumptions and statements about the robot’s functioning (the explanans), and the relationship between explanandum and explanans.

Analysing the explanans reveals assumptions about the robot’s architecture, while examining the explanans-explanandum relationship uncovers the explanatory pattern employed. This analysis aims to illuminate why some HERBs foster understanding, and how they can guide human-robot interactions predictively or interventionally.

Furthermore, the project aims to investigate teachers’ explanations of robotic behaviour: what internal structures do teachers attribute to robots? This question calls for an examination of how the attributed internal structures depend on the teachers’ educational background and on the characteristics of the robot itself. By analysing the assumptions underlying teachers’ HERBs, this line of research seeks to understand the cognitive models teachers employ to make sense of robotic systems. The findings can inform the design of educational robots that align with teachers’ mental models, facilitating effective human-robot interaction in educational contexts.

Central Questions:


  • What theoretical vocabulary is used to describe robotic behaviours? What are people’s mental models of robotic behaviours?
  • How do people describe the mechanisms governing the behaviour of robots in ordinary interactions?
  • What are people’s dispositions when giving explanations? For instance, are people more inclined towards teleological or causal-mechanistic explanations?
  • In the case of mentalistic explanations, what kind of ‘mind’ and mental entities (e.g., propositional attitudes, information-processing modules, …) are attributed to robots?
  • What makes something a good explanation of a robot?

To address these questions, one must go beyond the study of mental-state or trait attribution and reconstruct the finer-grained explanations of robotic behaviour. We believe that reconstructing human explanations of robotic behaviour (HERBs) is crucial for comprehending the dynamics of human-robot interaction, designing sociable robots, addressing robo-ethical concerns, and informing the design of cognitive architectures for social robots.

While a vast philosophical literature on explanation and understanding exists, it has largely been neglected in studies of HERBs. One aim of this project is to incorporate insights from philosophy, psychology, and cognitive science on how people generate, select, evaluate, and communicate explanations. Understanding HERBs is particularly significant in the context of educational robotics and the potential for robots to enhance students’ technological literacy.


People:

UNIMIB (RobotiCSS Lab):

Edoardo Datteri

As a philosopher of science, I primarily work on the methodological foundations of biorobotics, artificial intelligence, and cognitive science. More specifically, I reconstruct and analyse the validity of methodologies that use robots and bionic systems, as well as robots interacting with animals and humans, to study the behaviour and cognition of living systems.
My interests also include the role of robots as tools to intervene in, and theorize about, the mechanisms of social cognition, again from a methodological perspective, as well as the methodological foundations of educational robotics.

edoardo.datteri@unimib.it

Silvia Larghi

I have a background in computer science and engineering. After an internship in robotics at the JRC – Ispra (VA), I worked in software engineering, participating in EC-funded international research projects. I then taught technology in school, where I coordinated the team for digital innovation. For several years I have been designing and running educational robotics and artificial intelligence laboratories in schools, as well as training courses for teachers.
My research interests concern the philosophy of robotics and artificial intelligence, the philosophy of cognitive science, and human-robot interaction.

silvia.larghi@studio.unimib.it

Nicola Zagni

He graduated in Philosophy at the University of Bologna with a thesis on the epistemology of machine learning, and has a background as a robotics educator in K-12 schools. His research interests include scientific explanation, the intersection of cognitive science and artificial intelligence, robotics, and the ethics of AI.

nicola.zagni@unimib.it