Dr Robin Read from the University of Plymouth will present his research on “A Study of Non-Linguistic Utterances for Social Human-Robot Interaction” at the School of Computer Science Research Seminar. Robin will shed some light on whether and how sounds like R2D2’s beeps can serve as a powerful means of communication for a robot.
Time/Date
Wed, 7 May, 15:00 – 16:00
Venue
MHT Building, MC0024
Abstract
The world of animation has painted an inspiring image of what the robots of the future could be. It has shown us that robots may come in all shapes and sizes, and can communicate in a wide variety of ways, ranging from natural language and body and facial gestures to less conventional channels such as colour and sound. In this talk we are specifically interested in robotic sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them.
I will present a series of experiments aimed at understanding how NLUs can be utilised by a social robot to convey affective meaning to people both young and old, and what factors impact on the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people’s affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.
These results show us that the sounds made by robots are more than just noise. They are seen as rich social displays that hold meaningful content, and they can easily be implemented in our robots for a wide variety of applications.