All posts by Marc Hanheide

SoCS Research Seminar: Robots making (non-linguistic) sounds to communicate – R2D2 and other beeps

Dr Robin Read

Dr Robin Read from the University of Plymouth will present his research on “A Study of Non-Linguistic Utterances for Social Human-Robot Interaction” at the School of Computer Science Research Seminar. Robin will shed some light on the questions of whether and how sounds like R2D2’s beeps can be a powerful means of communication for a robot.

Time/Date

Wed, 7 May, 15:00 – 16:00

Venue

MHT Building, MC0024

Abstract

The world of animation has painted an inspiring image of what the robots of the future could be. It has shown us that robots may come in all shapes and sizes, and can use a wide variety of ways to communicate, ranging from natural language and body and facial gestures to more unusual channels such as colour and sound. In this talk we are specifically interested in robotic sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them.

I will present a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors affect the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters, as people perceive NLUs differently across different robots, and not always in the desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people’s affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.

These results show us that sounds made by robots are more than just noise. They are seen as rich social displays that hold meaningful content, and they can easily be implemented on our robots for a wide variety of applications.
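As a toy illustration of how little machinery such utterances require, here is a minimal Python sketch, not the synthesis method from the talk: it generates an R2D2-style beep sequence from an assumed valence/arousal pair and writes it to a WAV file. The affect-to-sound mappings and all parameter values are invented for illustration.

```python
# Minimal sketch (not from the talk): synthesize a short R2D2-style
# beep sequence whose pitch and tempo are driven by an assumed
# valence/arousal pair. All parameter mappings here are illustrative.
import math
import struct
import wave

RATE = 44100  # samples per second

def beep(freq_hz, dur_s):
    """A single sine beep with a short linear fade to avoid clicks."""
    n = int(RATE * dur_s)
    fade = max(1, n // 20)
    samples = []
    for i in range(n):
        amp = min(1.0, i / fade, (n - i) / fade)  # fade-in/out envelope
        samples.append(amp * math.sin(2 * math.pi * freq_hz * i / RATE))
    return samples

def nlu(valence, arousal, n_beeps=5):
    """Map affect to sound: higher arousal -> faster, higher-pitched
    beeps; positive valence -> rising contour, negative -> falling."""
    base = 600 + 600 * arousal   # base pitch in Hz
    step = 80 * valence          # pitch change per beep
    dur = 0.18 - 0.08 * arousal  # beep duration in seconds
    out = []
    for k in range(n_beeps):
        out += beep(base + k * step, dur)
        out += [0.0] * int(RATE * dur * 0.4)  # short gap between beeps
    return out

# Write a "happy, excited" utterance to a WAV file.
data = nlu(valence=0.8, arousal=0.7)
with wave.open("nlu.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(32767 * s)) for s in data))
```

Higher arousal shortens and raises the beeps, while the sign of the valence gives the sequence a rising or falling contour, a crude stand-in for the basic affect prototypes discussed above.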

L-CAS robot “Linda” competing in Robot Marathon

Researchers at the Lincoln Centre for Autonomous Systems are studying how mobile robots can learn from long-term experience to provide services in security and care scenarios. As part of the European STRANDS project, they contribute to the development of robots that are able to operate autonomously, without the need for human intervention, in ordinary indoor environments like offices and homes over long periods of time. The four-year project, involving six academic partners from Lincoln, Birmingham, Leeds, Vienna, Aachen, and Stockholm, started in April this year. The EU Robotics Week (25/11/13 until 29/11/13) is the project’s first major milestone for showing its robots working continuously and autonomously at the different sites. Lincoln’s robot “Linda” is patrolling her surroundings 24/7 during this week as part of the “STRANDS robot marathon”.

Linda faces the challenge of safely and reliably navigating an environment that is populated by people and not customised to a robot’s needs. She will have to cope with changes that occur, such as lights being turned on and off, objects being moved about, and people walking around. Throughout the week Linda will be streaming live to the internet, where the public can follow her on her patrol routes every day and night. Linda can be seen patrolling, charging autonomously, and visiting checkpoints that have been defined by the researchers. While this is a first step towards autonomous robots that can assist and help people, the STRANDS project ultimately aims to deploy robots like Linda to other sites, where they will complement human guards to increase security and help staff in care facilities, facilitated by the project partners G4S and AAF, respectively.
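For a sense of what “patrolling, charging autonomously, and visiting checkpoints” amounts to at its core, here is a deliberately simplified Python sketch of such a long-term patrol loop. It is a sketch only: the real STRANDS system is built on ROS and far more sophisticated navigation and scheduling software, and the checkpoint names, battery model, and threshold below are all invented.

```python
# Much-simplified sketch of a long-term patrol loop of the kind Linda
# runs (the real STRANDS system is ROS-based and far more involved).
# Checkpoint names and the battery model are illustrative only.
import itertools
import random
import time

CHECKPOINTS = ["Reception", "Lab", "Corridor", "Kitchen"]  # assumed names
LOW_BATTERY = 0.25  # assumed threshold for returning to the dock

class Robot:
    def __init__(self):
        self.battery = 1.0

    def navigate_to(self, goal):
        print(f"Navigating to {goal} ...")
        self.battery -= random.uniform(0.05, 0.15)  # stand-in for real drain
        time.sleep(0.1)                             # stand-in for travel time

    def dock_and_charge(self):
        print("Battery low - returning to charging station.")
        self.navigate_to("ChargingStation")
        self.battery = 1.0

robot = Robot()
for checkpoint in itertools.cycle(CHECKPOINTS):  # patrol 24/7
    if robot.battery < LOW_BATTERY:
        robot.dock_and_charge()  # charge autonomously, then resume patrol
    robot.navigate_to(checkpoint)
    print(f"Checkpoint {checkpoint} visited; battery at {robot.battery:.0%}")
```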

Lincoln’s Linda robot is competing well so far (she started at 10am this morning) and reports her progress on her very own marathon website, where you can follow her live (including a video stream and a 3D WebGL visualisation of her environment). You can also follow her on Twitter.

Research Presentation by Prof Ales Leonardis on Wed 29/05, 4pm

Prof Ales Leonardis (University of Birmingham) will present his research on “Combining compositional shape hierarchy and multi-class object taxonomy for efficient object categorisation” in a joint Psychology and Computer Science seminar on Wed, 29th May at 4pm (in room MC1001). All students and staff are invited.

Here is the abstract of Ales’ presentation:

Visual categorisation has been an area of intensive research in the vision community for several decades. Ultimately, the goal is to efficiently detect and recognize an increasing number of object classes. The problem entangles three highly interconnected issues: the internal object representation, which should compactly capture the visual variability of objects and generalize well over each class; a means for learning the representation from a set of input images with as little supervision as possible; and an effective inference algorithm that robustly matches the object representation against the image and scales favorably with the number of objects. In this talk I will present our approach which combines a learned compositional hierarchy, representing (2D) shapes of multiple object classes, and a coarse-to-fine matching scheme that exploits a taxonomy of objects to perform efficient object detection.
Our framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each exhibiting a high degree of shape variability. At the top level of the vocabulary, the compositions represent the whole shapes of the objects. The vocabulary is learned layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned. The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another.
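The following Python sketch illustrates the layer-wise idea in miniature (it is not the authors’ code; the coarse offset quantisation, the support threshold, and the toy data are invented): frequently co-occurring spatial configurations of parts in one layer are promoted to compositions in the next.

```python
# Illustrative sketch (not the authors' code) of the layer-wise idea:
# spatial configurations of parts that recur frequently across images
# are promoted to compositions at the next layer. The quantisation and
# threshold are invented for illustration.
from collections import Counter
from itertools import combinations

def learn_layer(part_detections, min_support=3):
    """part_detections: one list per image of (part_id, x, y) tuples.
    Returns frequent pairwise configurations as next-layer compositions."""
    config_counts = Counter()
    for detections in part_detections:
        for (p, px, py), (q, qx, qy) in combinations(detections, 2):
            # Quantise the relative offset coarsely; the real method
            # models spatial relations far more carefully.
            offset = (round((qx - px) / 10), round((qy - py) / 10))
            config_counts[(p, q, offset)] += 1
    # Keep only configurations seen often enough across images.
    return [cfg for cfg, n in config_counts.items() if n >= min_support]

# Toy data: three "images", each a set of contour-fragment detections.
images = [
    [("edgeA", 0, 0), ("edgeB", 20, 0)],
    [("edgeA", 5, 5), ("edgeB", 25, 5)],
    [("edgeA", 2, 9), ("edgeB", 22, 9)],
]
print(learn_layer(images))  # [('edgeA', 'edgeB', (2, 0))] - one composition
```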
However, in order for recognition systems to scale to a larger number of object categories, and achieve running times logarithmic in the number of classes, building visual class taxonomies becomes necessary. We propose an approach for speeding up the recognition times of multi-class part-based object representations. The main idea is to construct a taxonomy of constellation models, cascaded from coarse to fine resolution, and to use it in recognition with an efficient search strategy. The structure and depth of the taxonomy are built automatically in a way that minimizes the number of expected computations during recognition, by optimizing the cost-to-power ratio. The combination of the learned taxonomy with the compositional hierarchy of object shape achieves efficiency both with respect to the representation of the structure of objects and in terms of the number of modeled object classes. The experimental results show that the learned multi-class object representation achieves detection performance comparable to current state-of-the-art flat approaches, with both faster inference and shorter training times.
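To make the coarse-to-fine idea concrete, here is a hypothetical Python sketch (not the paper’s implementation; the models, scores, and threshold are invented) in which a cheap coarse model at each internal node of the taxonomy can prune a whole subtree, so that recognition cost grows roughly logarithmically with the number of classes.

```python
# Illustrative sketch (not the paper's implementation) of coarse-to-fine
# search over a class taxonomy: a cheap coarse test at each internal
# node prunes entire subtrees. Models, scores, and the threshold are
# invented for illustration.

class Node:
    def __init__(self, model, children=None, label=None):
        self.model = model            # callable: image -> match score
        self.children = children or []
        self.label = label            # set only on leaves (object classes)

def detect(node, image, threshold=0.5):
    """Return the labels of all leaf classes whose whole root-to-leaf
    path of increasingly fine models matches the image."""
    if node.model(image) < threshold:
        return []  # prune this entire subtree without evaluating it
    if not node.children:
        return [node.label]
    hits = []
    for child in node.children:
        hits += detect(child, image, threshold)
    return hits

# Toy taxonomy: a coarse "animal silhouette" model splits into species.
taxonomy = Node(
    model=lambda img: img.get("animal", 0.0),
    children=[
        Node(model=lambda img: img.get("cow", 0.0), label="cow"),
        Node(model=lambda img: img.get("horse", 0.0), label="horse"),
    ],
)
image = {"animal": 0.9, "cow": 0.8, "horse": 0.2}  # fake match scores
print(detect(taxonomy, image))  # ['cow'] - the horse branch is rejected
```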

Another Job Opportunity in Robotics

The Lincoln Centre for Autonomous Systems (L-CAS) is looking for another Research Assistant to work on the STRANDS project. Consider applying if you are excited about mobile robots and long-term behaviour, and are looking for a great opportunity to also pursue a PhD. We need a great communicator, robot programmer, and system integrator, with ambition and dedication to excellent research. Read the details and apply online for this post on “Intelligent Long-term Behaviour in Mobile Robotics”.
