Dr. Heriberto Cuayáhuitl, who will soon be joining the Lincoln School of Computer Science as a Senior Lecturer in L-CAS, will be presenting in our research seminar series on Fri 22/1/16 at 1pm. His talk, titled “Autonomous Learning for Interactive Agents”, will be held in room MB1020. This is a great opportunity for staff and students alike to meet their colleague and lecturer-to-be.
Title: Autonomous Learning for Interactive Agents
Robots that interact with humans are still confined to controlled spaces, such as lab environments, where they conduct highly pre-specified tasks in interaction with recruited and cooperative users. Among the obstacles that restrict real-world applicability are a heavy reliance on domain-specific pre-programming and on training tasks contrived for the purpose of robot training rather than arising from the real world. In this talk, I will present a research direction on autonomous learning that aims to alleviate these problems and push interactive robots towards wider usability. The core of my research lies in multi-task reinforcement learning, which helps agents to understand and optimise their behaviour by interacting with humans and learning from feedback and examples. I will briefly present three applications of this autonomous learning framework: (1) a situated agent that learns to guide people in indoor environments using a divide-and-conquer approach, (2) a conversational robot that learns to play educational games by interacting with children, and (3) a strategic agent that learns trading negotiations using deep reinforcement learning. I will conclude by discussing directions for future research that further increase the level of autonomy of interactive agents for their application in real-world scenarios.
Dr Heriberto Cuayáhuitl is a Research Fellow in the School of Mathematical and Computer Sciences at Heriot-Watt University, Edinburgh Campus. He received a PhD from the University of Edinburgh in 2009, and has been a postdoctoral researcher at the University of Bremen and the German Research Centre for Artificial Intelligence (DFKI). His research interest is in machine learning for interactive systems and robots, and he has published 60 research papers in this area. He is lead organiser of the international workshop series on Machine Learning for Interactive Systems (MLIS), and has been guest editor of the journals ACM Transactions on Interactive Intelligent Systems and Elsevier Computer Speech and Language.
The School of Computer Science is pleased to welcome Prof Nick Taylor (from Heriot-Watt University) for a research talk as part of the School’s research seminar series. Prof Taylor will be presenting current research from “The Edinburgh Centre for Robotics”.
Fri 27/11/2015, 10am
David Chiddick Building, Room BL1105 (1st Floor)
The Edinburgh Centre for Robotics harnesses the potential of 30 world leading investigators from 12 cross-disciplinary research groups and institutes across the Schools of Engineering & Physical Sciences and Mathematical & Computer Sciences at Heriot-Watt University and the Schools of Informatics and Engineering at the University of Edinburgh. Our research focuses on the interactions amongst robots, people, environments and autonomous systems, designed and integrated for different applications, scales and modalities. We aim to apply fundamental theoretical methods to real-world problems on real robots solving pressing commercial and societal needs. The Centre offers a 4 year PhD programme through the EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems and hosts the Robotarium national UK robotics facility. http://www.edinburgh-robotics.org/ https://www.facebook.com/edinburghcentreforrobotics @EDINrobotics
Nick Taylor is a Professor of Computer Science at Heriot-Watt University and a Deputy Director of the Edinburgh Centre for Robotics. He was Head of Computer Science from 2008 to 2014 and leads the Pervasive, Ubiquitous and Mobile Applications (PUMA) Lab, which he formed in 2010. He has been involved in robotics and machine learning research for over three decades, most recently with a particular interest in the personalisation of autonomous systems for pervasive environments. Nick took his A-levels at Lincoln Christ’s Hospital School and then studied at Cardiff, London and Nottingham before joining Heriot-Watt University and settling in Midlothian. http://www.hw.ac.uk/schools/mathematical-computer-sciences/staff-directory/nicholas-taylor.htm http://www.macs.hw.ac.uk/puma/
Dr Astrid Weiss, from Technical University Vienna, will be presenting in the SoCS research seminar. Her work is at the crossroads of robotics, computer science, Human-Computer Interaction, and the social sciences, investigating robotic applications in public space, elderly care, and factory settings.
This research seminar is open to attendees across the university, and will be of particular interest to computer science, engineering, and the social sciences.
Users in focus – Creating service robots for and with people
User involvement is a widely accepted principle in the development of usable and acceptable technology. However, it is still a vague approach in the research field of human-robot interaction. I will share my experiences on the nature of user involvement and how it can be integrated into the development of service robots, providing examples from different contexts (elderly care, public space, factory environments, etc.) and user groups (children, older adults, naive users, expert users, etc.). I will present reflections from a social scientist working on human-robot interaction, drawn from several years of user studies and field work.
Astrid Weiss is a postdoctoral research fellow in HRI at the Vision4Robotics group at the ACIN Institute of Automation and Control at Vienna University of Technology (Austria). Her current research focuses on Human-Robot Cooperation in vision-based tasks and service robots for older adults. Her research is inspired by Theory of Mind and the approach of transferring findings from human-human studies to human-robot interaction in order to improve intuitiveness and acceptance. Her general research interests are user-centered design and evaluation studies for Human-Computer Interaction and Human-Robot Interaction, with a focus on in-the-wild studies and controlled experiments. She is especially interested in the impact technology has on our everyday life and what makes people accept or reject technology. Before her position in Vienna she was a postdoc researcher at the HCI & Usability Unit of the ICT&S Center, University of Salzburg, Austria, and at the Christian Doppler Laboratory on “Contextual Interfaces” at the University of Salzburg. Astrid holds a master’s degree in sociology and a PhD in social sciences from the University of Salzburg. During her studies she specialized in methodologies of empirical social research and applied statistics. From September 2011 until January 2012 she was on a short-term sabbatical at the Intelligent Systems Lab, University of Amsterdam, and the HMI group, University of Twente, to work with Vanessa Evers on cross-cultural studies in Human-Robot Interaction.
Dr Robin Read from the University of Plymouth will present his research on “A Study of Non-Linguistic Utterances for Social Human-Robot Interaction” at the School of Computer Science Research Seminar. Robin will shed some light on whether, and how, sounds like R2D2’s beeps can be a powerful means of communication for a robot.
Wed, 7 May, 15:00 – 16:00
MHT Building, MC0024
The world of animation has painted an inspiring image of what the robots of the future could be. It has shown us that robots may come in all shapes and sizes, and can use a wide variety of ways to communicate, ranging from natural language, body and facial gestures, to more unique means such as colour and sound. In this talk we are specifically interested in robotic sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them.
I will present a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors impact on the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people’s affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.
These results show us that sounds made by robots are more than just noise. They are seen as rich social displays that hold meaningful content, and they can easily be implemented in our robots for a wide variety of applications and utilities.