Tag Archives: research seminar

Robotics Research Seminar 24/5/17: “Making Robust SLAM Solvers for Autonomous Mobile Robots”

We invite everybody to attend the robotics research seminar, organised by L-CAS, on Wednesday 24/5/2017:

Dr Giorgio Grisetti, DIAG, University of Rome “Sapienza”:

Making Robust SLAM Solvers for Autonomous Mobile Robots

  • WHERE: AAD1W11, Lecture Theatre (Art, Architecture and Design Building), Brayford Pool Campus
  • WHEN: Wednesday 24th May 2017, 3:00 – 4:00 pm

ABSTRACT:

In robotics, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it.
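To make the formulation concrete, here is a minimal toy sketch (our illustration, not the speaker’s software) of the least-squares, graph-based view behind modern SLAM back-ends: a few 1-D robot poses, odometry constraints between consecutive poses, and one loop-closure constraint that contradicts the accumulated odometry. A couple of Gauss-Newton steps spread the contradiction consistently over the whole trajectory.

```python
# Toy 1-D pose-graph SLAM (hypothetical illustration): four poses along
# a corridor, odometry constraints between consecutive poses, and one
# loop closure that contradicts the accumulated odometry.
import numpy as np

# (i, j, z): pose j is measured to sit z metres ahead of pose i.
constraints = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 1.0),
               (0, 3, 2.8)]            # loop closure vs. 3.1 m of odometry
x = np.zeros(4)                        # initial guess for the four poses

for _ in range(5):                     # Gauss-Newton iterations
    H = np.zeros((4, 4))               # approximate Hessian  J^T J
    b = np.zeros(4)                    # gradient             J^T e
    for i, j, z in constraints:
        e = (x[j] - x[i]) - z          # residual of this constraint
        H[i, i] += 1.0; H[j, j] += 1.0 # Jacobian is -1 at i, +1 at j
        H[i, j] -= 1.0; H[j, i] -= 1.0
        b[i] -= e
        b[j] += e
    H[0, 0] += 1.0                     # gauge fix: anchor the first pose
    x += np.linalg.solve(H, -b)

print(x)  # the 0.3 m disagreement is spread evenly over all constraints
```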

SLAM is an essential enabling technology for building truly autonomous robots that can operate in an unknown environment. The last three decades have seen substantial research in the field and modern SLAM systems are able to cope easily with operating conditions that in the past were regarded as challenging if not impossible to deal with.

This might support the claim that SLAM is a closed problem. However, a closer look at the contributions presented in the most relevant conferences and journals in robotics reveals that papers on SLAM are still numerous and the community is large. Would this be the case if an off-the-shelf solution that works all the time were available?

Non-experts who approach the problem, or simply want to get one of the state-of-the-art systems running, often encounter difficulties and obtain performance far from that reported in the papers. This is usually because the person using the system is not the person who designed it. An open-box approach that aims to solve these problems by modifying an existing pipeline is often hard to pursue due to the complexity of modern SLAM systems.

In this talk we will give an overview of the history of SLAM and outline some of the challenges in designing robust SLAM systems and, most importantly, in building robust SLAM solvers.

Furthermore, we will present PRO-SLAM (SLAM from a programmer’s perspective), a deliberately simple open-source pipeline that competes with state-of-the-art stereo visual SLAM systems while focusing on simplicity to support teaching.

https://gitlab.com/srrg-software/srrg_proslam

Socially interactive robots to support autistic children

Technology supports and aids people with a wide variety of disabilities every single day, enriching their lives as much as possible. Autistic children can now receive communication support from robots.

MARC

Autism is a lifelong developmental disability that affects how a person communicates with, and relates to, other people. It also affects how they make sense of the world around them.

In our latest School of Computer Science Research Seminar we look at the ‘Socially Interactive Robotic Framework for Communication Training for Children with Autism’ and how robotic communication can aid these children’s skills and behaviour.

Come along on 4th July at 1pm in MC3108 to hear Dr Xiaofeng Liu give an insightful FREE talk on this very interesting and topical subject.

Abstract:

Social robots are often employed to assist children with Autism Spectrum Disorder (ASD) in communication, education and therapeutic training. Many studies have shown that interventions involving social robots can improve educational and therapeutic outcomes.

In this study, we record gaze-based child-robot interaction to evaluate each child’s engagement, which enables us to design specific educational or therapeutic items for each child. The platform is built from a NAO humanoid robot and a depth camera that captures the child’s actions and detects their gaze. Pilot tests have shown that our framework helps therapists design appropriate and personalised training courses for each child.
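As a rough illustration of how such a gaze stream could be turned into an engagement measure, here is a hypothetical sketch (the function name, angular threshold and geometry are our assumptions, not the authors’ method): engagement is scored as the fraction of frames in which the child’s gaze direction falls within a small angular window around the robot.

```python
# Hypothetical engagement measure from a gaze stream (names, threshold
# and geometry are our assumptions, not the authors' method).
import numpy as np

def engagement_score(gaze_dirs, robot_dir, max_angle_deg=15.0):
    """gaze_dirs: (N, 3) unit gaze vectors, one per camera frame.
    robot_dir: unit vector from the child's head towards the robot.
    Returns the fraction of frames with gaze on the robot."""
    cos_thresh = np.cos(np.radians(max_angle_deg))
    on_robot = gaze_dirs @ robot_dir >= cos_thresh   # per-frame test
    return on_robot.mean()

# Toy usage: 60 frames looking roughly at the robot, 40 looking away.
rng = np.random.default_rng(0)
towards = np.tile([0.0, 0.0, 1.0], (60, 1))
away = rng.normal(size=(40, 3))
away /= np.linalg.norm(away, axis=1, keepdims=True)
gaze = np.vstack([towards, away])
print(engagement_score(gaze, np.array([0.0, 0.0, 1.0])))  # ~0.6
```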

Bio:

Xiaofeng Liu received a Ph.D. degree in biomedical engineering from Xi’an Jiaotong University, Xi’an, China, in 2006. From 2008 to 2011, he held a post-doctoral position with the Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University. Since 2011 he has been with the College of IoT Engineering, Hohai University, Changzhou, where he is currently a full-time Professor and the Vice Director of the Changzhou Key Laboratory of Robotics and Intelligent Technology. From 2013 to 2014 he was a visiting professor at University College London, UK. His current research interests focus on nature-inspired navigation, human-robot interaction, and neural information processing.

All are welcome.

SoCS Research Seminar Series on 27/11/2015: Prof Nick Taylor (HWU)


The School of Computer Science is pleased to welcome Prof Nick Taylor (from Heriot-Watt University) for a research talk as part of the School’s research seminar series. Prof Taylor will be presenting current research from “The Edinburgh Centre for Robotics”.

 

When?

Fri 27/11/2015, 10am

Where?

David Chiddick Building, Room BL1105 (1st Floor)

Abstract:

The Edinburgh Centre for Robotics harnesses the potential of 30 world-leading investigators from 12 cross-disciplinary research groups and institutes across the Schools of Engineering & Physical Sciences and Mathematical & Computer Sciences at Heriot-Watt University and the Schools of Informatics and Engineering at the University of Edinburgh. Our research focuses on the interactions amongst robots, people, environments and autonomous systems, designed and integrated for different applications, scales and modalities. We aim to apply fundamental theoretical methods to real-world problems on real robots, solving pressing commercial and societal needs. The Centre offers a four-year PhD programme through the EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems and hosts the Robotarium national UK robotics facility.
http://www.edinburgh-robotics.org/
https://www.facebook.com/edinburghcentreforrobotics
@EDINrobotics

Biography

Nick Taylor is a Professor of Computer Science at Heriot-Watt University and a Deputy Director of the Edinburgh Centre for Robotics. He was Head of Computer Science from 2008 to 2014 and leads the Pervasive, Ubiquitous and Mobile Applications (PUMA) Lab, which he formed in 2010. He has been involved in robotics and machine learning research for over three decades, most recently with a particular interest in the personalisation of autonomous systems for pervasive environments. Nick took his A-levels at Lincoln Christ’s Hospital School and then studied at Cardiff, London and Nottingham before joining Heriot-Watt University and settling in Midlothian.
http://www.hw.ac.uk/schools/mathematical-computer-sciences/staff-directory/nicholas-taylor.htm
http://www.macs.hw.ac.uk/puma/

Research Presentation by Prof Ales Leonardis on Wed 29/05, 4pm

Prof Ales Leonardis (University of Birmingham) is presenting his research on “Combining compositional shape hierarchy and multi-class object taxonomy for efficient object categorisation” in a joint Psychology and Computer Science seminar on Wed, 29th May at 4pm (in room MC1001). All students and staff are invited.

Here is the abstract of Ales’ presentation:

Visual categorisation has been an area of intensive research in the vision community for several decades. Ultimately, the goal is to efficiently detect and recognize an increasing number of object classes. The problem entangles three highly interconnected issues: the internal object representation, which should compactly capture the visual variability of objects and generalize well over each class; a means for learning the representation from a set of input images with as little supervision as possible; and an effective inference algorithm that robustly matches the object representation against the image and scales favorably with the number of objects. In this talk I will present our approach which combines a learned compositional hierarchy, representing (2D) shapes of multiple object classes, and a coarse-to-fine matching scheme that exploits a taxonomy of objects to perform efficient object detection.
Our framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly complex and class-specific shape compositions, each exhibiting a high degree of shape variability. At the top level of the vocabulary, the compositions represent the whole shapes of the objects. The vocabulary is learned layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned. The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another.
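The following schematic sketch (our simplification, not the authors’ implementation) illustrates the layer-by-layer idea: part detections from the current layer are paired within a window of analysis, offsets are quantised so that similar spatial configurations vote together, and sufficiently frequent pairs are promoted to compositions at the next layer, where the window grows.

```python
# Schematic layer-wise learning of shape compositions (our
# simplification, not the authors' code): frequent spatial pairs of
# current-layer parts become the parts of the next layer.
from collections import Counter
from itertools import combinations

def learn_layer(detections, window, min_count):
    """detections: list of (part_id, x, y) found in training images.
    Returns frequent compositions (part_a, part_b, quantised offset)."""
    pairs = Counter()
    for (a, ax, ay), (b, bx, by) in combinations(detections, 2):
        dx, dy = bx - ax, by - ay
        if abs(dx) <= window and abs(dy) <= window:
            # quantise the offset so nearby configurations vote together
            pairs[(a, b, round(dx / 4), round(dy / 4))] += 1
    return [comp for comp, n in pairs.items() if n >= min_count]

# Toy usage: layer-1 contour fragments from a few training images.
layer1 = [("edge_h", 10, 10), ("edge_v", 14, 10), ("edge_h", 30, 40),
          ("edge_h", 11, 52), ("edge_v", 15, 52)]
print(learn_layer(layer1, window=8, min_count=2))
# -> [('edge_h', 'edge_v', 1, 0)]: a recurring horizontal-vertical pair
```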
However, in order for recognition systems to scale to a larger number of object categories, and achieve running times logarithmic in the number of classes, building visual class taxonomies becomes necessary. We propose an approach for speeding up recognition times of multi-class part-based object representations. The main idea is to construct a taxonomy of constellation models cascaded from coarse-to-fine resolution and use it in recognition with an efficient search strategy. The structure and the depth of the taxonomy are built automatically in a way that minimizes the number of expected computations during recognition by optimizing the cost-to-power ratio. The combination of the learned taxonomy with the compositional hierarchy of object shape achieves efficiency both with respect to the representation of the structure of objects and in terms of the number of modeled object classes. The experimental results show that the learned multi-class object representation achieves a detection performance comparable to the current state-of-the-art flat approaches with both faster inference and shorter training times.
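As a rough sketch of the coarse-to-fine idea (the tree structure, scores and class names below are invented for illustration): each internal node of the taxonomy holds a cheap coarse model shared by its whole subtree, and a branch is descended only if that model’s score passes a threshold, so dissimilar classes are pruned early and the expected number of evaluations grows roughly logarithmically with the number of classes.

```python
# Invented toy taxonomy for illustration: internal nodes hold a cheap
# coarse score shared by their subtree; a branch is descended only if
# that score passes, pruning dissimilar classes early.
def detect(node, image, threshold=0.5):
    if node["score"](image) < threshold:
        return []                       # prune this whole subtree
    if "classes" in node:               # leaf: fine constellation model
        return node["classes"]
    hits = []
    for child in node["children"]:
        hits += detect(child, image, threshold)
    return hits

taxonomy = {
    "score": lambda img: 0.9,           # root: generic coarse shape model
    "children": [
        {"score": lambda img: 0.8, "classes": ["cow", "horse"]},
        {"score": lambda img: 0.2, "classes": ["mug", "bottle"]},  # pruned
    ],
}
print(detect(taxonomy, image=None))     # -> ['cow', 'horse']
```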