Tag Archives: Robotics

Robotics Research Seminar 24/5/17: “Making Robust SLAM Solvers for Autonomous Mobile Robots”

We invite everybody to attend the robotics research seminar, organised by L-CAS, on Wednesday 24/5/2017:

Dr Giorgio Grisetti, DIAG, University of Rome “Sapienza”:

Making Robust SLAM Solvers for Autonomous Mobile Robots

  • WHERE: AAD1W11, Lecture Theatre (Art, Architecture and Design Building), Brayford Pool Campus
  • WHEN: Wednesday 24th May 2017, 3:00 – 4:00 pm

ABSTRACT:

In robotics, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it.

SLAM is an essential enabling technology for building truly autonomous robots that can operate in unknown environments. The last three decades have seen substantial research in the field, and modern SLAM systems can now easily cope with operating conditions that in the past were regarded as challenging, if not impossible, to deal with.

This might suggest that SLAM is a solved problem. However, a closer look at the contributions presented in the most relevant robotics conferences and journals reveals that papers on SLAM are still numerous and the community is large. Would this be the case if an off-the-shelf solution that works all the time were available?

Non-experts who approach the problem, or simply want to get one of the state-of-the-art systems running, often encounter problems and achieve performance far below that reported in the papers. This is usually because the person using the system is not the person who designed it. An open-box approach that aims to solve these problems by modifying an existing pipeline is often hard to implement due to the complexity of modern SLAM systems.

In this talk we will review the history of SLAM and outline some of the challenges in designing robust SLAM systems and, most importantly, in building robust SLAM solvers.
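Modern graph-based SLAM solvers typically cast the problem as least squares over robot poses. As a hedged illustration (a toy sketch, not the speaker's actual implementation), the snippet below solves a one-dimensional pose graph in which three odometry steps disagree slightly with a loop-closure measurement, using linear least squares:

```python
import numpy as np

# Toy 1D pose graph: poses x0..x3, with x0 anchored at 0.
# Each constraint says x_j - x_i should equal the measured offset z.
# (i, j, z): three odometry steps plus one loop-closure measurement.
constraints = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.9)]

n = 3  # free variables x1, x2, x3 (x0 is fixed to anchor the graph)
A = np.zeros((len(constraints), n))
b = np.zeros(len(constraints))
for row, (i, j, z) in enumerate(constraints):
    if i > 0:
        A[row, i - 1] = -1.0
    if j > 0:
        A[row, j - 1] = 1.0
    b[row] = z

# The least-squares solution spreads the 0.1 m loop-closure
# disagreement evenly across the three odometry edges.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # [0.975 1.95  2.925]
```

Full solvers work on 2D/3D poses with nonlinear error terms and iterative relinearisation, but this least-squares core is the same.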

Furthermore, we will present PRO-SLAM (SLAM from a programmer’s perspective), a deliberately simple open-source pipeline that competes with state-of-the-art stereo visual SLAM systems while focusing on simplicity to support teaching.

https://gitlab.com/srrg-software/srrg_proslam

Lincoln computer science research papers accepted

Researchers at the Lincoln Centre for Autonomous Systems (L-CAS) have had research papers accepted at both SAC 2017 and HRI 2017.

The first paper, to be presented at SAC 2017, is joint work by Dr Marc Hanheide’s PhD student Peter Lightbody and Dr Tomas Krajnik: “A Versatile High-Performance Visual Fiducial Marker Detection System with Scalable Identity Encoding”.

Fiducial markers have a wide field of applications in robotics, ranging from external localisation of single robots or robotic swarms, through self-localisation in marker-augmented environments, to simplifying perception by tagging objects in a robot’s surroundings.

We propose a new family of circular markers allowing for computationally efficient detection, identification and full 3D position estimation. A key concept of our system is the separation of the detection and identification steps: the first is based on a computationally efficient circular marker detection, and the second on an open-ended ‘necklace code’, which allows for a theoretically infinite number of individually identifiable markers.
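The necklace-code idea can be illustrated with a short sketch. The details below (bit length, canonicalisation by minimal rotation) are assumptions for illustration, not the paper's exact encoding. Since a circular marker may be read starting from any angle, every rotation of the observed bit string must map to the same identity; one standard way to achieve this is to canonicalise to the lexicographically smallest rotation:

```python
def canonical(bits: str) -> str:
    """Rotation-invariant canonical form: the lexicographically
    smallest rotation of the observed bit string."""
    return min(bits[k:] + bits[:k] for k in range(len(bits)))

# Two readings of the same physical marker, started at different
# angles, canonicalise to the same identity string.
print(canonical("0110"), canonical("1001"))  # 0011 0011

# Number of distinct identities ("necklaces") available with 8 bits:
ids = {canonical(format(v, "08b")) for v in range(256)}
print(len(ids))  # 36
```

Longer codes grow the identity space rapidly, which is what makes the scheme open-ended.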

The experimental evaluation of the system on a real robot indicates that, while the proposed algorithm achieves similar accuracy to other state-of-the-art methods, it is faster by two orders of magnitude and can detect markers from longer distances.

The second paper, accepted at HRI 2017 (which has an acceptance rate of only 24%), is co-authored by Marc Hanheide, Denise Hebesberger, and Tomas Krajnik:
“The When, Where, and How: An Adaptive Robotic Info-Terminal for Care Home Residents – a long-term study”

Adapting to users’ intentions is a key requirement for autonomous robots in general, and in care settings in particular. This paper presents a comprehensive long-term study of a mobile robot providing information services to residents, visitors, and staff of a care home, with a focus on adapting when and where the robot should offer its services to best accommodate users’ needs.

Rather than following a fixed schedule, the presented system exploits its long-term deployment to explore the space of possible interactions while concurrently exploiting the learned model to provide better services. To provide effective services to users in a care home, however, not only the when and where are relevant, but also the way the information is provided and accessed. Hence, the usability of the deployed system is also studied specifically, in order to provide the most comprehensive overall assessment of a robotic info-terminal implementation in a care setting.

Our results support our hypotheses: (i) that learning a spatiotemporal model of users’ intentions improves the efficiency and usefulness of the system, and (ii) that the specific information sought is indeed dependent on the location at which the info-terminal is offered.
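The explore/exploit idea behind such a spatiotemporal model can be sketched in a few lines. The class name, smoothing, and epsilon-greedy strategy below are hypothetical simplifications, not the paper's actual method: keep per-(location, hour) interaction statistics, and pick the location with the best smoothed interaction rate while occasionally exploring at random:

```python
import random
from collections import defaultdict

class InfoTerminalScheduler:
    """Epsilon-greedy choice of where to offer the info-terminal
    at a given hour, based on observed interaction rates."""

    def __init__(self, locations, epsilon=0.1, seed=None):
        self.locations = locations
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.offers = defaultdict(int)        # (location, hour) -> times offered
        self.interactions = defaultdict(int)  # (location, hour) -> times used

    def record(self, location, hour, interacted):
        self.offers[(location, hour)] += 1
        if interacted:
            self.interactions[(location, hour)] += 1

    def rate(self, location, hour):
        # Laplace-smoothed rate, so never-tried slots still look
        # worth exploring rather than scoring zero.
        return (self.interactions[(location, hour)] + 1) / (self.offers[(location, hour)] + 2)

    def choose(self, hour):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.locations)  # explore
        return max(self.locations, key=lambda loc: self.rate(loc, hour))  # exploit

sched = InfoTerminalScheduler(["lounge", "corridor"], epsilon=0.0)
for _ in range(8):
    sched.record("lounge", 10, interacted=True)
    sched.record("corridor", 10, interacted=False)
print(sched.choose(10))  # lounge
```

With a nonzero epsilon the robot keeps sampling under-visited slots, which is the "exploring while exploiting" behaviour the abstract describes.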

This is a great achievement for our PhD students and researchers, and you can keep up to date with our L-CAS research here: https://lcas.lincoln.ac.uk/wp/ 

 

Feel the force of technological innovation at Future Fest 2016

Future Fest will return to the city of Lincoln next month with an extraordinary showcase of space-age technology, pioneering research and futuristic fun.

Jason Bradbury (University of Lincoln)

Tech guru, TV presenter and Visiting Lecturer Jason Bradbury will be the special guest at the University of Lincoln’s annual sci-fi themed festival, which this year takes place on Thursday 10th November 2016.

Inspired by the epic film franchise Star Wars, Future Fest 2016 will offer visitors the chance to immerse themselves in futuristic virtual reality worlds, discover the latest advances in consumer technology, and meet the University’s growing ensemble of cutting-edge robots.


The event will feature a number of exciting interactive zones. The Robot Zone will demonstrate the very latest in cutting-edge robotics – from 3D-printed humanoids and mind-controlled androids, to tech that will see visitors immersing themselves in extraordinary virtual reality worlds and building their own robots that can compete in a purpose-built arena.

In the Gaming Zone, visitors can head to ‘a galaxy far far away’ with a variety of Star Wars computer games. In the Space Zone they can learn about the technologies which help us understand what is going on 380,000 feet above our heads, and the Movie Maker Zone will reveal how films are brought to life – from concept to screen. The Stage Combat Zone will see visitors unleash their inner Luke Skywalker and learn how the best battles are fought with lightsabers.

Jason Bradbury, best known as presenter of TV’s The Gadget Show and a Visiting Lecturer on Computer Science and Product Design courses at the University of Lincoln, said: “I am so excited to be involved in Future Fest again this year after the great fun we had at last year’s inaugural event. Bringing all this expertise and technology together provides a wonderful opportunity to appreciate just how much scientific innovation has transformed the way we live in a relatively short time, and to examine some of the innovative research which could shape our future.

“Staff and students at Lincoln are working on projects that we could never have imagined 20 years ago, and that is why I am thrilled to be involved. We’re creating the future, right here, right now.”

Future Fest takes place on Thursday 10th November 2016 at the Engine Shed on the University of Lincoln’s Brayford Pool Campus. The event runs from 10am – 4pm. It is free to attend but places must be booked in advance via the University of Lincoln website.

Socially interactive robots to support autistic children

Technology supports and aids people with a wide range of disabilities every single day, enriching their lives as much as possible. Autistic children can now receive communication support from robots.

MARC

Autism is a lifelong developmental disability that affects how a person communicates with, and relates to, other people. It also affects how they make sense of the world around them.

In our latest School of Computer Science Research Seminar we look at the ‘Socially Interactive Robotic Framework for Communication Training for Children with Autism’ and how robotic communication can aid their skills and behaviour.

Come along on 4th July at 1pm in MC3108 to hear Dr Xiaofeng Liu give an insightful FREE talk on this very interesting and topical subject.

Abstract:

Social robots are often employed to assist children with Autism Spectrum Disorder (ASD) for communication, education and therapeutic training. Many studies have shown that the intervention of social robots can promote educational or therapeutic outcomes.

In this study, we record gaze-based child–robot interaction to evaluate the children’s engagement, which enables us to design specific educational or therapeutic items for each child. The platform consists of a NAO humanoid robot and a depth camera that captures the children’s actions and detects their gaze. Pilot tests have shown that our framework helps therapists design appropriate and personalised training courses for each child.
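As a hedged sketch of how gaze data might be turned into an engagement measure (a hypothetical proxy for illustration, not the study's actual metric): label each camera frame with the child's estimated gaze target and take the fraction of frames spent looking at the robot:

```python
def engagement_score(gaze_targets, target="robot"):
    """Fraction of frames in which the estimated gaze falls on the
    given target; a crude per-session engagement proxy."""
    if not gaze_targets:
        return 0.0
    return sum(t == target for t in gaze_targets) / len(gaze_targets)

# One gaze-target label per frame from the depth camera's estimate.
session = ["robot", "robot", "toy", "robot", "away", "robot", "robot", "toy"]
print(engagement_score(session))  # 0.625
```

Per-item scores of this kind could then indicate which training items engage a particular child, informing the personalised course design the abstract mentions.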

Bio:

XIAOFENG LIU received a Ph.D. degree in biomedical engineering from Xi’an Jiaotong University, Xi’an, China, in 2006. From 2008 to 2011, he held a post-doctoral position with the Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University. Since 2011, he has been with the College of IoT Engineering, Hohai University, Changzhou, where he is currently a full-time Professor and the Vice Director of the Changzhou Key Laboratory of Robotics and Intelligent Technology. From 2013 to 2014, he was a visiting professor at University College London, UK. His current research interests focus on nature-inspired navigation, human–robot interaction, and neural information processing.

All are welcome.