Robot reaches a major milestone: Running autonomously for 30 days

Linda, Lincoln’s mobile robot developed in the STRANDS project, will soon reach its first major project milestone: running autonomously for a total of 30 days. While this may seem easy for today’s industrial robots to achieve, Linda has been continuously patrolling an office space at the School of Computer Science, University of Lincoln, which is not designed to accommodate a robot.

The challenges Linda faced while on duty 24/7 for a whole month included changing lighting conditions, rearrangement of furniture, and obstructions blocking her way. During this time Linda travelled more than 100 km. Her progress is documented on her Twitter account.

Robots that learn from experience

Specialist robots will learn how to act intelligently in real-world environments, supporting security guards or care home assistants, in a multi-million euro project.

The aim of the research is to create mobile robots that are able to operate intelligently and independently, based on an understanding of 3D space and how this space changes over time, from milliseconds to months.

The £7.2 million STRANDS collaborative project involves security company G4S Technology Ltd and the Academy of Ageing Research, an Austrian care home provider, where the technology developed throughout the four-year venture will be tested.

The project is funded by the European Union’s Seventh Framework Programme and led by the University of Birmingham. Dr Tom Duckett and Dr Marc Hanheide from the University of Lincoln’s School of Computer Science have been awarded £750,000 to help develop the software to process the sheer volume of experiences the robots will encounter.

Dr Duckett, who is Director of the Lincoln Centre for Autonomous Systems Research, will lead the research on creating 4D maps (3D mapping over extended time periods) of the environment and investigate methods for detecting changes and unusual situations.

He said: “The idea is to create service robots that will work with people and learn from long-term experiences. What’s unusual about any environment depends on the context. In a security scenario a robot will be required to perform regular patrols and continually inspect its surroundings for variations from its normal experiences. Certain changes such as finding a person in a restricted area may indicate a security violation or a burglary. In a care home a robot will be required to act as an assistant for elderly patients, fetching and carrying things while also being alert to incidents such as people falling over.

“It’s not just about developing a care home or security guard robot. We are trying to enable robots to learn from their long-term experience and their perception of how the environment unfolds in time. The technology will have many possible applications.”

As well as mapping the environment the robots will require capabilities for person detection, tracking and activity recognition. Dr Hanheide, whose background is in Artificial Intelligence, will lead the research on how the robots gather information about their surroundings, and use this learned knowledge to interact appropriately with human users.

Dr Hanheide said: “The main idea is to deploy robots that run for a long time so they have the chance to develop a common-sense attitude on how the world should be and be able to spot the deviations. The robots are curious to learn about the environment – they will see if something has changed and whether that’s a one-off or a regular occurrence. Our robots will be active for long periods in dynamic and changing environments.

“Currently industry robots can run for 24 hours a day and are incredibly reliable in well-controlled environments, but they don’t use long-term experience to adjust or improve in any way. Cognitive robotics systems can learn and adapt, but most are used for just one experiment. We want to build a bridge between the two by creating robots that can run for long periods of time and also make use of life-long learning capabilities to adapt to the needs of different users.”

The project was officially launched at a recent coding camp held at the University of Lincoln.

Following the research period the team will perform demonstrations of their systems at science museums, public events and trade shows. It is hoped the software solutions and theoretical concepts produced will be exploited by a variety of industries.

To see a video of the robots in operation go to http://youtu.be/YSTZfK7GtAk

New research into robots’ understanding of the world

New methods to enable robots to understand the world around them have been put forward by Dr Oscar Martinez Mozos from the School of Computer Science.

His paper, currently in press, details research into how a robot can understand human-made environments by learning to recognise different types of surroundings, such as corridors, kitchens and offices. A 3D laser sensor is used to scan the environment.

Dr Martinez-Mozos said: “In our method, we combine the range and reflectance data from the laser scan for the final categorisation of places. The results of the presented experiments demonstrate the capability of our technique to categorise indoor places with high accuracy.”

The paper ‘Categorization of indoor places by combining local binary pattern histograms of range and reflectance data from laser range finders’ is currently in press at the international journal Advanced Robotics. The research was conducted with Kyushu University in Japan.
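To give a flavour of the approach, the sketch below (a minimal illustration, not the paper’s actual pipeline) computes a local binary pattern histogram over a range image and a reflectance image and concatenates them into a single place descriptor, which an off-the-shelf classifier could then map to categories such as corridor, kitchen or office. The parameter choices, the use of scikit-image and scikit-learn, and the SVM classifier are all assumptions made for illustration.

```python
# Minimal sketch: combining LBP histograms of range and reflectance images
# for place categorisation. Illustrative only; the parameters, libraries and
# classifier are assumptions, not details taken from the paper.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1           # LBP neighbourhood: 8 samples on a radius-1 circle
N_BINS = P + 2        # number of distinct 'uniform' LBP codes

def lbp_histogram(image):
    """Normalised histogram of uniform LBP codes for a single 2D image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def place_descriptor(range_image, reflectance_image):
    """Concatenate LBP histograms of the range and reflectance channels."""
    return np.concatenate([lbp_histogram(range_image),
                           lbp_histogram(reflectance_image)])

# Hypothetical usage: 'scans' is a list of (range_image, reflectance_image)
# pairs and 'labels' holds place categories ('corridor', 'kitchen', 'office').
# X = np.array([place_descriptor(r, i) for r, i in scans])
# clf = SVC(kernel="rbf").fit(X, labels)
# predicted = clf.predict([place_descriptor(new_range, new_reflectance)])
```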

In a second publication, Dr Martinez Mozos and colleagues study new methods for robots to recognise everyday objects such as plates, boxes and cups.

He said: “The main point of this paper is that we focus on the situations where there is a lot of clutter and it is difficult to distinguish the different objects. For example, we try to identify several objects that are located on a table and that occlude each other. In this situation the task of recognising objects is difficult (even for humans) because the robot can only see a part of the object.”

The paper presents an approach based on a 3D dataset containing over 15,000 Kinect scans of more than 100 objects, grouped into general geometric categories.
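One plausible, purely illustrative reading of ‘cumulative’ categorisation is that class evidence from several partial views of an occluded object is accumulated before a decision is made. The sketch below assumes a hypothetical per-view classifier exposing predict_proba() and is not drawn from the paper itself.

```python
# Speculative sketch of cumulative categorisation under occlusion: per-view
# class probabilities for the same object are fused before deciding.
# This is one plausible reading only, not the method used in the paper.
import numpy as np

CATEGORIES = ["plate", "box", "cup"]   # example categories from the text

def cumulative_category(view_features, model):
    """Fuse class probabilities from a hypothetical per-view 'model' that
    exposes predict_proba(), returning the most likely category overall."""
    log_evidence = np.zeros(len(CATEGORIES))
    for features in view_features:
        probs = model.predict_proba([features])[0]   # one probability per class
        log_evidence += np.log(probs + 1e-9)         # accumulate in log space
    return CATEGORIES[int(np.argmax(log_evidence))]
```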

The joint work ‘Cumulative object categorization in clutter’ was done in collaboration with the German Aerospace Center (DLR), the University of Bremen in Germany, and the Autonomous Technologies Group at Robert Bosch LLC in the United States.

The paper was accepted for a workshop held at the Robotics: Science and Systems (RSS) conference in June 2013.

‘Computing at School’ Hub meeting

The School of Computer Science held its third CAS (Computing at School) Hub meeting on 3 July 2013. Turnout was good, with six schools represented. Debate was energetic and members engaged with topics related to:

  • Preparing to teach Computing.
  • Mapping examining-body assessment to the curriculum.

Sue Sutton, the curriculum advisor from AQA, was in attendance and provided valuable input to the debate. Concerns were raised about the overly prescriptive and constrictive nature of the assessment for GCSE Computing, and Sue agreed to forward these comments and concerns.

University of Lincoln, UK