
Research Seminar, Fri 10/11/17, 2pm: Modelling and Detecting Objects for Home Robots

Everyone interested in robotics, computer vision, and computer science in general is cordially invited to the School of Computer Science research seminar

on Friday, 10/11/2017 at 2pm

in room JUN0001 (The Junction).

Modelling and Detecting Objects for Home Robots

Markus Vincze, Technical University Vienna

Abstract

In the near future, service robots will start to handle objects in home tasks such as clearing the floor or table, tidying up or setting the table. Robots will need to know about all the objects in the environment. As a start, humans could show their favourite objects to the robot to obtain full 3D models. These models are then used for object tracking and object recognition. Since modelling all objects in a home is cumbersome, learning object classes from the Web has become an option. While network-based approaches do not perform well in open settings, using 3D models and shape for detection in a hypothesis-and-verification scheme makes it possible to detect many objects touching each other. Finally, the models are linked to grasp point detection and warping, so that objects with small differences can be handled and the uncertainty of both the modelling and the robot's grasping is taken care of. These methods are evaluated in settings for taking objects out of boxes, picking up objects from the floor and keeping track of objects in user homes.
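
The hypothesis-and-verification scheme mentioned in the abstract can be illustrated with a minimal sketch, assuming a simple geometric test (the function names and thresholds below are hypothetical, not the pipeline presented in the talk): a candidate object pose is accepted only if enough transformed 3D model points land close to observed scene points.

```python
# Minimal hypothesise-and-verify sketch (illustrative only; names and
# thresholds are assumptions, not the pipeline presented in the talk).
import numpy as np

def verify(model_pts, scene_pts, pose, inlier_dist=0.005, min_inlier_ratio=0.7):
    """Accept a candidate pose if enough transformed model points
    have a nearby scene point (a crude geometric verification)."""
    R, t = pose                            # 3x3 rotation, 3-vector translation
    transformed = model_pts @ R.T + t      # model points in scene coordinates
    # Brute-force nearest-neighbour distances (fine for small toy clouds).
    d = np.linalg.norm(transformed[:, None, :] - scene_pts[None, :, :], axis=2)
    inlier_ratio = (d.min(axis=1) < inlier_dist).mean()
    return inlier_ratio >= min_inlier_ratio

# Toy usage: the model really is in the scene at the hypothesised pose.
rng = np.random.default_rng(0)
model = rng.uniform(-0.05, 0.05, size=(200, 3))    # 10 cm cube of points
pose = (np.eye(3), np.array([0.3, 0.0, 0.1]))      # hypothesised pose
scene = model @ pose[0].T + pose[1] + rng.normal(0, 0.001, (200, 3))
print(verify(model, scene, pose))                  # True: hypothesis kept
```

In a full system, many such hypotheses would be generated per scene and verified jointly, which is what makes it possible to separate objects that touch each other.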

Biography of Markus Vincze

Markus Vincze received his diploma in mechanical engineering from Technical University Wien (TUW) in 1988 and an M.Sc. from Rensselaer Polytechnic Institute, USA, in 1990. He finished his PhD at TUW in 1993. With a grant from the Austrian Academy of Sciences he worked at HelpMate Robotics Inc. and at the Vision Laboratory of Gregory Hager at Yale University. In 2004, he obtained his habilitation in robotics. Presently he leads the “Vision for Robotics” (V4R) team at TUW with the vision to make robots see. V4R regularly coordinates EU (e.g., ActIPret, robots@home, HOBBIT) and national research projects (e.g., vision@home) and contributes to research (e.g., CogX, STRANDS, Squirrel, ALOOF) and innovation projects (e.g., Redux, FloBot). With Gregory Hager he edited a book on Robust Vision for IEEE and is (co-)author of 42 peer-reviewed journal articles and over 300 other reviewed publications. He was the program chair of ICRA 2013 in Karlsruhe and will organise HRI 2017 in Vienna. Markus’ special interests are cognitive computer vision techniques for robotics solutions situated in real-world environments, especially homes.

Computer vision and mobile technology could help blind people ‘see’

Computer scientists are developing new adaptive mobile technology which could enable blind and visually-impaired people to ‘see’ through their smartphone or tablet.

Funded by a Google Faculty Research Award, specialists in computer vision and machine learning based at the University of Lincoln, UK, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.

Based on preliminary work on assistive technologies by the Lincoln Centre for Autonomous Systems, the team plans to use the colour and depth sensors inside new smartphones and tablets, such as Google's recent Project Tango devices, to enable 3D mapping and localisation, navigation and object recognition. The team will then develop the best interface to relay that information to users, whether through vibrations, sounds or the spoken word.

Project lead Dr Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with visual impairments.

“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious. There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”

The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual clues in the environment. This data would be detected through the device camera and used to identify the type of room as the user moves around the space.
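
As a toy illustration of this idea, assuming room types are inferred from detected objects (the object lists and scoring below are hypothetical; the article does not describe the project's actual models):

```python
# Toy room-type inference from detected objects (a hypothetical sketch,
# not the project's actual recognition pipeline).
ROOM_EVIDENCE = {
    "kitchen":  {"kettle", "sink", "fridge", "oven"},
    "office":   {"desk", "monitor", "keyboard", "chair"},
    "bathroom": {"sink", "toilet", "mirror", "towel"},
}

def infer_room(detected_objects):
    """Score each room type by how many of its typical objects were seen."""
    scores = {room: len(objects & detected_objects)
              for room, objects in ROOM_EVIDENCE.items()}
    return max(scores, key=scores.get), scores

room, scores = infer_room({"sink", "fridge", "kettle"})
print(room, scores)  # kitchen {'kitchen': 3, 'office': 0, 'bathroom': 1}
```

A user-adaptive system could additionally reweight such evidence over time as it learns which cues a particular user relies on.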

A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying the guidance it provides as the machine ‘learns’ from its environment and from human interaction. So the more accustomed the user becomes to the technology, the quicker and easier it would be to identify the environment.

The research team will work with a Google sponsor and will be collaborating with specialists at Google throughout the ‘Active Vision with Human-in-the-Loop for the Visually Impaired’ project.

[Audio: Interview with Dr Bellotto on BBC Radio Lincolnshire]

A PhD position is now available to work on this project.

Floor washing robots – revolutionising cleaning for big businesses

Floor washing robots could soon be used to clean large industrial and commercial premises, following a European research collaboration totalling €4.2 million.

FLOor washing roBOT, or FLOBOT, will be a large-scale, autonomous floor washing machine, for washing the floors of supermarkets, airports and other big areas that have to be cleaned regularly.

Although it can be manually started, programmed and monitored by people, there will be no need to physically move it around, making the process more efficient.

FLOBOT is being developed by a multi-disciplinary team, including the University of Lincoln, UK, which specialises in the software required to operate the robot.

Dr Nicola Bellotto, Principal Investigator from the University of Lincoln and member of the Lincoln Centre for Autonomous Systems Research, works in mobile robotics and computer vision and has detailed knowledge of people tracking with robots.

Dr Bellotto said: “Our key aim is to program FLOBOT to detect and track people moving around so as to avoid them, and also be able to estimate typical human trajectories in the premises where it operates. We can then predict where it is likely to be most dirty, by analysing those trajectories and the general use of the environment.

“We will be modifying existing scrubbing machines, making them autonomous by adding new electronics and sensors, including a laser range finder and a 3D camera for detecting people. We are advancing technologies already developed at Lincoln and a prototype will be tested and validated throughout this project.”
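
As a rough sketch of the trajectory-analysis idea Dr Bellotto describes, assuming tracked paths are simply accumulated on a grid of floor cells (a hypothetical simplification, not FLOBOT's actual software):

```python
# Sketch: estimate where the floor is likely dirtiest by counting how often
# tracked people pass through each cell of a coarse grid (a hypothetical
# simplification of the trajectory analysis described above).
import numpy as np

def dirt_heatmap(trajectories, floor_size=(20.0, 30.0), cell=0.5):
    """Accumulate (x, y) positions, in metres, onto a grid of floor cells."""
    w, h = int(floor_size[0] / cell), int(floor_size[1] / cell)
    grid = np.zeros((w, h))
    for trajectory in trajectories:
        for x, y in trajectory:
            i, j = int(x / cell), int(y / cell)
            if 0 <= i < w and 0 <= j < h:
                grid[i, j] += 1
    return grid / max(grid.max(), 1.0)     # normalise to [0, 1]

# Two short walks near an entrance: the busiest cell scores highest.
walks = [[(1.0, 1.0), (1.2, 1.4), (1.4, 1.8)],
         [(1.1, 1.1), (1.3, 1.5)]]
print(dirt_heatmap(walks).max())           # 1.0 at the most-visited cell
```

Frequently visited cells could then be prioritised in the robot's cleaning route.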

Floor washing tasks have many demanding aspects, including autonomy of operation, navigation and path optimisation, safety with regard to humans and goods, interaction with human personnel, and easy set-up and reprogramming.

FLOBOT addresses these problems by integrating existing and new solutions to produce a professional floor washing robot for wide areas.

Work carried out on production prototypes will ensure the final system is complete and ready for real-world use.

Professor Tom Duckett, also from the University of Lincoln, works in autonomous robotics and sensor systems, and is Director of the Lincoln Centre for Autonomous Systems Research.

Professor Duckett said: “The general idea is to create professional service robots that will work in our everyday environments, providing assistance and helping to carry out tasks that are currently very time- and labour-intensive for human workers. Participating in this Innovation Action project is really exciting, because it means that many of the underpinning research concepts and technologies we have been developing at the Lincoln Centre for Autonomous Systems now have the potential to leave the laboratory and become part of real products like cleaning robots, which could impact on the everyday lives of people everywhere.”

The project is funded by Horizon 2020, the EU Framework Programme for Research and Innovation for 2014-2020.

Project partners include CyRIC – Cyprus Research and Innovation Centre (coordinator), Fimap SpA (Italy) – an international leader in the production of professional scrubbing machines, Robosoft Service Robots (France), Vienna University of Technology, Carrefour Italia, Manutencoop Facility Management (Italy), Ridgeback S.A.S. (Italy) and GSF SAS (France).

[Image: FLOBOT]

Research presented at international computer vision conference

Two papers from academics in the School of Computer Science were presented at the world’s premier computer vision event.

The CVPR conference, which took place from June 24 to 27 in Columbus, Ohio, is one of the highest-ranked venues in computer science.

According to Google Scholar Metrics, it is also the top publication venue in the field of computer vision and pattern recognition.

This year the University of Lincoln’s School of Computer Science was represented with two papers.

The first is ‘Gauss-Newton Deformable Part Models for Face Alignment in-the-Wild’ by Dr Georgios Tzimiropoulos and Maja Pantic.

Dr Tzimiropoulos’ research finds applications in face recognition, facial expression analysis and human behaviour understanding. In particular, before recognising someone’s identity or understanding their facial expressions, a computer program must be able to accurately detect and localise facial parts such as the mouth and eyes, as well as track their deformable motion in video.

This well-known computer vision problem, called face alignment, is a difficult one, especially when the faces to be analysed are captured in-the-wild, i.e. with no control over illumination, image resolution, head pose variations or occlusions. Dr Tzimiropoulos’ algorithm aims to address all of these challenging cases. A video with illustrative face tracking results can be found at: http://www.youtube.com/watch?v=MjCSWTFBrFg
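
Methods of this kind fit a parametric deformable model by iteratively minimising a nonlinear least-squares cost. As a generic illustration (the standard textbook Gauss-Newton step, not the paper's exact formulation), each iteration updates the model parameters p by:

```latex
% Standard Gauss-Newton step for a least-squares fitting cost
% (textbook form; the paper's exact objective is not reproduced here).
\min_{\mathbf{p}} \|\mathbf{r}(\mathbf{p})\|^2, \qquad
\Delta\mathbf{p} = -\left(\mathbf{J}^\top \mathbf{J}\right)^{-1}
\mathbf{J}^\top \mathbf{r}(\mathbf{p}), \qquad
\mathbf{J} = \frac{\partial \mathbf{r}}{\partial \mathbf{p}}
```

Here r(p) stacks the residuals between the deformable model's prediction and the image, and J is its Jacobian; the update is repeated until the facial landmarks converge.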

The second paper is ‘A Bayesian Framework for the Local Configuration of Retinal Junctions’ by Touseef Qureshi, Professor Andrew Hunter and Dr Bashir Al-Diri.

This paper focuses on the development of a probabilistic system to accurately configure the broken vessels in retinal images.

Retinal images provide an internal view of the human eye, whose retina contains forests of blood vessels. These vessels provide useful information which can be used to diagnose several cardiovascular and cerebrovascular diseases.

Computer-based automated extraction of significant features from the retinal vessels can support the early diagnosis of these diseases.

The correct configuration of broken vessels into trees of arteries and veins is a prerequisite for extracting significant information from the vasculature.
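
As a generic illustration of the Bayesian idea (Bayes' rule only; the paper's specific priors and likelihood terms are not reproduced here), each candidate configuration c of the vessel segments meeting at a junction can be scored by its posterior probability given the observed segment geometry d:

```latex
% Generic Bayesian scoring of candidate junction configurations
% (Bayes' rule only; the paper's priors and likelihoods are not
% reproduced here).
P(c \mid d) = \frac{P(d \mid c)\, P(c)}{\sum_{c'} P(d \mid c')\, P(c')},
\qquad \hat{c} = \arg\max_{c} P(c \mid d)
```

The highest-posterior configuration is selected, linking broken segments into consistent trees of arteries and veins.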

Touseef said: “We achieved remarkable results in the initial experiments and intend to develop a fully automated diagnostic system in the future. Moreover, the proposed system can be optimised for other applications such as biometric security systems and road extraction from aerial images.”

[Image: Touseef outside the conference centre]
[Image: Touseef with academic poster]