Tag Archives: Computer Vision

Computer vision and mobile technology could help blind people ‘see’

Computer scientists are developing new adaptive mobile technology which could enable blind and visually-impaired people to ‘see’ through their smartphone or tablet.

Funded by a Google Faculty Research Award, specialists in computer vision and machine learning based at the University of Lincoln, UK, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.

Based on preliminary work on assistive technologies done by the Lincoln Centre for Autonomous Systems, the team plans to use colour and depth sensor technology inside new smartphones and tablets, like the recent Project Tango by Google, to enable 3D mapping and localisation, navigation and object recognition. The team will then develop the best interface to relay that to users – whether that is vibrations, sounds or the spoken word.

Project lead Dr Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with visual impairments.

“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious. There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”

The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual clues in the environment. This data would be detected through the device camera and used to identify the type of room as the user moves around the space.

A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying the guidance it provides as the machine ‘learns’ from its landscape and from the human interaction. The more accustomed the user becomes to the technology, the quicker and easier it would be to identify the environment.

The research team will work with a Google sponsor and will be collaborating with specialists at Google throughout the ‘Active Vision with Human-in-the-Loop for the Visually Impaired’ project.

Below is an interview with Dr Bellotto on BBC Radio Lincolnshire:

A PhD position is now available to work on this project. Click here for further details.

Floor washing robots – revolutionising cleaning for big businesses

Floor washing robots could soon be used to clean large industrial and commercial premises, following a European research collaboration totalling 4.2 million Euros.

FLOor washing roBOT, or FLOBOT, will be a large-scale, autonomous floor washing machine, for washing the floors of supermarkets, airports and other big areas that have to be cleaned regularly.

Although it can be manually started, programmed and monitored by people, there will be no need to physically move it around, making the cleaning process more efficient.

FLOBOT is being developed by a multi-disciplinary team, including the University of Lincoln, UK, which specialises in the software required to operate the robot.

Dr Nicola Bellotto, Principal Investigator from the University of Lincoln and member of the Lincoln Centre for Autonomous Systems Research, works in mobile robotics and computer vision and has detailed knowledge of people tracking with robots.

Dr Bellotto said: “Our key aim is to program FLOBOT to detect and track people moving around so as to avoid them, and also be able to estimate typical human trajectories in the premises where it operates. We can then predict where it is likely to be most dirty, by analysing those trajectories and the general use of the environment.

“We will be modifying existing scrubbing machines, making them autonomous by adding new electronics and sensors, including a laser range finder and a 3D camera for detecting people. We are advancing technologies already developed at Lincoln and a prototype will be tested and validated throughout this project.”
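Dr Bellotto’s idea of predicting the dirtiest areas from typical human trajectories can be sketched as a simple occupancy heatmap. Everything below (the floor dimensions, grid cell size and the trajectories themselves) is hypothetical for illustration; the real FLOBOT system would work from live people-tracking data.

```python
def dirtiness_heatmap(trajectories, floor_w=20.0, floor_h=30.0, cell=0.5):
    """Accumulate tracked (x, y) positions onto a grid: cells visited more
    often are assumed to get dirty faster. All sizes are hypothetical
    metres, not FLOBOT's actual parameters."""
    nx, ny = int(floor_w / cell), int(floor_h / cell)
    grid = [[0] * ny for _ in range(nx)]
    for traj in trajectories:                    # each: list of (x, y) points
        for x, y in traj:
            i = min(int(x / cell), nx - 1)
            j = min(int(y / cell), ny - 1)
            grid[i][j] += 1
    peak = max(max(row) for row in grid) or 1
    return [[v / peak for v in row] for row in grid]  # normalise to [0, 1]

# Two shoppers walking the same 30 m aisle at x = 1 m
aisle = [(1.0, 0.5 * k) for k in range(60)]
heat = dirtiness_heatmap([aisle, aisle])
print(len(heat), len(heat[0]))   # 40 60
```

Cells with values near 1 would be scheduled for more frequent washing; a real system would also decay old counts so the map tracks changing usage patterns.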

Floor washing tasks have many demanding aspects, including autonomy of operation, navigation and path optimisation, safety with regard to humans and goods, interaction with human personnel, and easy set-up and reprogramming.

FLOBOT addresses these problems by integrating existing and new solutions to produce a professional floor washing robot for wide areas.

Work carried out on production prototypes will ensure the final system is complete and ready for real-world use.

Professor Tom Duckett, also from the University of Lincoln, works in autonomous robotics and sensor systems, and is Director of the Lincoln Centre for Autonomous Systems Research.

Professor Duckett said: “The general idea is to create professional service robots that will work in our everyday environments, providing assistance and helping to carry out tasks that are currently very time- and labour-intensive for human workers. Participating in this Innovation Action project is really exciting, because it means that many of the underpinning research concepts and technologies we have been developing at the Lincoln Centre for Autonomous Systems now have the potential to leave the laboratory and become part of real products like cleaning robots, which could impact on the everyday lives of people everywhere.”

The project is funded by Horizon 2020, the EU Framework Programme for Research and Innovation for 2014-2020.
Project partners include CyRIC – Cyprus Research and Innovation Centre (coordinator), Fimap SpA (Italy) – an international leader in the production of professional scrubbing machines, Robosoft Service Robots (France), Vienna University of Technology, Carrefour Italia, Manutencoop Facility Management (Italy), Ridgeback S.A.S. (Italy) and GSF SAS (France).

FLOBOT

Research presented at international computer vision conference

Two papers from academics in the School of Computer Science were presented at the world’s premier computer vision event.

The CVPR conference, which took place from June 24 to 27 in Ohio, is the highest-ranked venue in Computer Science.

According to Google Scholar Metrics, it is also the top publication venue in the field of computer vision and pattern recognition.

This year the University of Lincoln’s School of Computer Science was represented with two papers.

The first is ‘Gauss-Newton Deformable Part Models for Face Alignment in-the-Wild’ by Dr Georgios Tzimiropoulos and Maja Pantic.

Dr Tzimiropoulos’ research finds applications in face recognition, facial expression analysis and human behaviour understanding. In particular, prior to recognising someone’s identity or understanding his/her facial expressions, a computer program must be able to accurately detect and localise the facial parts like the mouth and the eyes, as well as track their deformable motion in video.

This very well-known computer vision problem, also known as face alignment, is a difficult one, especially when the faces to be analysed are captured in-the-wild, i.e. there is no control over illumination, image resolution, and head pose variations or occlusions. Dr Tzimiropoulos’ algorithm aims to address all of these challenging cases. A video with illustrative face tracking results can be found at: http://www.youtube.com/watch?v=MjCSWTFBrFg
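The ‘Gauss-Newton’ in the paper’s title refers to a classical iterative least-squares optimisation scheme. As a toy illustration of that scheme only (the actual deformable part model for faces is far more elaborate), here is Gauss-Newton fitting a one-parameter exponential model to data:

```python
import math

def gauss_newton_1d(xs, ys, a=0.0, iters=20):
    """Gauss-Newton for fitting y = exp(a * x) by least squares.
    Illustrates only the generic optimisation scheme, not the
    paper's face-alignment model."""
    for _ in range(iters):
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]  # residuals
        J = [-x * math.exp(a * x) for x in xs]             # d(residual)/da
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        a -= Jtr / JtJ                                     # GN update step
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]        # noiseless data with a = 0.7
print(round(gauss_newton_1d(xs, ys), 3))    # 0.7
```

In face alignment the unknown is not one scalar but the full vector of landmark positions and shape parameters, and the residuals come from image features, but the update has the same Jacobian-based structure.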

The second paper is ‘A Bayesian Framework for the Local Configuration of Retinal Junctions’ by Touseef Qureshi, Professor Andrew Hunter and Dr Bashir Al-Diri.

This focusses on the development of a probabilistic system to accurately configure the broken vessels in retinal images.

Retinal images provide an internal view of the human eye (retina) that contains forests of blood vessels. These vessels provide useful information which can be used for diagnosing several cardiovascular and cerebrovascular diseases.

Computer-based automated extraction of significant features from the retinal vessels can help early diagnostics of these diseases.

The correct configuration of broken vessels into trees of arteries and veins is a prerequisite for extracting significant information from the vasculature.
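At the heart of a Bayesian framework like this is Bayes’ rule applied to candidate junction configurations. The sketch below is illustrative only: the configuration labels and all probability values are invented for the example, not taken from the paper.

```python
def posterior(likelihoods, priors):
    """Bayes' rule over candidate junction configurations.
    posterior(h) = likelihood(h) * prior(h) / evidence."""
    joint = {h: likelihoods[h] * priors[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Hypothetical evidence from local vessel-segment features (widths, angles)
likelihoods = {"bifurcation": 0.6, "crossing": 0.3, "disconnected": 0.1}
priors      = {"bifurcation": 0.5, "crossing": 0.4, "disconnected": 0.1}
post = posterior(likelihoods, priors)
print(max(post, key=post.get))   # bifurcation
```

Choosing the configuration with the highest posterior at each junction lets broken segments be linked into consistent arterial and venous trees.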

Touseef said: “We achieve remarkable results in the initial experiments and intend to develop a fully automated diagnostic system in future. Moreover, the proposed system can be optimized for other applications such as biometric security systems and road extraction using aerial images.”

Touseef outside the conference centre
Touseef with academic poster

Bat vision system could help protect buildings

Vital data on bat behaviour is being analysed by a computer vision system developed by the University of Lincoln and Lincolnshire Bat Group.

The technique, which uses a high-speed camera filming in infra-red, is being developed by academics at the University of Lincoln, UK. It monitors wing beat frequency, which might enable the Group to classify bat species.

Being able to identify individual species would provide extra information on how to effectively manage and protect the buildings they inhabit.

Bat populations frequently roost in buildings, such as churches, which can cause problems in terms of corrosive faeces damaging the structure and valuable artefacts.

Because bats are a highly protected species, conservation groups are looking for non-invasive ways to control colonies.

PhD student John Atanbori and Dr Patrick Dickinson, from the School of Computer Science, developed the system, which has been used by the Lincolnshire Bat Group to collect data on a colony which has been rescued and is being rehabilitated.

John said: “This computer vision technique is able to monitor repeated patterns in wing beat frequency. As species can be determined from the way a bat moves its wings, this provides vital information not only for conservationists studying the animals, but also building managers and professional ecologists. Wing beat frequency is just one feature that could be monitored to determine species, but the project hopes to eventually encompass other features such as the shape and weight of the bat to provide a faster, more detailed classification. We also hope to transfer this research to birds in the future.”
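The wing beat frequency John describes could, in principle, be estimated from a tracked wing-position signal extracted from the high-speed footage. The sketch below uses a simple zero-crossing count on synthetic data; the frame rate, signal and method are illustrative assumptions, not the project’s actual pipeline.

```python
import math

def wingbeat_frequency(signal, fps):
    """Estimate the dominant wing-beat frequency (Hz) of a tracked wing
    position by counting zero crossings of the mean-centred signal;
    fps is the camera frame rate. A rough sketch only."""
    mean = sum(signal) / len(signal)
    centred = [s - mean for s in signal]
    crossings = sum(1 for a, b in zip(centred, centred[1:]) if a * b < 0)
    return crossings * fps / (2.0 * len(signal))  # two crossings per beat

# Synthetic 12 Hz wing beat filmed at 240 fps for one second
fps = 240
signal = [math.sin(2 * math.pi * 12 * (t + 0.5) / fps) for t in range(fps)]
f = wingbeat_frequency(signal, fps)   # close to 12 Hz
```

A frequency-domain method (e.g. picking the peak of an FFT) would be more robust to noise, but the zero-crossing count shows the basic idea in a few lines.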

Data is being collected by the Lincolnshire Bat Group, a local arm of the Bat Conservation Trust.

Dr Peter Cowling, from the Group, said: “To conserve bats we need to establish the size of current bat populations, working out which bats are where and how they are responding to the threats and pressures they face. By monitoring bats we can discover the factors that are important for their survival. We can identify which species need action now, what areas are important for bats and what threats bats face.”

John presented his research at the National Bat Conference at the University of Warwick in September.

He also presented at CAIP 2013, an international conference devoted to all aspects of computer vision, image analysis and processing, and pattern recognition.