

Welcoming BAXTER – the University of Lincoln’s newest robot


The University of Lincoln’s School of Computer Science has welcomed a new robot capable of sensing and manipulation in workspaces shared with humans – the first of its kind at the University.

The arrival of the new BAXTER robot at Lincoln’s Centre for Autonomous Systems heralds the start of a research project, supported by the Research Investment Fund (RIF), exploring the potential for human-robot collaboration within the manufacturing industries.

Led by Dr Marc Hanheide, the collaborative Manipulation for Adaptive Human-Robot Collaboration in Manufacturing (McMan) project will involve researchers from across the Schools of Computer Science and Engineering. Together they will use the new BAXTER robot as a test-bed and demonstrator for industry-relevant research into how humans and robots can work together to improve productivity, safety and efficiency in manufacturing workplaces, as well as safety-critical robot control.

The BAXTER robot is produced by Rethink Robotics as a cost-effective solution for businesses handling low-volume, high-mix production jobs. BAXTER has already been integrated into some factory workforces across North America to support employees with tasks involving the handling of light-weight products, such as line-loading and packaging.

At Lincoln, researchers will explore how BAXTER can be programmed using smart sensor technologies to create a prototype for human-robot collaborations, which can then be used as the basis for future studies.
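
To make the idea concrete, below is a minimal sketch of the kind of shared-workspace safeguard such a prototype might include. It is written against the standard ROS Python client (rospy); the topic names, message types and safety distance are hypothetical and are not drawn from the McMan project or the BAXTER SDK.

```python
#!/usr/bin/env python
# Minimal sketch: pause a shared-workspace robot arm when a person comes too close.
# Topic names and the distance threshold are hypothetical, for illustration only;
# they are not part of the McMan project or the BAXTER SDK.

import rospy
from std_msgs.msg import Bool, Float32

SAFETY_DISTANCE_M = 0.6  # hypothetical minimum person-robot distance


class SafetySupervisor(object):
    def __init__(self):
        # Hypothetical topic published by an external person-tracking sensor.
        rospy.Subscriber('/person_tracker/nearest_distance', Float32, self.on_distance)
        # Hypothetical topic consumed by the arm motion controller.
        self.pause_pub = rospy.Publisher('/arm_controller/pause', Bool, queue_size=1)

    def on_distance(self, msg):
        # Pause the arm while a person is inside the safety envelope,
        # resume once the shared workspace is clear again.
        self.pause_pub.publish(Bool(data=msg.data < SAFETY_DISTANCE_M))


if __name__ == '__main__':
    rospy.init_node('safety_supervisor')
    SafetySupervisor()
    rospy.spin()
```

In a real deployment the distance estimate would come from the smart sensor technologies mentioned above, and the pausing behaviour would be integrated with the robot's own motion controller rather than a stand-alone node.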

Dr Hanheide, Reader in the School of Computer Science, said: “Facilitating closer human-robot collaboration in manufacturing has been identified as one of the key ‘technology clusters’ where progress is most essential for Europe’s future. Enabling robots and humans to work more closely together will help SMEs to become more cost effective, and help citizens to improve the productivity and quality of their working lives.

“The integration of sensing and cognition technologies into robots is fundamental to enabling this collaboration, so that they can predict human motions, participate in joint tasks, ensure safety, and adapt to the needs and abilities of different individuals. The McMan project will build our capabilities to do precisely this. While some of our other research projects focus on autonomous robotics and engineering applications in manufacturing, developing a robot capable of sensing humans and responding to human action is a new and very exciting research area for the University.”

The McMan team, which also includes Professor Tom Duckett from the School of Computer Science and Dr Argyrios Zolotas and Dr Andrea Paoli from the School of Engineering, will aim to hold an industry workshop as part of the project, to disseminate findings and explore new opportunities for research and collaboration with local and national manufacturing businesses.

Posted in Uncategorized.


Leading figures in human computer interaction converge in Lincoln

The 800th anniversary of Magna Carta is the inspiration for this year’s British Human Computer Interaction Conference (HCI 2015), hosted by the University of Lincoln, UK.

HCI 2015, which takes place from 13th-17th July, will focus on our ever-evolving digital society and the role interactive technology plays in mediating and communicating political views. A total of 220 delegates from 18 countries will be in attendance.

Organised by Lincoln’s Social Computing (LiSC) research centre, the conference is inspired by the anniversary of the sealing of Magna Carta in 1215, an event viewed as an international cornerstone of liberty and one that challenged society’s relationship with authority.

Director of LiSC, Professor Shaun Lawson said: “Lincoln is home to one of only four surviving copies of Magna Carta and will take a major role in the 800th anniversary celebrations coinciding with our hosting of HCI 2015. The theme reflects the increasing public consciousness of how interactive technologies fundamentally affect our privacy, rights, and relationships with authority, government and commerce.

“This conference will set the agenda in the UK and internationally around the design of future interactive digital systems. The research community used to be interested in the use and design of a device, but now it’s more about the experience and the way digital technology affects our lives, including our political and democratic lives.”

The keynote speakers, Chris Csikszentmihalyi, Katrín Jakobsdóttir, Cristina Leston-Bandeira and Wikileaks founder Julian Assange, will discuss how Magna Carta is more relevant than ever in an age where interactive digital technology constantly shapes our lives and our relationships with each other, as well as those in authority.

Assange, for instance, has recently spoken about Wikileaks’ claim that “top secret intelligence reports and technical documents” from the US National Security Agency (NSA) show it spied on communications by successive French Presidents between 2006 and 2012.

Posted in Events, School news.



World-first in showing clinical-quality Proton CT for treatment of cancer

An international team of researchers is set to demonstrate clinical-quality Proton CT for the first time, improving Proton Therapy in the treatment of cancer and moving a step closer to this treatment method being used to help those suffering with certain forms of cancer, particularly children and young people.

Led by Distinguished Professor of Image Engineering Nigel Allinson MBE, from the University of Lincoln, UK, the pioneering PRaVDA (Proton Radiotherapy Verification and Dosimetry Applications) project is developing one of the most complex medical instruments ever imagined to improve the delivery of proton beam therapy.

The team has secured time on the South African National Cyclotron (a type of particle accelerator) near Cape Town, and hopes later this year to achieve a world first by producing clinical-quality Proton CT images.

Professor Allinson said: “The uncertainties in where the protons lose their energy and do damage (tumour or healthy tissue) will only be eliminated by using the same type of radiation, Protons, to image and to treat. By delivering clinical quality Proton CT images, this project will greatly improve the treatment of cancer using proton therapy. Such therapy is particularly useful in the treatment of young people, those with brain tumours, and eventually it may help such stubborn cancers as lung cancer.

“It has been an extraordinary engineering feat – certainly the most complex piece of engineering undertaken at the University of Lincoln. It has drawn on the active involvement of six universities, four NHS Trusts, National Research Laboratories in South Africa, two specialist sub-contractors and numerous UK and European suppliers.

“The system includes detectors of the type used in the Large Hadron Collider, and CMOS imagers like those found in your smartphone but 500 times larger and working 20 times faster. There are enough processed silicon wafers to make more than 25,000 iPhone cameras. The data output of the system is equivalent to over 300 HDTV channels.”

The innovation will assist radiotherapists by helping them to achieve accurate proton CT images. Nearly half of all cancer patients receive radiotherapy as part of their curative treatment, and most radiotherapy is delivered using high-energy external beams of x-rays. Proton beam therapy, however, uses a different type of beam to conventional radiotherapy. It uses a high-energy beam of protons. Like x-rays, protons can penetrate tissue to reach deep tumours. However, compared to x-rays, protons cause less damage to healthy tissue in front of the tumour, and no damage at all to healthy tissue lying behind, which greatly reduces the side effects of radiation therapy.
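
As a rough illustration of why the dose behind the tumour matters, the toy model below compares an exponentially attenuated x-ray beam with a crude, Gaussian-shaped stand-in for a proton Bragg peak. It is purely illustrative and is not the physics model used by PRaVDA; the attenuation coefficient, peak width and depths are invented numbers.

```python
import numpy as np

# Toy 1-D depth-dose comparison (illustrative only, not clinical physics).
# X-rays attenuate roughly exponentially, so tissue behind the tumour still
# receives dose; a proton beam deposits most of its energy near the end of
# its range (the Bragg peak) and essentially nothing beyond it.

depth_cm = np.linspace(0.0, 30.0, 301)   # depth into tissue
tumour_depth_cm = 20.0                    # example depth from the article

# Crude x-ray model: exponential attenuation with depth.
xray_dose = np.exp(-0.06 * depth_cm)

# Crude proton model: a narrow Gaussian standing in for the Bragg peak,
# with no dose deposited beyond the peak.
proton_dose = np.exp(-0.5 * ((depth_cm - tumour_depth_cm) / 1.0) ** 2)
proton_dose[depth_cm > tumour_depth_cm + 2.0] = 0.0

behind = depth_cm > tumour_depth_cm + 2.0
print("Mean relative dose behind the tumour:")
print("  x-rays : %.3f" % xray_dose[behind].mean())
print("  protons: %.3f" % proton_dose[behind].mean())
```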

There are uncertainties in exactly where the protons will lose most of their energy and hence kill the tumour while not adversely affecting healthy tissue. For a tumour 20cm deep inside a patient, this uncertainty is about 1.5cm. Using Proton CT will eliminate these errors.

In November 2014 the consortium received a prestigious Institution of Engineering and Technology (IET) Innovation Award, named as the winner in the Model-Based Engineering category.

The project is funded by The Wellcome Trust.


Posted in Research.



International medical imaging conference coming to Lincoln

The University of Lincoln, UK, is to host the next Annual Conference in Medical Image Understanding and Analysis (MIUA).

The conference, which takes place from 15 to 17 July, provides an opportunity to present and discuss research in medical image understanding and analysis and its real-world applications.

This year’s keynote lectures are delivered by four eminent academics: Professor Brian F. Hutton from University College London; Professor Tim Cootes from the University of Manchester; Professor Alejandro Frangi from the University of Sheffield; and Associate Professor Alfredo Ruggeri from the University of Padua, Italy.

In addition, there will be two pre-conference tutorials by Professor Philip Evans from the University of Surrey and Dr Charalampos Tsoumpas from the University of Leeds.

The MIUA 2015 conference is being organised by Dr Tryphon Lambrou and Dr Xujiong Ye, from the Laboratory of Vision Engineering (LoVE), based in Lincoln’s School of Computer Science.

The University of Lincoln was chosen to host the 19th MIUA because of its varied and growing research in medical imaging.

Last year’s MIUA took place at Royal Holloway in Egham, near London, in July 2014. This year’s conference will include both oral and poster presentations.

Authors of selected papers will be invited to submit extended versions to an ISI-indexed journal.

Click here to register or e-mail miua2015@lincoln.ac.uk for more information.

MRI scan

Ultrasound scan

Posted in School news.



Computer vision and mobile technology could help blind people ‘see’

Computer scientists are developing new adaptive mobile technology which could enable blind and visually-impaired people to ‘see’ through their smartphone or tablet.

Funded by a Google Faculty Research Award, specialists in computer vision and machine learning based at the University of Lincoln, UK, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.

Based on preliminary work on assistive technologies done by the Lincoln Centre for Autonomous Systems, the team plans to use colour and depth sensor technology inside new smartphones and tablets, like the recent Project Tango by Google, to enable 3D mapping and localisation, navigation and object recognition. The team will then develop the best interface to relay that to users – whether that is vibrations, sounds or the spoken word.
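
As a very rough sketch of one element of that pipeline, the snippet below takes a single depth frame (simulated here with NumPy), inspects the central region the user is walking towards, and turns the nearest reading into a short cue that could be spoken aloud or mapped to a vibration pattern. The thresholds, frame size and wording are hypothetical and are not taken from the project.

```python
import numpy as np

def obstacle_cue(depth_m, near=0.8, mid=2.0):
    # Look only at the central band of the frame, roughly where the user is heading.
    h, w = depth_m.shape
    band = depth_m[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    valid = band[band > 0]                 # drop missing depth readings
    if valid.size == 0:
        return "no depth reading"
    nearest = float(valid.min())
    if nearest < near:
        return "obstacle very close, %.1f metres ahead" % nearest
    if nearest < mid:
        return "obstacle %.1f metres ahead" % nearest
    return "path clear"

# Simulated 240x320 depth frame with an object about 1.2 m ahead.
frame = np.full((240, 320), 4.0)
frame[100:160, 140:200] = 1.2
print(obstacle_cue(frame))   # -> "obstacle 1.2 metres ahead"
```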

Project lead Dr Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with visual impairments.

“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious. There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”

The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual clues in the environment. This data would be detected through the device camera and used to identify the type of room as the user moves around the space.
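
A minimal sketch of that room-recognition idea, assuming a very simple colour-histogram feature and a nearest-neighbour classifier from scikit-learn, is shown below. The labels and "training frames" are invented for illustration; the actual system would rely on much richer visual and depth cues.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def colour_histogram(image_rgb, bins=8):
    # Concatenate per-channel histograms into one normalised feature vector.
    hist = [np.histogram(image_rgb[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()

rng = np.random.default_rng(0)
labels = ["kitchen", "corridor", "office"]

# Fake "training frames": random images standing in for labelled camera frames.
X = [colour_histogram(rng.integers(0, 256, (120, 160, 3))) for _ in range(30)]
y = [labels[i % 3] for i in range(30)]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Classify a new frame as the user moves through the environment.
query = colour_histogram(rng.integers(0, 256, (120, 160, 3)))
print(clf.predict([query])[0])
```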

A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying the guidance it provides as the machine ‘learns’ from its landscape and from the human interaction. So the more accustomed the user becomes to the technology, the quicker and easier it will be for the system to identify the environment.
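
One simple way such adaptation could work, sketched below purely as an illustration, is to keep a per-room confidence score and nudge it towards or away from certainty whenever the user confirms or corrects a prediction. The update rule and numbers are invented, not taken from the project.

```python
# Tiny sketch of the adaptation idea: per-room confidence scores that are
# nudged up or down by user feedback. Values and update rule are illustrative.

confidence = {"kitchen": 0.5, "corridor": 0.5, "office": 0.5}
LEARNING_RATE = 0.2

def update(room, user_confirmed):
    target = 1.0 if user_confirmed else 0.0
    confidence[room] += LEARNING_RATE * (target - confidence[room])

update("kitchen", True)    # user confirms the prediction
update("office", False)    # user corrects a wrong prediction
print(confidence)
```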

The research team will work with a Google sponsor and will be collaborating with specialists at Google throughout the ‘Active Vision with Human-in-the-Loop for the Visually Impaired’ project.

Dr Bellotto also discussed the project in an interview with BBC Radio Lincolnshire.

A PhD position is now available to work on this project. Click here for further details.

Posted in Research.
