Guest Lecture: Kathrin Gerling (University of Saskatchewan)

Wheelchair Revolution: Motion-Based Games for Players with Mobility Disabilities

The growing popularity of motion-based video games such as Dance Dance Revolution, Dance Central, or Wii Fit creates new challenges for game design. Research has shown that such games have a variety of benefits, including improved cognition, physical health, and emotional well-being. However, they remain largely inaccessible for people with mobility disabilities, a group that could greatly benefit from physical activity. In this talk, Kathrin presents an approach to making motion-based video games wheelchair-accessible, and discusses challenges and design opportunities for players with mobility disabilities, focusing on game interface design and game balancing.

This lecture is scheduled for Monday 4th November, from 1pm to 2pm in the EMMTEC Lecture Theatre.


Bringing video games into the real world

Video games are slowly moving out of the monitor and into the real world, and this next stage in gaming's development will be shared with Computer Science students during a special workshop.

‘Real-world’ or Mixed Reality gaming is fast becoming the next big thing in computer games advancement.

Students will be learning how to create games set in real environments during a special two-week workshop led by Richard Wetzel, a PhD student from the University of Nottingham.

Richard said: “These Mixed Reality location-based games are interesting because, unlike when you play traditional video games, you are moving around using your whole body and senses to explore the real-world environment. For example, they give people the opportunity to see the city they live in through new eyes.

“The main difficulty when designing games like this, which also is an advantage, is that you cannot completely control the real world. You obviously have other people and situations, such as the weather, that will change what is happening. Although you cannot foresee these complications, this is what makes the game a much richer experience. The serendipity of the real world influences players’ actions. I will be teaching the students about these problems and how to overcome them.”

The workshop at the University of Lincoln runs from Sunday, 27th October to Saturday, 9th November and involves students from all year groups.

For an example of Richard’s previous work in this area go to

Detecting pain in cats

Analysing cats’ facial expressions could lead to a major breakthrough in helping to alleviate feline suffering.

Computer vision expert Dr Georgios Tzimiropoulos, from the University of Lincoln, UK, has been pioneering the development of self-learning computer vision systems to aid the automatic detection of facial expressions.

Although the focus has been on humans, the technology will now be used to explore emotional expression in cats.

Derbyshire-based charity Feline Friends donated almost £400,000 for the research, which aims to detect suffering earlier, and to pick up more subtle signs of it than has previously been possible, so that owners seek veterinary assistance sooner.

The idea is that, by feeding the computer images of cats before and after treatment, it will eventually learn to pick out the key features that differentiate the two conditions.
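The article does not describe the project's actual algorithm, but the underlying idea (learning from labelled "before treatment" and "after treatment" examples) can be illustrated with a toy nearest-centroid classifier. Everything in the sketch below, including the four-number feature vectors, is an illustrative assumption, not the project's real method:

```python
import numpy as np

# Toy feature vectors standing in for facial measurements a vision pipeline
# might extract (e.g. ear position, muzzle tension). Purely illustrative:
# the project's actual features are not described in the article.
rng = np.random.default_rng(0)
before = rng.normal(loc=1.0, scale=0.3, size=(20, 4))  # "before treatment" samples
after = rng.normal(loc=0.0, scale=0.3, size=(20, 4))   # "after treatment" samples

# Learn one centroid per condition from the labelled examples.
centroid_before = before.mean(axis=0)
centroid_after = after.mean(axis=0)

def classify(features):
    """Assign a new sample to whichever condition's centroid is nearer."""
    d_before = np.linalg.norm(features - centroid_before)
    d_after = np.linalg.norm(features - centroid_after)
    return "before" if d_before < d_after else "after"

print(classify(np.array([0.9, 1.1, 1.0, 0.8])))  # features near the "before" cluster
```

A real system would of course use far richer features and a stronger classifier, but the training loop is the same in spirit: labelled images in, discriminative features out.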

Dr Tzimiropoulos will work with leading veterinary behaviourist Professor Daniel Mills, from the School of Life Sciences, who has been developing a clinical technique to help behaviourists identify the emotions of companion animals.

Professor Mills said: “This is a rare opportunity to systematically explore the emotional aspects of suffering in animals in new ways, with a view to developing more efficient early detection mechanisms. The multidisciplinary approach we will be using is ambitious, but has the potential to produce enormous rewards not just for those interested in feline welfare, but also animal welfare more broadly, as the methods we will be developing could be applied to any species.

“The translation of our findings into a usable resource is a major part of the project, so we can maximise the impact of our research. We are delighted that Feline Friends has had the courage and vision to make such a substantial investment in this pioneering work. We anticipate the project will take nearly five years to complete, but hope to be making useful contributions from an early stage within the research.”

Caroline Fawcett, of the charity Feline Friends, added: “Helping owners to better understand their feline companions, and the numerous ailments which beset them, has always been a paramount objective of our charity. Cats are notorious for not showing pain until their suffering becomes unbearable, and this visionary research may open our eyes in such a way that we can take much earlier action to relieve their suffering. The team at the University of Lincoln has demonstrated to us that they really do care about improving the welfare of our cats; and I believe that if anyone can succeed in breaking through the existing barriers to our knowledge then they can.”

Smarter video searching and indexing

Typing in text to find a film clip on YouTube often results in diverse (and sometimes unrelated) videos being suggested.

This problem could soon be resolved with the advent of smarter video-search engines that are able to pick and choose the most relevant videos by analysing a tiny fraction of video frames.

Research to create a quick and easy framework that is able to discover semantic similarities between videos without using text-based tags is being carried out at the University of Lincoln, UK.

The volume of video data is rapidly increasing with more than 4 billion hours of video being watched on YouTube each month. The majority of available video data exists in compressed format and the first step towards effective video retrieval is to extract features from these compressed videos.

School of Computer Science PhD student Saddam Bekhet, along with Dr Amr Ahmed and Professor Andrew Hunter, has produced a paper on recent work which suggests a framework towards real-time video matching.

Saddam said: “Everyone uses search engines but currently you are only able to search by text even to search for a video clip, thus some results are far removed from what you were looking for. With the huge volume of data, a smarter video analyser is required to associate semantic tags to the uploaded videos, allowing more efficient indexing and search (including the contents of the video). Being able to enhance the underlying search mechanism (or even input a visual query) would really enhance the likes of YouTube.”

Saddam’s framework relies on finding similarities between videos using tiny frames rather than full-size video frames. These tiny frames can be extracted from a compressed video in real time and still represent the video content well, without the cost of fully decompressing the video before running complex analysis algorithms.

“I want to discover the semantic similarity between videos using the content only,” he explained. “I adapted some new techniques and found that tiny representative frames could be used to discover similarities. The next stage is to build an effective framework.”
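The "DC-image" named in the paper's title gives a feel for what these tiny frames are: in DCT-based compression each 8x8 pixel block is summarised by a DC coefficient that encodes the block's average, so collecting the DC values yields an image 1/64th the size of the original. The sketch below computes DC-images from raw pixels and compares them with normalised correlation; the real framework reads DC coefficients straight from the compressed stream and combines them with local features, so this is an illustration of the principle only:

```python
import numpy as np

def dc_image(frame, block=8):
    """Approximate a frame's DC-image: the mean of each 8x8 pixel block,
    which is what the DC coefficient of a DCT-coded block encodes.
    (A real decoder would read these values from the bitstream directly.)"""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    return frame[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def similarity(a, b):
    """Normalised cross-correlation between two equal-size DC-images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Two test frames: a brightness-shifted copy (same content) and pure noise.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
shifted = frame + 20.0
noise = rng.integers(0, 256, size=(64, 64)).astype(float)

print(similarity(dc_image(frame), dc_image(shifted)))  # near 1: same content
print(similarity(dc_image(frame), dc_image(noise)))    # near 0: unrelated content
```

Because an 8x8 DC-image has only 64 values, comparing every frame of one video against another stays cheap, which is what makes real-time matching plausible.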

The research follows on from work carried out to provide a framework for automated video analysis and annotation, by Lincoln’s Digital Contents Analysis, Production and Interaction (DCAPI) Group.

The paper ‘Video Matching Using DC-image and Local Features’ was awarded the Best Student Paper Award at the International Conference of Signal and Image Engineering - part of the 2013 World Congress on Engineering.
Organised by the International Association of Engineers (IAENG), the conference focuses on frontier topics in theoretical and applied engineering and computer science.

