All posts by Marc Hanheide

Research Seminar: Robot Learning with an Unknown Reward Function

We are pleased to announce an exciting seminar by Robert Pinsler (University of Cambridge). He will visit us on Wednesday, 11/4/2018, and give a talk at 2pm in INB3102.

Robot Learning with an Unknown Reward Function

While reinforcement learning has led to promising results in robotics, defining an informative reward function often remains challenging. In this talk, I will give an overview of different reward learning approaches and how they can be used for learning robot policies in practice. In particular, I will present an efficient hierarchical reinforcement learning approach for learning how to grasp objects from preferences. Furthermore, I will show how inverse reinforcement learning can be used to learn the flocking behavior of birds, which could potentially be used for apprenticeship learning of robot swarms.

Research Seminar, Fri 10/11/17, 2pm: Modelling and Detecting Objects for Home Robots

Everyone interested in robotics, computer vision, and computer science in general is cordially invited to the School of Computer Science research seminar

on Friday, 10/11/2017 at 2pm

in room JUN0001 (The Junction).

Modelling and Detecting Objects for Home Robots

Markus Vincze, Technical University Vienna

Abstract

In the near future, service robots will start to handle objects in home tasks such as clearing the floor or table, tidying up, or setting the table. Robots will need to know about all the objects in the environment. As a start, humans could show their favourite objects to the robot to obtain full 3D models. These models are then used for object tracking and object recognition. Since modelling all objects in a home is cumbersome, learning object classes from the Web has become an option. While network-based approaches do not perform well in open settings, using 3D models and shape for detection in a hypothesis-and-verification scheme makes it possible to detect many objects touching each other. Finally, the models are linked to grasp-point detection and warping, so that objects with small differences can be handled and the uncertainties of both modelling and robot grasping are accounted for. These methods are evaluated in settings for taking objects out of boxes, picking up objects from the floor, and keeping track of objects in user homes.
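The hypothesis-and-verification idea mentioned in the abstract can be sketched in a few lines: a candidate object pose is accepted only if enough transformed model points are explained by the scene cloud. The toy below is illustrative only (translation-only hypotheses, brute-force nearest neighbours); the function name, thresholds, and data are invented for the example and are not from the speaker's system.

```python
import numpy as np

def verify(model_pts, scene_pts, pose_t, inlier_dist=0.01, min_support=0.6):
    """Accept a detection hypothesis if enough model points lie near scene points."""
    placed = model_pts + pose_t          # place model at the candidate pose (translation only)
    # distance from each placed model point to its nearest scene point
    d = np.linalg.norm(placed[:, None, :] - scene_pts[None, :, :], axis=2).min(axis=1)
    support = np.mean(d < inlier_dist)   # fraction of model points explained by the scene
    return support >= min_support, support

# Toy data: the scene contains the model shifted by (0.5, 0, 0)
model = np.random.default_rng(0).uniform(size=(50, 3))
scene = model + np.array([0.5, 0.0, 0.0])

ok, s = verify(model, scene, np.array([0.5, 0.0, 0.0]))
print(ok, s)   # True 1.0 — the correct hypothesis is fully supported
bad, _ = verify(model, scene, np.zeros(3))
print(bad)     # wrong pose: almost no model points are explained
```

Scoring hypotheses by explained scene support, rather than per-object appearance alone, is what lets such schemes separate objects that touch each other.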

Biography of Markus Vincze

Markus Vincze received his diploma in mechanical engineering from Technical University Wien (TUW) in 1988 and an M.Sc. from Rensselaer Polytechnic Institute, USA, in 1990. He finished his PhD at TUW in 1993. With a grant from the Austrian Academy of Sciences he worked at HelpMate Robotics Inc. and at the Vision Laboratory of Gregory Hager at Yale University. In 2004, he obtained his habilitation in robotics. Presently he leads the “Vision for Robotics” (V4R) team at TUW with the vision to make robots see. V4R regularly coordinates EU (e.g., ActIPret, robots@home, HOBBIT) and national research projects (e.g., vision@home) and contributes to research (e.g., CogX, STRANDS, Squirrel, ALOOF) and innovation projects (e.g., Redux, FloBot). With Gregory Hager he edited a book on robust vision for IEEE and is (co-)author of 42 peer-reviewed journal articles and over 300 other reviewed publications. He was the program chair of ICRA 2013 in Karlsruhe and will organise HRI 2017 in Vienna. Markus’ special interests are cognitive computer vision techniques for robotics solutions situated in real-world environments, especially homes.

Robotics Research Seminar 24/5/17: “Making Robust SLAM Solvers for Autonomous Mobile Robots”

We invite everybody to attend the robotics research seminar, organised by L-CAS, on Wednesday 24/5/2017:

Dr Giorgio Grisetti, DIAG, University of Rome “Sapienza”:

Making Robust SLAM Solvers for Autonomous Mobile Robots

  • WHERE: AAD1W11, Lecture Theatre (Art, Architecture and Design Building), Brayford Pool Campus
  • WHEN: Wednesday 24th May 2017, 3:00 – 4:00 pm

ABSTRACT:

In robotics, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it.
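To make this definition concrete, the dominant modern formulation treats SLAM as least-squares optimisation over a graph of relative measurements. The sketch below is illustrative only (not code from the talk): it solves a toy 1D pose graph with noisy odometry between consecutive poses plus one loop closure, anchored by a prior on the first pose.

```python
import numpy as np

# Minimal 1D pose-graph SLAM sketch: estimate positions x0..x3 from
# noisy relative measurements. Each constraint says x_j - x_i ≈ z.
constraints = [
    (0, 1, 1.1),   # odometry: x1 - x0 ≈ 1.1
    (1, 2, 1.0),   # odometry: x2 - x1 ≈ 1.0
    (2, 3, 1.1),   # odometry: x3 - x2 ≈ 1.1
    (0, 3, 3.0),   # loop closure: x3 - x0 ≈ 3.0
]
n = 4
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for row, (i, j, z) in enumerate(constraints):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0] = 1.0          # prior anchoring x0 = 0 (removes the gauge freedom)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                # → approximately [0.0, 1.05, 2.0, 3.05]
```

The raw odometry sums to 3.2, but the loop closure says the total is 3.0; least squares spreads the 0.2 cycle error evenly over the four equally weighted constraints, which is exactly the correction a pose-graph SLAM back end performs, albeit nonlinearly and in SE(2)/SE(3) in real systems.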

SLAM is an essential enabling technology for building truly autonomous robots that can operate in unknown environments. The last three decades have seen substantial research in the field, and modern SLAM systems now cope easily with operating conditions that were once regarded as challenging, if not impossible, to deal with.

This might support the claim that SLAM is a closed problem. However, a closer look at the contributions presented at the most relevant robotics conferences and journals reveals that papers on SLAM are still numerous and the community remains large. Would this be the case if an off-the-shelf solution that worked all the time were available?

Non-experts who approach the problem, or who simply want to get one of the state-of-the-art systems running, often encounter problems and obtain performance far from that reported in the papers. This is usually because the person using the system is not the person who designed it. An open-box approach that aims to solve these problems by modifying an existing pipeline is often hard to implement due to the complexity of modern SLAM systems.

In this talk we will review the history of SLAM and outline some of the challenges in designing robust SLAM systems and, most importantly, in building robust SLAM solvers.

Furthermore, we will present PRO-SLAM (SLAM from a programmer’s perspective), a deliberately simple open-source pipeline that competes with state-of-the-art stereo visual SLAM systems while focusing on simplicity to support teaching.

https://gitlab.com/srrg-software/srrg_proslam