The research proposed in this project is driven by the need for independent mobility for the visually impaired. It addresses the fundamental problem of active vision with a human in the loop, enabling an improved navigation experience, including real-time categorization of indoor environments with a handheld RGB-D camera. This is particularly challenging due to the unpredictability of human motion and sensor uncertainty. While visual-inertial systems can estimate the position of a handheld camera, the camera must often also be pointed towards observable objects and features to support particular navigation tasks, e.g. to enable place categorization. An attention mechanism for purposeful perception, which drives human actions to focus on surrounding points of interest, is therefore needed. This project proposes a novel active vision system with a human in the loop that anticipates, guides and adapts to the actions of a moving user, implemented and validated on a mobile device to aid the indoor navigation of the visually impaired.
This is a collaborative project involving academic institutes, industrial partners and charity organizations across six European countries. It tackles the progressive decline of cognitive capacity in the ageing population by proposing an integrated platform for Ambient Assisted Living (AAL) with a mobile robot for long-term human monitoring and interaction, which helps the elderly remain independent and active for longer. The system will build on and contribute to recent advances in mobile robotics and AAL, exploiting new non-invasive techniques for physiological and activity monitoring, as well as adaptive human-robot interaction, to provide services in support of mental fitness and social inclusion. Our research contribution in this project focuses on robot perception and ambient intelligence for human tracking and identity verification, as well as physiological and long-term activity monitoring of the elderly at home. Primary tasks include developing novel algorithms and approaches for acquiring, maintaining and refining models that describe human motion behaviors over extended periods, as well as integrating these algorithms with the AAL system.
The life span of ordinary people is increasing steadily, and many developed countries, including the UK, are facing the major challenge of an ageing population at greater risk of impairments and cognitive disorders, which hinder their quality of life. Early detection and monitoring of human activities of daily living (ADLs) is important in order to identify potential health problems and apply corrective strategies as soon as possible. In this context, the main aim of the current research is to monitor human activities in an ambient assisted living (AAL) environment, using a mobile robot for 3D perception, high-level reasoning and representation of such activities. The robot will enable constant but discreet monitoring of people in need of home care, complementing other fixed monitoring systems and proactively engaging in case of emergency. The goal of this research will be achieved by developing novel qualitative models of ADLs, including new techniques for 3D sensing of human motion and RFID-based object recognition. This research will be further extended with new solutions for long-term human monitoring and anomaly detection.
We use a variety of objects in our everyday life, such as glasses, newspapers, or cups. These objects may be on the table, on the floor, or in a cabinet, and they are moved, handled, and put down by humans or robots. Looking for such objects is a task that we carry out often and that can be quite time consuming, or even frustrating if the desired object is not found. Thus, the main purpose of our research is to design a robot able to find everyday objects in indoor environments and bring them to the user. To carry out this task, the robot will combine human tracking techniques with other sensors, such as vision or RFID, to trace the objects and pick them up. One of the main points of this research is the interaction and exchange of information between the robot, the environment, and the human to solve complicated tasks such as finding lost objects. In this last case, even if the robot does not possess complete knowledge about the searched object, such as its shape, color, or precise location, it will search for and find the object according to the, sometimes ambiguous, instructions from the human. In other words, the robot and the human will collaborate and increase their own beliefs about the searched object by exchanging partial information about its state. A robot able to find objects in indoor environments has many applications. For example, it could help people with motor disabilities by fetching the objects they need. Moreover, people with memory loss, such as Alzheimer's patients, and elderly people will also benefit greatly from this system.
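The exchange of partial information described above can be illustrated with a simple Bayesian update, where the robot fuses its own belief over candidate object locations with an ambiguous human hint treated as a likelihood. This is a minimal sketch, not the project's actual method; all locations and probability values below are hypothetical:

```python
import numpy as np

# Candidate locations for the searched object (hypothetical).
locations = ["table", "floor", "cabinet", "sofa"]

# Robot's prior belief from its own sensing history (assumed values).
prior = np.array([0.4, 0.1, 0.3, 0.2])

# Human hint "I think it is on the table, maybe the sofa", encoded as a
# likelihood over locations (assumed values; ambiguity = spread-out mass).
hint_likelihood = np.array([0.6, 0.05, 0.05, 0.3])

# Bayes rule: posterior is proportional to prior * likelihood, normalized.
posterior = prior * hint_likelihood
posterior /= posterior.sum()

for loc, p in zip(locations, posterior):
    print(f"{loc}: {p:.3f}")
# The ambiguous hint sharpens the robot's belief towards "table".
```

Each new piece of partial information (from either agent) can be fused the same way, multiplying it into the current belief and renormalizing.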
The main goal of the project is to advance the science of cognitive systems through a multi-disciplinary investigation of requirements, design options and trade-offs for human-like, autonomous, integrated, physical (e.g., robot) systems, including requirements for architectures, forms of representation, perceptual mechanisms, learning, planning, reasoning, motivation, action and communication.
The goal of this project is to develop an autonomous vehicle based on the Smart car. The project was created as a cooperation between the Autonomous Systems Lab at EPFL, Lausanne, and the research group for Autonomous Intelligent Systems at ALU, Freiburg, to participate in the ELROB 2006 event, where 20 European teams demonstrated outdoor navigation technology.
[Technical Paper (pdf: 737k)]
Daily life assistance for elderly people is one of the most promising scenarios for service robots in the near future. In particular, the go-and-fetch task will be one of the most requested in this scenario. We work on an informationally structured room that supports a service robot in the task of daily object fetching.
We work on several approaches to detect and categorize different furniture objects, such as chairs, tables or sideboards, in 3D point clouds in the presence of clutter and occlusion. The detection and categorization of these objects is an important capability for service robots acting in indoor environments, as such objects play important roles in the tasks these robots perform.
Furniture Classification in ROS:
The recognition of people is a key capability for service robots that interact with humans. We work on the detection of people in laser scans from one or several layers. The key idea is to learn classifiers for body parts, which are represented by segments in the laser scans. The main goal is to obtain a robust classifier that is able to detect people in different environments.
People Detection in ROS:
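The segment-based detection idea can be sketched as follows: split the scan at range discontinuities, compute simple geometric features per segment, and score each segment with a classifier. This is a minimal illustration, not the actual system; the hand-tuned rule and all thresholds below are assumptions standing in for the learned body-part classifiers (which are trained on labeled scans):

```python
import numpy as np

def segment_scan(ranges, angles, jump=0.3):
    """Split scan points into segments wherever consecutive ranges jump
    by more than `jump` meters (threshold is an assumed value)."""
    pts = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    segments, current = [], [pts[0]]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append(np.array(current))
            current = []
        current.append(pts[i])
    segments.append(np.array(current))
    return segments

def features(seg):
    """Simple per-segment features: point count, end-to-end width,
    and mean deviation of the points from the segment centroid."""
    width = np.linalg.norm(seg[0] - seg[-1])
    mean_dev = np.linalg.norm(seg - seg.mean(axis=0), axis=1).mean()
    return len(seg), width, mean_dev

def looks_like_leg(seg, min_pts=3, max_width=0.25):
    """Toy stand-in for a learned body-part classifier (assumed thresholds):
    a leg-like segment is a small, narrow cluster of points."""
    n, width, _ = features(seg)
    return n >= min_pts and width <= max_width

# Synthetic scan: a distant wall with one narrow, close cluster (a "leg").
angles = np.deg2rad(np.arange(0.0, 30.0, 1.0))
ranges = np.full(30, 5.0)
ranges[10:15] = 1.0
segs = segment_scan(ranges, angles)
print(sum(looks_like_leg(s) for s in segs))  # → 1
```

In the actual approach, the per-segment feature vectors feed a classifier learned from labeled scans, and detections from body parts are combined into person hypotheses.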
Semantic labeling is the problem of assigning labels with semantic meaning to the different places that compose indoor environments. Typical labels include "corridor", "room", "office", "lab", or "doorway". The ability to learn such semantic categories from sensor data enables a mobile robot to extend its representation of the environment, facilitating interaction with humans. We study different approaches to categorizing places using input from different sensors, such as laser scanners, vision cameras, and RGB-D cameras.
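As a minimal illustration of laser-based place categorization, the sketch below extracts a few simple range statistics per scan and classifies with a nearest-centroid rule on synthetic "corridor" and "room" scans. The features, classifier, and data are all illustrative assumptions, standing in for the richer features and learned classifiers used in practice:

```python
import numpy as np

def scan_features(ranges):
    """Toy per-scan features: mean range, range spread, fraction of long beams."""
    r = np.asarray(ranges, dtype=float)
    return np.array([r.mean(), r.std(), (r > 3.0).mean()])

def train_centroids(labeled_scans):
    """labeled_scans: list of (ranges, label). Returns label -> mean feature vector."""
    by_label = {}
    for ranges, label in labeled_scans:
        by_label.setdefault(label, []).append(scan_features(ranges))
    return {lab: np.mean(f, axis=0) for lab, f in by_label.items()}

def categorize(ranges, centroids):
    """Assign the label whose feature centroid is closest to this scan."""
    f = scan_features(ranges)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

rng = np.random.default_rng(0)

def corridor():  # long free space ahead and behind, close walls to the sides
    r = np.full(360, 1.0)
    r[:30] = 8.0
    r[180:210] = 8.0
    return r + rng.normal(0, 0.05, 360)

def room():      # walls at a roughly uniform distance all around
    return np.full(360, 2.5) + rng.normal(0, 0.05, 360)

centroids = train_centroids([(corridor(), "corridor"), (room(), "room")])
print(categorize(corridor(), centroids))  # → corridor
```

Corridors are separated from rooms here mainly by the large spread of their ranges; real systems use many more geometric features and classifiers trained on labeled data.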
Acquiring maps of the environment is a fundamental task for autonomous mobile robots, since maps are required for higher-level tasks such as navigation and localization. Consequently, the problem of simultaneous localization and mapping (SLAM) has received significant attention in the robotics community. If the robot uses a camera as its sensor for constructing the map, the approach is known as visual SLAM.
We compare the behavior of different interest point detectors and descriptors under the conditions required for their use as landmarks in vision-based simultaneous localization and mapping (SLAM).
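One criterion in such a comparison is repeatability: the fraction of interest points that are re-detected after a known image transformation, since a good SLAM landmark must be found again from new viewpoints. The sketch below measures repeatability under pure translation, with a toy Harris-style corner detector standing in for the detectors under comparison; the detector, test image, and thresholds are all illustrative assumptions:

```python
import numpy as np

def box3(a):
    """3x3 box sum of a 2D array."""
    p = np.pad(a, 1)
    n, m = a.shape
    return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3))

def harris_corners(img, k=0.04, rel_thresh=0.1):
    """Toy Harris detector: (row, col) locations of corner-response maxima."""
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    # Non-maximum suppression: keep local 3x3 maxima above a relative threshold.
    p = np.pad(r, 1, constant_values=-np.inf)
    n, m = r.shape
    neigh = np.max(np.stack([p[i:i + n, j:j + m]
                             for i in range(3) for j in range(3)]), axis=0)
    return np.argwhere((r >= neigh) & (r > rel_thresh * r.max()))

def repeatability(img, dy, dx):
    """Fraction of corners re-detected after translating the image."""
    c0 = {tuple(p) for p in harris_corners(img)}
    shifted = np.roll(img, (dy, dx), axis=(0, 1))
    c1 = {(y - dy, x - dx) for y, x in harris_corners(shifted)}
    return len(c0 & c1) / max(len(c0), 1)

img = np.zeros((40, 40))
img[10:20, 10:20] = 1.0   # a bright square, giving four corner landmarks
print(repeatability(img, 3, 2))  # ideal detector: 1.0 under pure translation
```

An actual comparison would run the same measurement for each candidate detector under rotation, scale and viewpoint changes, and additionally score descriptor matching rates.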
There have been several attempts to develop bio-inspired artificial retinas with the purpose of replacing or partially recovering damaged perceptual functionality. Moreover, retina-inspired models have been used to improve vision systems in robots. However, the complete information processing inside the retina is not yet fully understood.
The retina is organized into layers formed by many local neuronal circuits that work in parallel; the cells of each type are arranged in regular spatial patterns called mosaics. The study of the spatial relations between these mosaics is highly relevant to understanding how visual information spreads along them.
We apply spatial point pattern methods to study the spatial relations between the different mosaics of the retina.
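As a small example of such methods, the sketch below computes the Clark-Evans index: the ratio of the observed mean nearest-neighbor distance to the value expected under complete spatial randomness. Values near 1 indicate a random pattern, while values well above 1 indicate regularity, as found in retinal mosaics. This is a simplified illustration on synthetic data; edge corrections and the cross-mosaic statistics used in the actual analysis are omitted:

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans index: observed mean nearest-neighbor distance divided by
    the expectation 0.5 / sqrt(density) under complete spatial randomness.
    No edge correction is applied (simplifying assumption)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # ignore self-distances
    mean_nn = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(len(pts) / area)
    return mean_nn / expected

rng = np.random.default_rng(1)

# A regular "mosaic": a jittered unit grid in a 10x10 window.
gx, gy = np.meshgrid(np.arange(0.5, 10, 1.0), np.arange(0.5, 10, 1.0))
mosaic = np.stack([gx.ravel(), gy.ravel()], axis=1) + rng.normal(0, 0.05, (100, 2))

# A completely random pattern with the same intensity.
random_pts = rng.uniform(0, 10, (100, 2))

print(round(clark_evans(mosaic, 100.0), 2))      # well above 1 → regular
print(round(clark_evans(random_pts, 100.0), 2))  # close to 1 → random
```

Statistics of this family (nearest-neighbor distributions, Ripley's K and their cross-type variants) are what allow spatial dependence between two mosaics to be quantified rather than judged by eye.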