
MeganePro – MyoElectricity, Gaze and Artificial Intelligence for Neurocognitive Examination Prosthetics

Bilateral and monolateral hand amputees suffer severe functional deficits due to their impairment. Surface electromyography (sEMG) currently provides some control capabilities, but these are limited, often unnatural, and usually require long training times. The application of modern machine learning techniques to analyze sEMG activity related to natural movements seems promising, but it is still far from clinical practice. With the NINAPRO project, we started to improve this situation by establishing a benchmark database of sEMG data for hand movements, which has been welcomed with enthusiasm by the scientific community. In recent years, several papers have shown that combining visual and electromyographic data can strongly extend the capabilities of dexterous prostheses. With MeganePro, we aim to bring research in this field to the next step, i.e. to better understand the neurologic and neurocognitive effects of amputation on the person and to substantially improve the control of robotic prostheses by hand amputees. The project aims to advance the state of the art in hand prosthetics and also to improve the clinical outcome for patients (e.g., by respecting individual phantom limb phenomenology).


RoboExNovo – Robots Learning about Objects from Externalized Knowledge Sources

While today’s robots are able to perform sophisticated tasks, they can only act on objects they have been trained to recognize. This is a severe limitation: any robot will inevitably face novel situations in unconstrained settings, and thus will always have knowledge gaps. This calls for robots able to learn continuously about objects by themselves. The learning paradigm of state-of-the-art robots is sensorimotor toil, i.e. the process of acquiring knowledge by generalization over observed stimuli. This is in line with cognitive theories claiming that cognition is embodied and situated, so that all knowledge acquired by a robot is specific to its sensorimotor capabilities and to the situation in which it was acquired. Still, humans are also capable of learning from externalized sources such as books and illustrations, which contain knowledge that is necessarily unembodied and unsituated. To overcome this gap, RoboExNovo proposes a paradigm shift: I will develop a new generation of robots able to acquire perceptual and semantic knowledge about objects from externalized, unembodied resources and to use it in situated settings. By enabling robots to use knowledge resources that were not explicitly designed to be accessed for this purpose, RoboExNovo will pave the way for groundbreaking technological advances in home and service robotics, driver assistance systems, and, in general, any Web-connected situated device.


ALOOF – Autonomous Learning of the Meaning of Objects. FP7 CHIST-ERA

For autonomous robots to reliably perform service tasks, it is crucial that they continuously learn, adapt and improve in the context of ever-changing environments. We are still far from this: as of today, even the best system we can engineer will fail when facing situations not anticipated at design time. This is because the real world is too complex and unpredictable to be summarized within a limited set of specifications. A robot will therefore inevitably face novel situations, and thus will always have gaps, conflicts or ambiguities in its own knowledge and capabilities. The goal of ALOOF is to equip autonomous systems with the ability to learn the meaning of objects, i.e. their perceptual and semantic properties and functionalities, from curated resources such as structured textual and visual databases. Our evaluation scenario will consist of a robot autonomously extending a semantic object map in an elderly care facility setting. We will vary which objects the robot has not seen before, and extend task descriptions to reference knowledge initially unknown to the robot. This challenging benchmarking scenario will drive and guide our development and support the evaluation of our progress.