Information-driven Affordance Discovery
for Efficient Robotic Manipulation

Pietro Mazzaglia
Qualcomm AI Research
Ghent University

Taco Cohen
Qualcomm AI Research

Daniel Dijkman
Qualcomm AI Research


Abstract

Robotic affordances, which indicate what actions can be taken in a given situation, can aid robotic manipulation. However, learning affordances typically requires large, expensive annotated datasets of interactions or demonstrations. In this work, we argue that well-directed interactions with the environment can mitigate this problem, and we propose an information-based measure that augments the agent's objective to accelerate the affordance discovery process. We provide a theoretical justification for our approach and validate it empirically on both simulated and real-world tasks. Our method, which we dub IDA, enables the efficient discovery of visual affordances for several action primitives, such as grasping, stacking objects, and opening drawers. It strongly improves data efficiency in simulation, and it allows us to learn grasping affordances in a small number of interactions on a real-world setup with a UFACTORY XArm 6 robot arm.
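
To give a rough intuition for how an information-based measure can direct interactions, the sketch below scores candidate interaction points by the entropy of a learned success predictor, so the agent favors points where the affordance model is most uncertain. This is a minimal illustration, not the exact formulation from the paper: the Bernoulli-entropy choice, the function names, and the `beta` trade-off parameter are all assumptions made for the sketch.

import numpy as np

def bernoulli_entropy(p, eps=1e-6):
    # Entropy (in nats) of a Bernoulli success prediction; maximal at p = 0.5.
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def select_interaction(candidates, success_prob, beta=1.0):
    # Score each candidate by predicted success (exploitation) plus an
    # entropy bonus (information-seeking exploration), and pick the best.
    probs = np.array([success_prob(c) for c in candidates])
    scores = probs + beta * bernoulli_entropy(probs)
    return candidates[int(np.argmax(scores))]

# Toy usage (hypothetical predictor): three candidate grasp pixels with
# predicted success probabilities; the entropy bonus pulls the choice
# toward uncertain regions when beta is large.
predictor = {(10, 20): 0.9, (30, 40): 0.5, (50, 60): 0.1}
best = select_interaction(list(predictor), lambda pt: predictor[pt], beta=1.0)

With beta = 0 this reduces to greedy exploitation of the current affordance model; increasing beta shifts interactions toward points whose outcome would be most informative, which is the behavior the information-based objective is meant to encourage.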


Efficient visual affordance learning in simulation

Benchmarked on a series of ManiSkill2 simulated tasks, Information-driven Affordance Discovery (IDA) achieves superior final performance in terms of mean success rate (+8% overall success).

Online visual affordance learning in Real-world

When tested in the real world, IDA accelerates visual grasp affordance learning on our robotic setup, reaching a 90% success rate after only 250 self-supervised interactions.

Citation

@inproceedings{Mazzaglia2024IDA,
  title={Information-driven Affordance Discovery for Efficient Robotic Manipulation},
  author={Pietro Mazzaglia and Taco Cohen and Daniel Dijkman},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2024},
}