GRAB: A Dataset of Whole-Body Human Grasping of Objects
Abstract
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This gives detailed 3D meshes over time, from which we compute contact between the body and object. This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application: we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at grab.is.tue.mpg.de.
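As a rough illustration of the proximity-based idea behind contact between the posed body mesh and the object mesh, the sketch below flags body vertices that lie within a small distance of the object's vertices. The inputs, threshold, and random stand-in geometry are illustrative assumptions; the actual GRAB contact annotations are produced by the released code and may differ in detail.

```python
# Minimal sketch: proximity-based contact detection between two meshes.
# Illustrative only -- not the exact procedure used to build GRAB.
import numpy as np
from scipy.spatial import cKDTree

def contact_mask(body_verts, object_verts, threshold=0.005):
    """Flag body vertices lying within `threshold` meters of the object.

    body_verts:   (Nb, 3) array of body mesh vertices (e.g. SMPL-X output).
    object_verts: (No, 3) array of object mesh vertices in the same frame.
    Returns a boolean array of shape (Nb,), True where contact is assumed.
    """
    tree = cKDTree(object_verts)            # spatial index over the object vertices
    dists, _ = tree.query(body_verts, k=1)  # nearest object vertex per body vertex
    return dists < threshold                # contact = closer than the threshold

# Example with random stand-in geometry (replace with real mesh vertices).
body_verts = np.random.rand(10475, 3)       # SMPL-X meshes have 10475 vertices
object_verts = np.random.rand(2000, 3)
mask = contact_mask(body_verts, object_verts)
print(f"{mask.sum()} body vertices in contact")
```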
Video
News
24 October 2020
- Rendered videos are now available for preview (Downloads section).
25 August 2020
We have now released:
- Our code on GitHub (links below).
- Our GRAB v1.0 dataset.
- Our GrabNet v1.0 data and trained models.
Data and Code
Please register and accept the license agreement on this website in order to get access to the GRAB dataset. The license and the Downloads section include explicit per-subject restrictions, with which you agree to comply.
When creating an account, please opt in to email communication, so that we can reach you by email to announce significant updates.
- GRAB dataset (works only after sign-in)
- GrabNet data (works only after sign-in)
- GrabNet model files/weights (works only after sign-in)
- Code for GRAB (GitHub)
- Code for GrabNet (GitHub)
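As a hedged sketch of how a downloaded sequence might be consumed, the snippet below poses a SMPL-X body from stored parameters with the smplx Python package. The file name and key names (`params.npz`, `betas`, `body_pose`, ...) are illustrative assumptions, not the official GRAB format; please follow the GRAB GitHub repository above for the authoritative loading code.

```python
# Hypothetical sketch: posing a SMPL-X body from stored parameters with the
# `smplx` package. File layout and key names are assumptions -- see the GRAB
# GitHub repository for the real loaders.
import numpy as np
import torch
import smplx

seq = np.load('params.npz')                   # hypothetical per-frame parameter file
to_t = lambda a: torch.tensor(a, dtype=torch.float32)

model = smplx.create(
    model_path='models',                      # folder holding the SMPL-X model files
    model_type='smplx',
    gender='neutral',
    use_pca=False,                            # full hand pose instead of PCA components
    batch_size=1,
)

output = model(
    betas=to_t(seq['betas']).reshape(1, -1),
    global_orient=to_t(seq['global_orient']).reshape(1, 3),
    body_pose=to_t(seq['body_pose']).reshape(1, -1),
    left_hand_pose=to_t(seq['left_hand_pose']).reshape(1, -1),
    right_hand_pose=to_t(seq['right_hand_pose']).reshape(1, -1),
    transl=to_t(seq['transl']).reshape(1, 3),
)
vertices = output.vertices.detach().numpy()[0]  # (10475, 3) posed body mesh
print(vertices.shape)
```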
Referencing GRAB
@inproceedings{GRAB:2020,
title = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
author = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2020},
url = {https://grab.is.tue.mpg.de}
}
@inproceedings{Brahmbhatt_2019_CVPR,
title = {{ContactDB}: Analyzing and Predicting Grasp Contact via Thermal Imaging},
author = {Brahmbhatt, Samarth and Ham, Cusuh and Kemp, Charles C. and Hays, James},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2019},
url = {https://contactdb.cc.gatech.edu}
}
We kindly ask you to also cite Brahmbhatt et al. (ContactDB website), whose object meshes are used in our GRAB dataset, as described in our license.
Acknowledgements
Special thanks to Mason Landry for his invaluable help with this project.
We thank:
- Senya Polikovsky, Markus Hoschle (MH) and Mason Landry (ML) for the MoCap facility.
- ML, Felipe Mattioni, David Hieber, and Alex Valis for MoCap cleaning.
- ML and Tsvetelina Alexiadis for trial coordination, and MH and Felix Grimminger for 3D printing.
- ML and Valerie Callaghan for voice recordings, and Joachim Tesch for renderings.
- Jonathan Williams for the website design, and Benjamin Pellkofer for the IT and web support.
- Sai Kumar Dwivedi and Nikos Athanasiou for proofreading.
Contact
For questions, please contact grab@tue.mpg.de.
For commercial licensing, please contact ps-licensing@tue.mpg.de.