Human Grasp Classification for Reactive Human-to-Robot Handovers

Wei Yang* 1, Chris Paxton*1, Maya Cakmak1,2, Dieter Fox1,2
* equal contribution; 1 NVIDIA; 2 University of Washington


We propose an approach for human-to-robot handovers in which the robot meets the human halfway: it classifies the human's grasp of the object and quickly plans a trajectory to take the object from the human's hand according to their intent. To do this, we collect a human grasp dataset covering typical ways of holding objects with various hand shapes and poses, and train a deep model on this dataset to classify hand grasps into one of these categories. We then present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position, and replans as necessary when the handover is interrupted. Through a systematic evaluation, we demonstrate that our system results in more fluent handovers than two baselines. We also present findings from a user study (N = 9) demonstrating its effectiveness and usability.
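The classify-then-plan loop described above can be sketched as follows. This is a minimal, hypothetical illustration: the grasp category names, the nearest-centroid stand-in for the deep classifier, and the per-class approach offsets are all illustrative assumptions, not the paper's actual model or planner.

```python
# Hypothetical sketch of a classify-then-plan handover loop.
# The class names, features, and offsets below are illustrative assumptions.

def classify_grasp(hand_features, centroids):
    """Nearest-centroid stand-in for a learned grasp classifier.

    `centroids` maps a grasp-class name to a feature vector; the hand's
    features are assigned to the closest class.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: sq_dist(hand_features, centroids[c]))

def plan_approach(grasp_class, hand_pose):
    """Choose an approach offset based on how the hand occupies the object."""
    offsets = {
        "on-open-palm": (0.0, 0.0, 0.10),   # approach from above
        "pinch-bottom": (0.0, 0.0, 0.10),   # hand below object: also from above
        "pinch-top":    (0.0, 0.0, -0.10),  # hand on top: approach from below
        "pinch-side":   (0.10, 0.0, 0.0),   # approach from the free side
    }
    dx, dy, dz = offsets[grasp_class]
    x, y, z = hand_pose
    return (x + dx, y + dy, z + dz)

def handover_step(hand_features, hand_pose, centroids, last_class=None):
    """One perception/planning cycle: replan when the grasp class changes."""
    grasp = classify_grasp(hand_features, centroids)
    target = plan_approach(grasp, hand_pose) if grasp != last_class else None
    return grasp, target
```

Running `handover_step` every perception cycle gives the reactive behavior: a new approach target is emitted only when the detected grasp class changes, which is one simple way to model replanning after an interrupted handover.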