Project partners: Technical University of Munich, Knoke Beschlagtechnik GmbH, Gräff Robotics GmbH
Automatic bin picking is a central problem in robotics: a robot arm must pick objects out of a bin. The iBinPick project studies bin picking for small, identical objects piled randomly in large quantities, which makes the inherent challenges of bin picking more severe. The objects are densely crowded, so recognizing individual instances requires more sophisticated strategies, and the 6D poses of small instances must be estimated more precisely for the robot to grasp them.
Generally, bin picking consists of four main steps: (i) object detection, (ii) 6D pose estimation, (iii) motion planning, and (iv) control. In this research, we focus on the first two phases, object detection and 6D pose estimation, which together form the computer vision module of the project and allow the scene to be understood from sensor data. We mainly use a Microsoft Azure Kinect camera to capture aligned RGB and depth images. In the experiments, the camera is mounted at two heights, 30 cm and 60 cm above the bin floor.
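Once an object instance has been detected in the aligned RGB image, the corresponding depth values can be back-projected into 3D camera coordinates as input for 6D pose estimation. A minimal sketch using the standard pinhole camera model; the intrinsics below are illustrative placeholders, not the calibrated Azure Kinect values:

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (in meters) to a 3D point
    in the camera frame using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Illustrative intrinsics (placeholders, not calibrated values).
fx, fy, cx, cy = 600.0, 600.0, 640.0, 360.0

# A pixel at the principal point maps straight down the optical axis,
# so at the 30 cm mounting height it lands 0.3 m in front of the camera.
print(backproject(640, 360, 0.30, fx, fy, cx, cy))  # (0.0, 0.0, 0.3)
```

Back-projecting all depth pixels inside a detected instance mask yields a partial point cloud of that instance, which pose estimators can then align against the object's CAD model.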
A Real Dataset for Object Recognition for Highly Cluttered Homogeneous Bin Picking
This dataset contains 600 samples (aligned RGB and depth images) of 6 object models. To capture it, we mounted a Microsoft Azure Kinect camera at two different heights (30 cm and 60 cm) above the bin floor. Evaluating images taken at different heights is important for deciding on the placement of the camera (e.g. robot-carried or fixed). The cluttered images are labeled for instance segmentation in the COCO annotation format. We used the RGB images and app.neuralmarker.ai for labeling.
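Because the labels follow the COCO annotation format, they can be read with standard tooling. A minimal sketch that groups instance annotations by image, using only the Python standard library; the file name, category name, and coordinates below are illustrative, not the dataset's actual layout:

```python
from collections import defaultdict

# Illustrative COCO-style structure; in practice you would load the
# dataset's annotation file with json.load(open("annotations.json")).
coco = {
    "images": [{"id": 1, "file_name": "bin_30cm_0001.png"}],
    "categories": [{"id": 1, "name": "part"}],  # hypothetical category
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 40, 30],  # COCO convention: [x, y, width, height]
         "segmentation": [[100, 120, 140, 120, 140, 150, 100, 150]]},
    ],
}

def instances_per_image(coco):
    """Group instance annotations by their image id."""
    by_image = defaultdict(list)
    for ann in coco["annotations"]:
        by_image[ann["image_id"]].append(ann)
    return by_image

grouped = instances_per_image(coco)
print(len(grouped[1]))  # number of labeled instances in image 1 -> 1
```

In a highly cluttered homogeneous bin, each image carries many annotations with the same category id, so grouping by image is the natural first step before training or evaluating an instance segmentation model.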
Download the CAD models
Please cite our related paper if you use this dataset.