Cognitive Systems Table Tennis Robot

Jonas Tebbe, Yapeng Gao

The chair is equipped with a KUKA Agilus robot with 6 degrees of freedom. Motivated by the KUKA commercial (https://www.youtube.com/watch?v=tIIJME8-au8), we are teaching the robot to play table tennis.

[Embedded video]

A high-speed vision system was set up to identify a table tennis ball in the scene and infer its 3D position. To find the ball in an image, we use three filters that take the ball's movement, color, and shape into account. Its 3D position is estimated by triangulation from the ball's pixel coordinates in both camera frames. To speed up the image processing, we restrict the search to a small region of interest whenever the ball was already found in the previous frame. Based on the position measurements, the trajectory state is estimated by an extended Kalman filter and predicted into the future using an aerodynamic force model. A description of the robot system can be found in [6].
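The following is a minimal sketch of such a detection and triangulation step using OpenCV. The color range, thresholds, and the projection matrices P_left / P_right are illustrative placeholders, not the values used in our system.

import cv2
import numpy as np

def detect_ball(frame, prev_frame):
    """Combine movement, color, and shape filters to find the ball center (pixels)."""
    # 1) Movement filter: difference to the previous frame.
    motion = cv2.absdiff(frame, prev_frame)
    motion_mask = cv2.cvtColor(motion, cv2.COLOR_BGR2GRAY)
    _, motion_mask = cv2.threshold(motion_mask, 15, 255, cv2.THRESH_BINARY)

    # 2) Color filter: orange ball in HSV space (illustrative range).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, (5, 80, 120), (25, 255, 255))

    candidates = cv2.bitwise_and(motion_mask, color_mask)

    # 3) Shape filter: keep the most circular blob of plausible size.
    contours, _ = cv2.findContours(candidates, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, 0.0
    for c in contours:
        area = cv2.contourArea(c)
        if area < 20:
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity > best_score:
            (x, y), _ = cv2.minEnclosingCircle(c)
            best, best_score = (x, y), circularity
    return best  # None if no ball was found

def triangulate(pt_left, pt_right, P_left, P_right):
    """3D ball position from its pixel coordinates in both camera frames."""
    pl = np.array(pt_left, dtype=np.float64).reshape(2, 1)
    pr = np.array(pt_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                     # metric 3D point

Restricting the search to a region of interest amounts to cropping frame and prev_frame around the previous detection before calling detect_ball and adding the crop offset back to the result.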

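As a sketch of the prediction step, the snippet below forward-integrates a standard flight model with gravity, air drag, and the Magnus force; an extended Kalman filter uses such a model as its process model. The coefficients K_D and K_M are rough literature-style values, not the parameters identified for our setup.

import numpy as np

G = np.array([0.0, 0.0, -9.81])    # gravity [m/s^2]
K_D = 0.10                         # drag coefficient per unit mass (assumed)
K_M = 0.010                        # Magnus coefficient per unit mass (assumed)

def ball_acceleration(v, omega):
    """Acceleration from gravity, air drag, and the Magnus effect."""
    drag = -K_D * np.linalg.norm(v) * v
    magnus = K_M * np.cross(omega, v)
    return G + drag + magnus

def predict_trajectory(p, v, omega, dt=0.002, t_end=0.5):
    """Forward-integrate the flight model from the current filter state."""
    traj = [p.copy()]
    for _ in range(int(t_end / dt)):
        a = ball_acceleration(v, omega)
        v = v + a * dt
        p = p + v * dt
        traj.append(p.copy())
    return np.array(traj)

Calling predict_trajectory with the filtered state yields the anticipated ball path, for example the position and velocity at the planned hitting time.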
Spin Detection

In table tennis, the rotation (spin) of the ball plays a crucial role. A table tennis match features a variety of strokes, each generating a different amount and type of spin. To develop a robot that can compete with a human player, we need to detect the spin so that the robot system can plan an appropriate return stroke. We propose three different methods to estimate spin [3]. For the first two approaches, we use a high-speed camera that captures the ball in flight at a frame rate of 380 Hz. At this frame rate, the movement of the circular brand logo printed on the ball becomes visible (see Figure 1). The first approach uses background subtraction to determine the position of the logo. The second trains a CNN to predict the orientation of the logo. The third method evaluates the trajectory of the ball and derives the rotation from the effect of the Magnus force. This method gives the highest accuracy and is used for the demonstration shown below. Our robot successfully copes with different spin types in a real table tennis rally against a human opponent.
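As an illustration of the CNN-based variant (the second approach), the sketch below regresses a logo direction from a small ball crop. The architecture and the output parameterization as a 3D unit vector are assumptions made for illustration; the network described in [3] may differ.

import torch
import torch.nn as nn

class LogoOrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # input: 1x32x32 ball crop
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, 3)      # 3D direction of the logo

    def forward(self, x):
        z = self.features(x).flatten(1)
        d = self.head(z)
        return d / d.norm(dim=1, keepdim=True)    # normalize to a unit vector

Tracking the predicted logo direction over consecutive frames then gives the rotation axis and angular speed of the ball.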

[Embedded video]

Learning Return Strokes in 200 Balls

A sample-efficient reinforcement learning (RL) algorithm was developed to learn the parameters of a successful robotic return stroke [1]. Every incoming stroke is different, with varying placement, speed, and spin. For returning the ball, the crucial quantities are the state of the ball (position, velocity, spin) and of the racket (pose, velocity) at hitting time. We therefore let the RL algorithm take the predicted ball state as input and suggest a racket hitting state, and then generate a fitting robot trajectory for that state with the Reflexxes Motion Library.
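A minimal sketch of this interface is given below: a small actor network maps the predicted ball state at hitting time to a normalized racket hitting state. The state and action dimensions and the network size are illustrative assumptions; the actual parameterization in [1] may differ.

import torch
import torch.nn as nn

BALL_DIM = 9     # position (3) + velocity (3) + spin (3), assumed layout
RACKET_DIM = 9   # racket pose (6) + velocity (3) at hitting time, assumed

class Actor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(BALL_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, RACKET_DIM), nn.Tanh(),  # normalized action
        )

    def forward(self, ball_state):
        return self.net(ball_state)

actor = Actor()
ball_state = torch.randn(1, BALL_DIM)   # predicted ball state at hitting time
racket_state = actor(ball_state)        # suggested (normalized) racket state
# The racket state is then denormalized and handed to the trajectory
# generation step (the Reflexxes Motion Library in the real system).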

An actor-critic-based deterministic policy gradient algorithm was developed for accelerated learning. Our approach achieves accurate returns on the real robot in a number of challenging scenarios within 200 balls of training. The video presenting our experiments is shown below.
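As a sketch of the underlying learning rule, the snippet below shows a generic actor-critic deterministic policy gradient update, assuming each stroke is treated as a single-step episode so that the critic simply regresses the stroke's reward. The sample-efficient algorithm in [1] refines this basic scheme, so the code is illustrative only.

import torch
import torch.nn as nn

state_dim, action_dim = 9, 9            # ball state / racket hitting state

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(states, actions, rewards):
    """One update from a batch of strokes; rewards has shape (N, 1)."""
    # Critic: fit Q(s, a) to the observed reward of each stroke.
    q = critic(torch.cat([states, actions], dim=1))
    critic_loss = nn.functional.mse_loss(q, rewards)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient, ascend Q(s, actor(s)).
    actor_loss = -critic(torch.cat([states, actor(states)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()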

[Embedded video]

We also uploaded a video showing a complete training run (only 7 minutes) for the second scenario (I-play). The corresponding results are shown in Figure 3.

[Embedded video]

Publications

[1] Jonas Tebbe, Lukas Krauch, Yapeng Gao, and Andreas Zell. Sample-efficient Reinforcement Learning in Robotic Table Tennis. In 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, May 2021. [ link ]
[2] Yapeng Gao, Jonas Tebbe, and Andreas Zell. Robust Stroke Recognition via Vision and IMU in Robotic Table Tennis. In International Conference on Artificial Neural Networks, pages 379--390, Cham, 2021. Springer International Publishing.
[3] Jonas Tebbe, Lukas Klamt, Yapeng Gao, and Andreas Zell. Spin Detection in Robotic Table Tennis. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9694--9700, Paris, France, May 2020. [ DOI | link ]
[4] Yapeng Gao, Jonas Tebbe, and Andreas Zell. Real-time 6D Racket Pose Estimation and Classification for Table Tennis Robots. International Journal of Robotic Computing, 1(1):23--39, September 2019. [ DOI | link ]
[5] Yapeng Gao, Jonas Tebbe, Julian Krismer, and Andreas Zell. Markerless Racket Pose Detection and Stroke Classification based on Stereo Vision for Table Tennis Robots. In 2019 Third IEEE International Conference on Robotic Computing (IRC), pages 189--196, Naples, Italy, February 2019.
[6] Jonas Tebbe, Yapeng Gao, Marc Sastre-Rienitz, and Andreas Zell. A Table Tennis Robot System using an industrial KUKA Robot Arm. In German Conference on Pattern Recognition (GCPR), Stuttgart, Germany, October 2018. [ DOI ]