Sensor Fusion and Stroke Learning in Robotic Table Tennis

Citable link (URI): http://hdl.handle.net/10900/128185
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1281852
http://dx.doi.org/10.15496/publikation-69548
Document type: Dissertation
Date of publication: 2022-06-22
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Computer Science
Advisor: Zell, Andreas (Prof. Dr.)
Date of oral examination: 2022-06-03
DDC classification: 004 - Computer science
Keywords:
robotic table tennis
stroke learning
sensor fusion
reinforcement learning
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en

Abstract:

Research on robotic table tennis is attractive for studying diverse algorithms in many fields, such as object detection, robot learning, and sensor fusion, because table tennis is full of challenges in terms of speed and spin. In this thesis, we focus on optimal stroke learning with sensor fusion for a KUKA industrial manipulator. Four high-speed cameras and an IMU are used for object pose detection. To learn an optimal stroke for the robot, a novel policy gradient approach is proposed.

Firstly, we develop a multi-camera calibration approach for wide-baseline camera pairs. The initial intrinsic and extrinsic transformations are computed using classic calibration methods, resulting in a 3D position error of 15.0 mm for four cameras (11.0 mm for each stereo pair) on our test dataset. A novel loss function is proposed to post-optimize them with a new set of pattern images from each camera. The final accuracy is 3.2 mm for stereo cameras and 2.5 mm for four cameras. To use these cameras efficiently, we divide them into two stereo pairs, one for ball detection and one for racket detection. With the well-calibrated cameras, the 3D position of the ball can be triangulated once the pixel position of the ball center has been determined, for which two approaches are used: color thresholding and a two-layer CNN.

Secondly, we propose an optimal stroke learning approach for teaching the robot to play table tennis. A realistic simulation environment is built for the ball’s dynamics and the robot’s kinematics. The learning strategy is decomposed into two stages: predicting the ball’s hitting state and learning the optimal stroke. Based on the actions that are controllable and applicable on our robot, a multi-dimensional reward function and a $Q$-value model are proposed. A comparison with other RL methods is performed on an evaluation dataset of 1000 balls in simulation. An efficient retraining approach is proposed to close the sim-to-real gap. Real-world experiments show that the robot can successfully return the ball to the desired target with an error of around 24.9 cm and a success rate of 98% in three different scenarios.

Instead of training the policy in simulation, another option is to initialize it with the actions of a human player and the corresponding state of the ball. To obtain the human actions, we directly detect the racket in the images and estimate its 6D pose using two proposed approaches: traditional image processing with two cameras, and deep learning that fuses one camera and an IMU. The experiments show that the latter method outperforms the former in robustness for both the black and red sides of the racket. The former method is 1.9 cm more accurate in position (2.8 cm versus 4.7 cm), but much slower when the detection head is replaced with YOLOv4. Finally, a behavior cloning experiment is performed to reveal the potential of this work.
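The abstract mentions a novel loss function for post-optimizing the intrinsic and extrinsic parameters but does not state it. As a hedged point of reference only, and not the loss proposed in the thesis, a standard bundle-adjustment-style objective over a new set of pattern images would minimize the total reprojection error across all four cameras:

$$\mathcal{L}\bigl(\{K_c, R_c, t_c\}\bigr) \;=\; \sum_{c=1}^{4} \sum_{i} \bigl\lVert \pi\bigl(K_c (R_c X_i + t_c)\bigr) - x_{c,i} \bigr\rVert_2^2,$$

where $X_i$ are the 3D pattern corners, $x_{c,i}$ their detected pixel positions in camera $c$, and $\pi$ the perspective projection. Jointly optimizing all cameras couples the two stereo pairs, which is the general idea behind refining a four-camera setup; the dissertation's actual formulation may differ.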
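Once a stereo pair is calibrated and the ball center has been localized in both images (by color thresholding or the two-layer CNN), the 3D position follows from linear triangulation. The sketch below is a minimal illustration using OpenCV rather than the thesis's implementation; the projection matrices P1 and P2 and the function name are assumptions introduced here for clarity.

```python
import numpy as np
import cv2

def triangulate_ball(P1, P2, pt1, pt2):
    """Triangulate the ball center from one calibrated stereo pair (illustrative sketch).

    P1, P2 : (3, 4) projection matrices K [R | t] of the two cameras.
    pt1, pt2 : (2,) pixel coordinates of the detected ball center in each image.
    Returns the ball position in world coordinates as a (3,) array.
    """
    pts1 = np.asarray(pt1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(pt2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous result, shape (4, 1)
    return (X_h[:3] / X_h[3]).ravel()                # dehomogenize to metric 3D
```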
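The stroke-learning stage maps the predicted hitting state of the ball to a stroke action and improves the policy from a multi-dimensional reward. The thesis's policy gradient algorithm, reward terms, and $Q$-value model are not reproduced here; the sketch below only illustrates the general structure with a Gaussian policy and a baseline-subtracted REINFORCE-style update, and every name, dimension, and reward term in it is a placeholder.

```python
import torch
import torch.nn as nn

class StrokePolicy(nn.Module):
    """Gaussian policy: predicted ball hitting state -> stroke action (placeholder dimensions)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def distribution(self, state):
        return torch.distributions.Normal(self.mean(state), self.log_std.exp())

def policy_gradient_step(policy, optimizer, states, actions, rewards):
    """One REINFORCE-style update on a batch of strokes.

    'rewards' would combine terms such as landing-point error and return success;
    the actual multi-dimensional reward is defined in the thesis.
    """
    log_probs = policy.distribution(states).log_prob(actions).sum(dim=-1)
    advantages = rewards - rewards.mean()          # simple mean baseline
    loss = -(log_probs * advantages).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A usage sketch would construct `StrokePolicy(state_dim, action_dim)` with the hitting-state and stroke-parameter dimensions and optimize it with, e.g., `torch.optim.Adam`; the same structure also admits retraining on real balls to narrow the sim-to-real gap.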
