Zero-shot Translation of Attention Patterns in VQA Models to Natural Language.
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch and Zeynep Akata
To appear in: German Conference on Pattern Recognition, 2023

In-Context Impersonation Reveals Large Language Models' Strengths and Biases.
Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz and Zeynep Akata
ArXiv abs/2305.14930, 2023
Paper

Diverse Video Captioning by Adaptive Spatio-temporal Attention.
Zohreh Ghaderi, Leonard Salewski and Hendrik P. A. Lensch
German Conference on Pattern Recognition, 2022
Paper

CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations.
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch and Zeynep Akata
Springer Lecture Notes in Artificial Intelligence, 2022
Paper | Project page | Code
This work was also presented at the CVPR 2022 Workshop on Explainable AI for Computer Vision (XAI4CV).

e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks.
Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata and Thomas Lukasiewicz
IEEE International Conference on Computer Vision, ICCV 2021
Paper | Code

Relational Generalized Few-Shot Learning.
Xiahan Shi, Leonard Salewski, Martin Schiegg, Zeynep Akata and Max Welling
British Machine Vision Conference, 2020
Paper
This publication is the result of my master's thesis.

For up-to-date information, please also check Semantic Scholar or Google Scholar.