Team
Research
Teaching
Thesis
What we do...
Newsfeed
May 2023
Paper @ UAI
We are happy that our work "When are Post-hoc Conceptual Explanations Identifiable?" by Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, and Gjergji Kasneci was accepted for publication at the Conference on Uncertainty in Artificial Intelligence (UAI).
January 2023
Position Paper published as Preprint
Our position paper "ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education", which we jointly authored with colleagues from TU Munich and LMU Munich, was published as a preprint.
January 2023
3 Papers @ ICLR
Three of our works were accepted for publication at the International Conference on Learning Representations (ICLR):
(1) "Language Models are Realistic Tabular Data Generators" by Vadim Borisov, Kathrin Sessler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci.
(2) "On the Trade-Off between Actionable Explanations and the Right to be Forgotten" by Martin Pawelczyk, Tobias Leemann, Asia Biega, and Gjergji Kasneci.
(3) "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse" by Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju.
December 2022
Paper accepted @ AAAI 2023
Our paper "Interventional SHAP Values and Interaction Values for Piecewise Linear Regression Trees" by Artjom Zern, Klaus Broelemann, and Gjergji Kasneci was accepted to AAAI 2023.
December 2022
Paper @ TNNLS
Our paper "Deep Neural Networks and Tabular Data: A Survey" by Vadim Borisov, Tobias Leemann, Kathrin Sessler, Johannes Haug, Martin Pawelczyk and Gjergji Kasneci has been accepted to IEEE Transactions on Neural Networks and Learning Systems.
October 2022
2 Papers @ NeurIPS Workshops
We are happy to announce that our group will be present at the NeurIPS 2022 conference in New Orleans, LA, USA to present two workshop contributions: "I Prefer not to Say: Operationalizing Fair and User-guided Data Minimization" by Tobias Leemann et al. (Workshop on Algorithmic Fairness through the Lens of Causality and Privacy) and "On the Trade-Off between Actionable Explanations and the Right to be Forgotten" by Martin Pawelczyk et al. (Workshop on Trustworthy and Socially Responsible Machine Learning).
November 2021
Paper @ IEEE BigComp 2022
"Aggregating the Gaussian Experts' Predictions via Undirected Graphical Models" by Hamed Jalali and Gjergji Kasneci has been accepted at IEEE BigComp 2022.
August 2022
Article @ International Journal of Data Science and Analytics
"DeepTLF: Robust Deep Neural Networks for Heterogeneous Tabular Data" by Vadim Borisov, Klaus Broelemann, Enkelejda Kasneci, and Gjergji Kasneci has been accepted by the International Journal of Data Science and Analytics.
August 2022
Paper @ CIKM 2022
"Change Detection for Local Explainability in Evolving Data Streams" by Johannes Haug, Alexander Braun, Stefan Zürn, and Gjergji Kasneci has been accepted at CIKM 2022.
July 2022
Paper @ MICCAI MILLanD Workshop 2022
"BoxShrink: From Bounding Boxes to Segmentation Masks" by Michael Gröger, Vadim Borisov, and Gjergji Kasneci was accepted at the Workshop on "Medical Image Learning with Limited & Noisy Data", co-located with MICCAI 2022.
May 2022
Paper @ ICML
"A Consistent and Efficient Evaluation Strategy for Attribution Methods" by Yao Rong, Tobias Leemann, Vadim Borisov, Gjergji Kasneci and Enkelejda Kasneci was accepted for publication at the International Conference on Machine Learning (ICML).
April 2022
Paper @ AIES 2022
"Fairness in Agreement with European Values: An Interdisciplinary Perspective on AI Regulation" by Alejandra Bringas Colmenarejo, Luca Nannini, Alisa Rieger, Kristen Marie Scott, Xuan Zhao, Gourab Patro, Gjergji Kasneci, and Katharina Kinder-Kurlanda was accepted at AIES 2022.
April 2022
Paper @ arXiv
"Standardized Evaluation of Machine Learning Methods for Evolving Data Streams" by Johannes Haug, Effi Tramountani, and Gjergji Kasneci has been published on arXiv. The float Python framework can be accessed via GitHub and PyPI.
March 2022
2 Papers @ ICLR SRML Workshop
"Disentangling Algorithmic Recourse" by Martin Pawelczyk, Lea Tiyavorabun, and Gjergji Kasneci, and "Algorithmic Recourse in the Face of Noisy Human Responses" by Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju were accepted at the ICLR SRML Workshop.
March 2022
Paper @ ICLR Workshop on Objects, Structure & Causality (OSC)
"Coherence Evaluation of Visual Concepts with Objects and Language" by Tobias Leemann, Yao Rong, Stefan Kraft, Enkelejda Kasneci, and Gjergji Kasneci was accepted at the Workshop on "Objects, Structure & Causality", co-located with ICLR 2022. A link to the paper will be available shortly.
March 2022
Paper @ ICDE 2022
"Dynamic Model Tree for Interpretable Data Stream Learning" by Johannes Haug, Klaus Broelemann and Gjergji Kasneci has been accepted at ICDE 2022.
March 2022
Article @ PLOS ONE
"Do your eye movements reveal your performance on an IQ test? A study linking eye movements and socio-demographic information to fluid intelligence" by Enkelejda Kasneci, Gjergji Kasneci, Ulrich Trautwein, Tobias Appel, Maike Tibus, Susanne M. Jaeggi & Peter Gerjets has been accepted at the PLOS ONE journal.
October 2021
Paper @ Big Data 2021
"Model Selection in Local Approximation Gaussian Processes: A Markov Random Fields Approach" by Hamed Jalali, Martin Pawelczyk, and Gjergji Kasneci has been accepted at Big Data 2021: IEEE International Conference on Big Data.
October 2021
Paper @ NeurIPS Workshop - OPT2021
"Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes" by Hamed Jalali and Gjergji Kasneci was accepted at the NeurIPS 2021 workshop - OPT2021: 13th Annual Workshop on Optimization for Machine Learning.
October 2021
Paper @ BMVC 2021
Our paper "SPARROW: Semantically Coherent Prototypes for Image Classification" by Stefan Kraft, Klaus Broelemann, Andreas Theissler, and Gjergji Kasneci has been accepted at BMVC 2021.
October 2021
Paper @ NeurIPS workshop - eXplainable AI approaches for debugging and diagnosis
"A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines" by Vadim Borisov, Johannes Meier, Johan Van den Heuvel, Hamed Jalali, and Gjergji Kasneci was accepted at the NeurIPS 2021: eXplainable AI approaches for debugging and diagnosis workshop. This paper is based on a student project from our seminar in the summer semester 2021.
October 2021
Survey Paper: Deep Learning and Tabular Data
Our overview paper "Deep Neural Networks and Tabular Data: A Survey" by Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci has been published on arXiv.
July 2021
Paper @ NeurIPS
We are proud to announce that our paper "CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms" by Martin Pawelczyk, Sascha Bielawski, Johan Van den Heuvel, Tobias Richter and Gjergji Kasneci was accepted at the NeurIPS 2021 Datasets and Benchmarks Track.
July 2021
Paper @ ISMAR 2021
"TEyeD: Over 20 million real-world eye images with Pupil, Eyelid, and Iris 2D and 3D Segmentations, 2D and 3D Landmarks, 3D Eyeball, Gaze Vector, and Eye Movement Types" by Wolfgang Fuhl, Gjergji Kasneci and Enkelejda Kasneci was accepted at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2021.
June 2021
Article @ Computers in Human Behavior Reports
"Robust Cognitive Load Detection from Wrist-Band Sensors" by Vadim Borisov, Enkelejda Kasneci, and Gjergji Kasneci has been accepted by the Computers in Human Behavior Reports journal.
June 2021
Article @ Nature Scientific Data
"TüEyeQ, a rich IQ test performance data set with eye movement, educational and socio-demographic information" was published in the latest issue of the Nature Scientific Data Journal. This article is the result of a collaboration between Enkelejda Kasneci, Gjergji Kasneci, Tobias Appel, Johannes Haug, Franz Wortha, Maike Tibus, Ulrich Trautwein & Peter Gerjets.
December 2020
Paper @ AAAI 2021 Workshop on Explainable Agency in AI
"On Baselines for Local Feature Attributions" by Johannes Haug, Stefan Zürn, Peter El-Jiz and Gjergji Kasneci was accepted at the workshop on "Explainable Agency in AI" of the AAAI 2021 conference. This paper is based on a project from our EFML seminar in the summer semester 2020.
October 2020
1st Prize in the Cognitive Load Monitoring Machine Learning Competition @ UbiComp 2020
We are happy to announce that our team has been awarded 1st prize in the Cognitive Load Monitoring machine learning challenge at the UbiComp 2020 conference!
October 2020
Paper @ ICPR 2020
"Aggregating Dependent Gaussian Experts in Local Approximation" by Hamed Jalali and Gjergji Kasneci has been accepted at ICPR 2020.
October 2020
Paper @ ICPR 2020
"Learning Parameter Distributions to Detect Concept Drift in Data Streams" by Johannes Haug and Gjergji Kasneci has been accepted at ICPR 2020.
May 2020
Paper @ KDD 2020
"Leveraging Model Inherent Variable Importance for Stable Online Feature Selection" by Johannes Haug, Martin Pawelczyk, Klaus Broelemann and Gjergji Kasneci has been accepted at KDD 2020.
May 2020
Paper @ UAI 2020
"On Learning Invariant Counterfactual Explanations under Predictive Multiplicity" by Martin Pawelczyk, Klaus Broelemann, and Gjergji Kasneci was accepted at UAI 2020.
May 2020
Best-Paper Award @ Symposium on Eye Tracking Research and Applications (ETRA), 2020
"A MinHash approach for fast scanpath classification" by David Geisler, Nora Castner, Gjergji Kasneci, and Enkelejda Kasneci was accepted at ETRA 2020.
April 2020
Paper @ Archives of Data Science, Series A
"PLAY: A Profiled Linear Weighting Scheme for Understanding the Influence of Input Variables on the Output of a Deep Artificial Neural Network" by Torsten Dietl, Gjergji Kasneci, Johannes Fürnkranz, and Eneldo Loza Mencía was accepted at Archives of Data Science, Series A.
January 2020
Paper @ WWW 2020
"Learning model agnostic actionable counterfactual explanations for tabular data" by Martin Pawelczyk, Klaus Broelemann, and Gjergji Kasneci was accepted at WWW 2020. The paper presents a new method to generate counterfactual explanations.