Artificial Intelligence (AI) – Should it explain itself?
We are no longer baffled by all the tasks algorithms can perform. And apparently, they are now even able to ‘explain’ their output. But is that something we really want? In our latest blog post, Eric Raidl, Sebastian Bordt, Michèle Finck and Ulrike von Luxburg address this question.
The methods of machine learning have conquered everyday life. AI outperforms humans at board games, drives cars, predicts complicated protein folding structures, and can even translate entire books. This may all be of great benefit to us, but it isn’t the full story. Algorithms might also ‘decide’ whether I am granted a bank loan or invited to a job interview, and they are employed in numerous other situations that involve value judgments. Such situations not only raise technical issues – which algorithm works better? – but also point to societal questions about their use: which algorithm do we want to use, in what circumstances, and for what purpose?