Information technology systems are currently being developed for many contexts in which they are intended either to make decisions independently or to support decision-making. Such systems can, for example, have the function of selecting job applicants, selecting the appropriate therapy for patients, carrying out an action in road traffic (lane change, braking, etc.), trading shares, predicting the probability of recidivism of offenders, or determining the credibility of statements (AI lie detection). Decisions are conceptually tied to reasons: it is part of the very form of a decision that one can ask on which reasons it is based. If decisions and reasons are so closely related, the question arises as to how systems designed to make decisions or to support decision-making may affect this relationship. The reasons for decisions are central to their ethical evaluation.
Do such decision-making systems change the role that reasoning skills play in the moral evaluation of decisions in a way that could be ethically problematic? My line of argument will be as follows:
(1) The reasons for the decisions of learning algorithms remain internally opaque to persons. The form of justification available for such systems is instead externalist: they are justified by their reliability. This raises the question of the strength of the reasons.
(2) Justification by reliability could, however, in some situations constitute the wrong kind of reason. This raises the question of the appropriateness of the reasons.
I will try to show that there are decision contexts in which reliability is the wrong kind of reason.