Center for Interdisciplinary and Intercultural Studies

Gero von Randow: A short comment on objections against the possibility of algorithms for moral decisions

The following sentence states an absolute impossibility of moral decisions by algorithms: "Artefacts cannot decide moral problems, because they cannot be intelligent in a strict sense."
1) The argument that real AI is logically impossible cannot be based on the findings of Kurt Gödel, Alonzo Church, and Alan Turing.
2) The Chinese room thought experiment, put forward against real AI, presupposes an ontological difference between machines and the brain. This can hardly be demonstrated.
3) The objection that AI machines lack bodily experience and interaction with a social world becomes invalid in the case of robot populations.
The following sentence states an intentional impossibility of moral decisions by algorithms: "Artefacts should not decide moral problems; otherwise we act immorally."
The trolley problem demonstrates that we can run into classic moral paradoxes if we delegate moral decisions to machines. We would have to calculate and compare the numbers of human lives in advance, while not being in an emergency situation.
Can this be justified?
We do this all the time with our technological decisions. But they cannot be justified by the goal of reducing risks alone. There is another precondition: responsibility.
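The kind of advance calculation described above can be made concrete with a minimal sketch: a naive utilitarian comparator that decides the trolley problem purely by counting lives. All names here (`Option`, `choose_track`) are illustrative assumptions, not anything from the text; the point of the sketch is precisely what the argument questions, namely that the machine reduces human lives to comparable numbers before any emergency occurs.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A hypothetical action in a trolley-style dilemma."""
    name: str
    lives_at_risk: int

def choose_track(options: list[Option]) -> Option:
    """Pick the option that puts the fewest lives at risk.

    This is the advance comparison the text discusses: the decision is
    fixed by a count of lives, computed long before any real emergency.
    """
    return min(options, key=lambda o: o.lives_at_risk)

# The classic five-versus-one configuration: the comparator always diverts.
decision = choose_track([Option("stay", 5), Option("divert", 1)])
print(decision.name)
```

Such a comparator illustrates why delegation raises the classic paradox: the rule is simple and consistent, yet writing it down at all means deciding in advance that lives may be weighed against each other numerically.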