An autonomous vehicle with three passengers – a child, a mother and a father – is speeding towards a crosswalk. The brakes fail, and the car now has two options: either it maintains its programmed course and drives straight ahead, killing three passers-by – a homeless person, a doctor and a manager – or it swerves and crashes into a concrete wall at full speed, killing all three passengers.
If a human being were sitting behind the wheel, he or she would face a classic ethical dilemma – assuming a clear decision could be made at all in a fraction of a second: whatever the driver does, the decision violates some moral principle. Doing the right thing does not seem to exist in this scenario; in the end, it is chance that decides how the tragedy ends.
Autonomous systems are neither good nor bad
Sophisticated control algorithms in autonomous vehicles could eliminate this element of chance by establishing clear rules for such dilemma situations: for example, that children should be spared rather than older people, or that the autopilot should avoid humans rather than animals.
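Such a rule-based approach amounts to a fixed priority ordering over possible victims. The sketch below is purely hypothetical – the priority values, function names and scenario are illustrations of the kind of rule the text describes, not any real vehicle software:

```python
# Hypothetical sketch of a rule-based dilemma resolver.
# The priority values below are illustrative only; they encode the
# kind of rule the text mentions (spare children over adults, humans
# over animals), which is exactly what ethics bodies dispute.

PRIORITY = {"child": 3, "adult": 2, "animal": 1, "obstacle": 0}

def choose_course(options):
    """Pick the course of action whose victims carry the lowest total
    priority score, i.e. the 'least bad' outcome under the assumed rules."""
    def harm(option):
        return sum(PRIORITY.get(v, 0) for v in option["victims"])
    return min(options, key=harm)

# The crosswalk scenario from the opening paragraph:
courses = [
    {"name": "straight", "victims": ["adult", "adult", "adult"]},  # passers-by
    {"name": "swerve",   "victims": ["child", "adult", "adult"]},  # passengers
]
print(choose_course(courses)["name"])  # -> straight (the child is spared)
```

The point of the sketch is not that this rule is right, but that once such a rule exists, the outcome is deterministic rather than chance – which is precisely the move the ethics commission objects to.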
But this is exactly what the German ethics commission, for one, rejects in its report “Autonomous and Networked Driving”. The report states: “In the case of unavoidable accident situations, any qualification according to personal characteristics (age, sex, physical or mental constitution) is strictly prohibited. (…) Those involved in creating mobility risks must not sacrifice those who are not involved.”
For Oliver Bendel, philosopher and professor of business information science at the School of Business in Brugg-Windisch, Olten and Basel (University of Applied Sciences and Arts Northwestern Switzerland FHNW), the acceptance of autonomous vehicles depends decisively on whether the ethical questions associated with them can be resolved satisfactorily. In an interview with the platform Dialog digitale Schweiz, he says: “The autonomous systems that this topic usually deals with have no empathy, no intuition, no consciousness, no free will, and they are therefore not good or bad.” In his view, it would therefore be fatal to leave complex moral decisions to machines.
Machine ethics, a branch of information ethics, examines such questions. It reflects on moral and immoral machines and tries to build them. Besides self-driving vehicles, its main focus is on (partially) autonomous systems such as software agents, certain types of robots and drones, or computers in automated trading.
The FHNW is currently working on a concrete project under the direction of Prof. Bendel: HAPPY HEDGEHOG, the prototype of an animal-friendly robotic lawnmower, is supposed to recognize hedgehogs in time during its work and spare them from its blades. Many of these animals are currently killed by such devices, often at night.
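The animal-friendly behaviour described here boils down to a simple control loop: detect, then stop the blades. A minimal sketch, in which `detect_hedgehog` is a hypothetical stub standing in for the prototype's real sensor and recognition system:

```python
# Minimal sketch of the detect-and-spare control loop described above.
# `detect_hedgehog` is a hypothetical stub; the real HAPPY HEDGEHOG
# prototype relies on actual sensors and image recognition, which are
# not modelled here.

def detect_hedgehog(sensor_frame):
    """Stub detector: flags a sensor frame that contains a hedgehog."""
    return "hedgehog" in sensor_frame

def mow_step(sensor_frame, blades_on):
    """One control cycle: switch the blades off whenever a hedgehog
    is detected, otherwise keep the current blade state."""
    if detect_hedgehog(sensor_frame):
        return False  # spare the animal: blades off
    return blades_on

print(mow_step("grass", True))        # -> True (keep mowing)
print(mow_step("hedgehog ahead", True))  # -> False (blades stopped)
```

The design choice worth noting is that the safety rule is asymmetric: a detection always forces the blades off, while the absence of a detection merely preserves the current state.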
Scientists call for behavioural research on machines
Around 20 scientists from leading universities in the USA and Europe go one step further: under the title “Machine Behaviour”, they recently called in the science journal Nature for a dedicated field of behavioural research on machines. Instead of merely checking whether autonomous systems do in a particular situation what they were designed to do, and ignoring the rest of their potential behaviour, this interdisciplinary research field takes a much broader approach.
To investigate machine behaviour under a variety of circumstances, the methods of classical behavioural research should be combined with statistical methods from opinion research and medicine – for example, blind tests under controlled conditions or the selection of representative samples.
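A blind test for machine behaviour can be sketched in a few lines: the evaluator scores outputs without knowing which system produced them, mirroring blinded trials in medicine. Everything below – the system names, the scoring function, the data – is a hypothetical toy illustration:

```python
# Hypothetical sketch of a "blind test" for machine behaviour:
# the scoring function sees only the outputs, never the labels,
# and the order is shuffled so it carries no cue either.

import random

def blind_evaluate(outputs_by_system, score):
    """Shuffle all outputs, score them without their labels,
    then unblind and average per system."""
    items = [(name, out)
             for name, outs in outputs_by_system.items()
             for out in outs]
    random.shuffle(items)  # evaluator gets no order cue
    results = {}
    for name, out in items:
        results.setdefault(name, []).append(score(out))  # label hidden from score()
    return {name: sum(v) / len(v) for name, v in results.items()}

# Toy example: two hypothetical systems, "scored" by output length.
outputs = {"system_a": ["ok", "fine"], "system_b": ["excellent", "great"]}
print(blind_evaluate(outputs, score=len))  # e.g. {'system_a': 3.0, 'system_b': 7.0}
```

The blinding here is structural: `score` receives only the output itself, so no evaluator bias towards a particular system can enter the measurement.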
One of the research fields the authors want to pursue is the systematic investigation of collective effects – for example, how large groups of trading algorithms that act very similarly can cause sudden stock-market crashes. Equally intriguing is the question of how humans and machines influence each other's behaviour.