The trolley problem is a famous philosophical quandary in which a subject is asked to imagine a scenario where they must choose between the deaths of various people or groups of people, either through action or inaction. For a more detailed explanation of the trolley problem, check out the WIRED article linked below.
Today, the problem is being applied to autonomous vehicles rather than "trolleys" (trams). What happens when an autonomous vehicle has to make a choice between multiple potential victims – how should it decide whom to save?
Lawyers, insurers and moral philosophers can argue the toss. But for OEMs and software companies the answer is simple: they are piling masses of time and resources into solving problems of perception, segmentation and motion planning to ensure that autonomous vehicles are as safe as possible. An autonomous vehicle will be able to anticipate hazards in much the same way as a human driver – and without distraction, tiredness or drunkenness to impair its response.
Call me naïve, but with cars which learn from each other and their environments constantly, I don’t think it’s too ambitious to aim for zero incidents sometime in the not-too-distant future.
So to get back to the question posed in the headline, the quick answer is that they are linked because I’m not interested in either of them. Another answer is that any mention of GBBO makes for a clickbait headline. And a longer answer explains why I don’t think the trolley problem is interesting in the context of autonomous vehicles. That’s because if an autonomous vehicle ends up in a position where it has to choose between various possible victims, the technology has already failed.
Will autonomous vehicles kill people? Possibly. Probably. If the planet's one-billion-plus cars all became autonomous overnight, we would expect some casualties (over one billion cars and seven billion people creates a lot of scope for accidents).
Estimates vary, but we know that well over 90% of traffic incidents are caused by human error of some sort. This suggests that autonomous vehicles have the potential to be much, much safer than humans – making them the best thing since sliced bread. And no, I don't care what Paul Hollywood thinks.