No single issue is poised to disrupt logistics like the coming driverless truck revolution. It has been on the horizon for years, and yet the trucking industry is only just beginning to consider the ethical, legal, and moral implications of automation. Trucking now stands on the brink of a new era as the first wave of driver-assisted and fully autonomous vehicles hits the roads, and the industry grapples with fundamental questions about how these technologies operate and what they will mean for the future.
The classic ethical dilemma concerning driverless vehicles is a variation of the famous philosophical “trolley problem,” usually presented like this: a driverless car is ferrying its passenger along when a group of five pedestrians suddenly appears in the street. Given a lack of other options, should the vehicle be programmed to swerve into a brick wall, killing its passenger? Or should it hit the pedestrians, killing all five of them? The auto industry has posed this question to both renowned philosophers and potential consumers, and most agree on the answer: the car should sacrifice its passenger, trading one life to save five. However, studies have shown that even when respondents agree that a car killing its own passenger is the right choice, few of them would be interested in purchasing a vehicle that could conceivably choose to save other lives at the expense of their own.
This lack of trust poses a significant problem for automakers investing in driverless technology, and thus far their response to these ethical dilemmas has been uneven. Uber, Tesla, and Amazon have requested governmental guidance and the creation of national laws. Mercedes, on the other hand, made headlines when it announced that its policy would be to save the driver every time. The rationale offered by Mercedes was that it is generally preferable for a vehicle to save the life it is most directly responsible for, rather than attempting to account for the unpredictable behavior of other parties. While some lauded this stance as an elegantly simple solution to the problem, others accused Mercedes of trying to win consumer confidence at the expense of public safety. Overall, the controversy has clarified the need for standardized ethical guidelines, lest we hurtle into a potentially nightmarish scenario in which Volvos and Toyotas are programmed according to different philosophical schools of thought.
While the ethics of driverless cars is an increasingly hot-button topic, the question of how the calculus changes for commercial trucks has received relatively little attention. Yet autonomous trucks face a host of unique issues, ones that are arguably more urgent given that the driverless revolution is poised to take over trucking long before it becomes the norm for cars. For one thing, an 18-wheeler is capable of doing vastly more damage than a sedan, and a handful of catastrophic and ethically murky accidents could derail the adoption of driverless technology. As Volvo’s Carl Johan Almqvist has said, “If we do it too soon and have an accident, we’ll hurt the industry. And if you lose the public’s trust, it’s very difficult to regain it.”
Then there are the insurance concerns, which raise myriad liability questions in the event of an accident. The question of who is ultimately responsible for a truck’s behavior changes depending on what level of automation is in use. Certain versions of driver-assisted technology require a driver to stay behind the wheel, ready to take control in situations the AI isn’t equipped to handle (a list that currently includes roadside debris and construction zones). Other systems, such as the one used for OTTO’s much-publicized test drive, allow drivers to be in the sleeper cab for long stretches. Who is responsible for the driver’s safety during those times: the driver, the carrier, or the programmers who designed the AI? How should an automated truck’s programming prioritize the safety of its human driver versus that of an oncoming minivan? How does the equation change if the tractor is hauling hazardous waste? These are valid questions without clear answers.
When driverless car advocates and developers get caught in thorny ethical dilemmas, they tend to fall back on the same answer: the safety benefits of automated vehicles far outweigh the potential for harm. As Mercedes’ Christoph von Hugo said in defense of his “driver first” policy, “99 percent of our engineering work is to prevent these situations from happening at all.” Ethical concerns are likely to fade in importance if automated trucks can deliver on their safety promises, and so far the signs are encouraging, with Daimler reporting that its automated vehicle reaction times “have dropped to about 0.2 or 0.3 seconds – while humans normally can respond not faster than one second.” Since human error is responsible for so many crashes, one could make the case that it’s unethical not to adopt a safer alternative. But how much free will should drivers be willing to sacrifice in the name of safety?
Driverless technology promises safety to both car and truck drivers, but the issue is not precisely the same. While improved safety is the primary benefit automation offers car owners, the trucking industry has another motivation: decreasing labor costs by eliminating drivers altogether. The dawn of truly driverless trucks, which travel from point to point with no human presence in the cab, is still several years away. At the moment, automated technology is best suited to long stretches of open highway and still requires a driver to handle precise operations. For the time being, nearly every company in the driverless truck race insists that it is here to help drivers, not replace them. Yet it would be naive to imagine that the industry will resist fully driverless trucks once the technology becomes available and affordable. Whether this will spell mass layoffs for the US’s 1.7 million truck drivers is yet another ethical question that the industry should start asking itself now.