As the clock counts down to the end of 2018, talking heads will inevitably be going over the big events of the past 365 days. There will certainly be plenty of talking points when it comes to international events and politics, and, rather tiresomely, the technology industry.
From huge data breaches to the cryptocurrency implosion, to scandals involving employee mistreatment, censorship, and morally obtuse business practices, it has been a year that punched holes in modern society’s long-held faith in Silicon Valley’s corporate benevolence and goodwill.
Once lauded for innovative ideas and progressive values that changed the world and gave us cool new toys, these corporate titans we so wholly welcomed into our lives were exposed by the events of 2018 as entities like any other, pushing their self-interest behind a veil of progress. That, in turn, raises questions about the ethics and responsibilities they should be held accountable for.
No event demonstrated this need more than the death of Elaine Herzberg in March, after she was struck by an Uber autonomous car undergoing testing in the United States. A later report from the US National Transportation Safety Board found that the system did detect Herzberg’s presence as she was crossing the road about six seconds before impact, but it “classified the pedestrian as an unknown object”, failed to predict her “future travel path”, and did not determine that braking was needed until 1.3 seconds before impact.
Rather tragically, the report found that engineers had disabled the emergency braking feature during autonomous testing to avoid “the potential for erratic vehicle behaviour” caused by false alarms from the sensors, leaving the onus of intervention on the safety operator. The system, however, was not designed to alert the operator, who had momentarily taken her eyes off the road before the crash.
Although nobody was convicted, and Uber reached an undisclosed settlement with the victim’s family, the accident sent shockwaves through the tech industry and governments around the world. Herzberg’s death wasn’t the first fatality related to a self-driving car, but she was the first known pedestrian to meet an unwitting and fatal end at the hands of one, which raised serious ethical questions about what an autonomous car should do when faced with the choice between protecting its occupants’ lives and preserving the lives of others in an unavoidable crash scenario – the veritable “trolley problem”.
Unsurprisingly it was German lawmakers, well aware of their country’s prominence in the race to develop autonomous cars, who laid down some basic ethical rules: “human safety must come first”, “all humans are considered equal”, and “the fewest people possible must be harmed”. That sounds simple enough, but real life doesn’t work by such rigid objectivity. To see why, we must turn to the trolley problem, where the addition of a few minor details inflames the debate with a whole host of uncomfortable dilemmas.
Using Germany’s objective guidelines as an example, when it comes down to it, is it right for the car to choose to mow down a single Nobel laureate over a gang of thugs? Does a pregnant lady carrying twins count as one individual or three? And if you are the sole occupant of the car, are you willing to accept your fate as the collateral? The answers to these questions also vary greatly from society to society, depending on each society’s shared values and cultural perceptions.
All of that, while worth ruminating on, only goes so far, because legislation can only do so much to resolve these problems. Lawmakers can only define what is permitted and the legal course of action should those rules be breached, even though in practice there is no absolutely moral way to go about it. At the end of the day, we are handing control of such a high-functioning task to a form of intelligence that views the world in a completely different way than we do, and at this point we have to ask ourselves whether we are comfortable with that.
Now, this isn’t some paranoid rant about how the toaster is going to go all Skynet tomorrow and eradicate humanity in its singular desire to deliver the perfect morning toast. The point is that in some 10,000 years of human civilisation, we as a species have never had to face the idea of putting our autonomy in the control of a non-human ‘intelligence’ that perceives the world around us differently.
Although we have automated several dangerous tasks to computers, such as flying a plane, most of these complex automated tasks are watched over by human operators – humans who perceive the visual and auditory world they inhabit the same way we do. Computers, on the other hand, do not see the world around them the way we do.
An autonomous car’s windows to the world mainly comprise a bank of cameras that build a visual guide for the computer, lidar units that fire invisible laser beams, and radar, which together give the computer a precise 3D map of its surroundings, plus machine learning algorithms to make sense of all the information it receives. While these sensors create a far more detailed and complete picture than an able-bodied human can gather with a set of eyes and ears, how the machine interprets the data it is fed is markedly different from our view of reality.
Take an apple, for example. Apples vary in size, texture, and shape, but we can identify one at a glance thanks to the visual and sensory experience that helps us define what makes an apple an apple. A computer sees the world as streams of bits of information from its cameras, and from that stream its algorithms have to process the arrangement of data, make out a form, and match it to one of its many classifications of objects. To get there, programmers use a method known as machine learning, which doesn’t involve sitting a computer down and telling it what is what before handing it a certificate.
Instead, machine learning for image recognition involves giving the algorithms a training data set – in this case, images of several different subjects – and letting them sort each image into the appropriate classification automatically. The algorithms then go through a continual process of trial and error, identifying the images and fine-tuning themselves along the way until they can competently identify and differentiate one from another. How an algorithm builds its parameters for each classification and gets to that stage is largely the algorithm’s own doing, with little direct human input, and known only to the computer itself.
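That trial-and-error loop can be sketched in miniature. The following is a deliberately toy illustration, not any real vision system: the ‘images’ are reduced to two made-up feature numbers, the labels and thresholds are invented for the example, and the learner is a single perceptron whose parameters are nudged after every mistake.

```python
import random

random.seed(0)

# Hypothetical training data: each "image" is boiled down to two features
# (say, roundness and redness), labelled apple (1) or not-apple (0).
apples = [(0.9 + random.uniform(-0.1, 0.1),
           0.8 + random.uniform(-0.1, 0.1)) for _ in range(20)]
others = [(0.3 + random.uniform(-0.1, 0.1),
           0.2 + random.uniform(-0.1, 0.1)) for _ in range(20)]
data = [(x, 1) for x in apples] + [(x, 0) for x in others]

# Parameters start out random; the machine, not a human, ends up setting them.
w = [random.random(), random.random()]
b = 0.0

def predict(x):
    """Guess whether a feature vector looks like an apple."""
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

# Repeated passes over the data: guess, check, and fine-tune after each error.
for _ in range(100):
    for x, label in data:
        error = label - predict(x)   # 0 when right, +/-1 when wrong
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

accuracy = sum(predict(x) == label for x, label in data) / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

The final weights encode ‘what makes an apple an apple’ only as numbers tuned to the training set, which is precisely why a human can’t simply read the machine’s reasoning back out of them.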
Trouble is, a computer can easily identify an apple if it looks like the apples in its training data, but give it a slightly altered image and it might not know how to pick an apple out of, say, a basket of nectarines. Such variations are to be expected in the chaos of a real-world scenario, as was demonstrated when researchers in the United States managed to confuse a self-driving system by placing strategically positioned stickers on a stop sign. To a human, the slightly vandalised but far from obscured sign was still plainly a stop sign; the algorithm, however, misread it entirely.
That being said, the researchers note that successfully ‘hacking’ a self-driving system in such a way would require attackers to have access to its programming and to understand how it classifies road signs, which is no easy task in itself. Still, it shows that how autonomous systems see the world and how we see it are fundamentally different.
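The same toy framing hints at why a few stickers can be enough. Everything below is hypothetical – no real sign classifier works on three colour fractions – but it shows how a small, targeted nudge to the input can push it across a learned decision boundary, even though a human would still read the sign correctly.

```python
# Hypothetical "signs" summarised as (red, white, black) colour fractions.
# These prototypes stand in for what a system learned from training data.
stop_sign = (0.70, 0.20, 0.10)     # mostly red
speed_limit = (0.30, 0.60, 0.10)   # mostly white

def classify(sign):
    """Label a sign by whichever learned prototype it sits closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "stop" if dist(sign, stop_sign) < dist(sign, speed_limit) else "speed limit"

# A clean stop sign sits near its prototype and is classified correctly.
clean = (0.68, 0.22, 0.10)
print(classify(clean))       # prints "stop"

# A few strategically placed "stickers" cover some red area with white,
# nudging the features just past the decision boundary.
stickered = (0.45, 0.45, 0.10)
print(classify(stickered))   # prints "speed limit"
```

To a human both inputs are obviously variants of a stop sign; to the classifier, only raw distances in its learned feature space matter, which is the brittleness the sticker experiment exploited.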
Extrapolate these differences to the ‘trolley problem’ scenario, and the artificial mind may not be able to make distinctions between young and old, or between social standings or ethnicities. To the autonomous machine, what it sees is just a set of objects sorted into its own general classifications, nothing more specific. Seen in that light, Germany’s rule defining all humans as equal is a necessary clause in the legal rulebook for autonomous systems, because factoring the ethical considerations of a society into a computer’s decision-making process is all but impossible.
Driving enthusiasts may take potshots at the general incompetence of the average human driver, but the push towards a Level 5 autonomous car is underlining just how complex and multilayered the act of driving really is, especially when it comes to subtle nuances like the visual cues and mannerisms drivers give off to one another.
The engineers behind Google’s Waymo, which has already launched its own self-driving service, are steadily learning the challenges of making driverless cars more welcoming and reassuring for customers, and finding ways to fine-tune the technology for the public. Some experts in the field suggest that the right course for the adoption of driverless cars might be educating people on how they actually operate rather than depending on the technology to mimic human nature – after all, you wouldn’t apply the same expectations to a non-human being.
That isn’t to say that autonomous cars are a hopeless pursuit and we should ditch the whole venture and go back to making good old human-driven cars.
However, as the events of 2018 have shown, and despite the progress companies like Tesla, Waymo, and Uber have made in this field up to this point, we are still a long way off from the real deal. Whether 2018 marks the halfway point in this story, or we are further afield still, is unknown, and we can only find out by understanding the technology and pushing forward.