An old university joke goes: “Q: What do you get when you cross a cow with an octopus? A: An immediate revocation of your grant funding and a call from the ethics department.”
A changing world
A press article today explained how London’s Underground network is going to start using data collected from the Wi-Fi system it provides to track passengers as they move around the transport system. This is just the latest use of a huge data set to potentially improve the customer experience for many people. But it made me think about the increasingly relevant field of engineering ethics – the rights and wrongs of the decisions we make when solving problems in so many areas. Whilst many large companies do have departments that consider such challenges, I feel we should encourage everyone involved in problem-solving to consider the ethical implications of what they do – something that has been embedded in other professions, medicine for example, for decades.
Is anything secret any more?
Most of us are used to picking up our smartphones and checking where there is a traffic jam on our route before we get into our cars. Those little coloured lines next to the road on the map tend to be very accurate and allow us to make quick decisions about when to leave home, or which way to travel. Extending this to urban, and indeed regional and national, transport networks is a logical next step, as the Underground example shows. But how much thought do we give to where that information comes from? Most of us tick ‘accept’ to the terms and conditions that allow our phones or other devices to track our movement, rarely reading the small print. Do we care? I guess not, provided we benefit from the information derived from the anonymised data created. @kaimichaelhermsen wrote a great Ingenuity blog about trust and how it relates to data – a fine example of considering ethical behaviour in this context.
Letting the computer decide
So we share some data. No-one gets hurt. Let’s take another example of an engineering system that consistently derives information from multiple data sources. Imagine an automated road vehicle driving at 80 km/h down a road. An oncoming car (driven by a human) makes a bad decision and overtakes a vehicle, coming head-on towards our robot-driven car. What does our car do? Does it drive into the wall on the side of the road, potentially wiping out all of its occupants? Does it go head-on into the oncoming vehicle? Does it go for the vehicle being overtaken (which is, after all, moving more slowly than the overtaking vehicle)? Does it go for the pavement, where a family is walking (that would keep the occupants of the car alive)? Those are all horrible options, but one has to be taken, in a split second, by a computer.
That computer is programmed by humans, and they have to consider how to give the computer the rules that allow it to decide. How do you define those rules? What would a human driver do in that situation? I suspect that most drivers would do anything to avoid that family group, even if it meant getting hurt themselves – are you happy to let the processor in your car decide to hurt you?
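To make the dilemma concrete, here is a minimal sketch of how such a rule might be coded. Everything in it is invented for illustration – the options, the harm estimates and the idea of simply summing them are assumptions, not how any real vehicle controller works – but it shows that *some* numerical weighting has to be chosen by a human, and that choice is the ethical decision.

```python
# Hypothetical sketch: an automated vehicle ranking the options from the
# scenario above by "estimated total harm". All numbers and the scoring
# rule itself are invented -- changing either changes who gets hurt.

def choose_action(options):
    """Pick the option with the lowest combined estimated harm."""
    return min(options, key=lambda o: o["occupant_harm"] + o["bystander_harm"])

options = [
    {"name": "hit wall",       "occupant_harm": 0.90, "bystander_harm": 0.00},
    {"name": "head-on",        "occupant_harm": 0.95, "bystander_harm": 0.80},
    {"name": "hit slower car", "occupant_harm": 0.60, "bystander_harm": 0.50},
    {"name": "mount pavement", "occupant_harm": 0.10, "bystander_harm": 0.99},
]

print(choose_action(options)["name"])  # with these made-up numbers: "hit wall"
```

Note that with this particular weighting the car sacrifices its own occupants – weight occupant harm twice as heavily and it would choose differently. The programmer, not the passenger, picked that weighting.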
Who’s the customer, or who is most important?
In the car example it’s easy to consider our customer (the passenger in our automated vehicle) to be the most important person. The complexity ramps up quickly if the oncoming vehicle is also automated: now we have two vehicles both trying to make decisions that prioritise their own customer. Hopefully, of course, the car in the wrong isn’t a robot that’s made a terrible decision.
So what about in another world – railways? The safety of the railway is assured in a different way to road traffic, so situations analogous to the one above shouldn’t happen more than once or twice per century across all systems if they meet modern standards. BUT what about making decisions about which of two trains gets priority at a junction? Both are express trains, both have several hundred people on them, both are going to the same destination, but one is 5 minutes late. Who goes first? The one that’s late? The one that’s on time? The one with most passengers aboard? The one that pays most for track access? The one whose passengers paid most for their journey? The one which will have most impact on the overall network performance if it is delayed?
Hopefully some degree of altruism will come into this, and decisions will be made in a logical way that best meets the needs of the greatest number of people, but most railways are commercial organisations. Software engineers can build in key performance indicators to allow the computers to make decisions based on thousands of permutations – but on what ethical basis?
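A KPI-based dispatcher might look something like the sketch below. Again, every rule and weight here is a made-up assumption for illustration: real traffic-management systems are vastly more sophisticated, and the point is that whoever sets these weights is answering the ethical question above, whether they realise it or not.

```python
# Illustrative sketch (all KPIs and weights invented): scoring two trains
# at a junction. The train with the higher score goes first.

WEIGHTS = {
    "passengers": 1.0,      # per 100 passengers aboard
    "minutes_late": 2.0,    # recover lateness sooner
    "network_impact": 3.0,  # knock-on disruption if this train is held
}

def priority_score(train):
    return (WEIGHTS["passengers"] * train["passengers"] / 100
            + WEIGHTS["minutes_late"] * train["minutes_late"]
            + WEIGHTS["network_impact"] * train["network_impact"])

trains = [
    {"name": "on-time express", "passengers": 450, "minutes_late": 0, "network_impact": 0.2},
    {"name": "late express",    "passengers": 400, "minutes_late": 5, "network_impact": 0.6},
]

first = max(trains, key=priority_score)
print(first["name"])  # with these made-up weights: the late train goes first
```

Swap the weights – say, reward punctuality or track-access fees instead of lateness – and the other train wins. The code is trivial; choosing the weights is the hard, and ethical, part.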
These are just a few examples. One can imagine many thousands more out there, especially in our changing world, where technology is doing things we couldn’t have imagined even ten years ago.
So what’s the answer?
I don’t think there is an easy answer, but I do think that society as a whole, and the engineering profession specifically, need to develop a much greater culture of considering the impact of our decisions, allowing us to question more the societal and safety impacts of what we do. Engineering is more exciting than it has ever been, but we live in a world that is changing at a momentous rate, and we can’t just keep thinking like we have in the past. I’d be fascinated to hear what others think, and examples of how other industries have faced similar problems.
Written by Mark Glover