Machine Ethics Isn’t Actually a Problem

At least, it’s not in any way a new one.

A while ago, The Economist ran an article posing the question of how we should deal with drones and other artificial intelligences that have to make ethical decisions. Alex Tabarrok pointed out that the question is not entirely hypothetical, with the emergence of Google’s self-driving car: a truly self-driving vehicle has to decide what to do if, in a variation of the classic “trolley problem,” it were hurtling toward a crowd of pedestrians and faced the choice to kill them or to swerve and kill just one pedestrian.

The issue is interesting on its face, but I don’t think the question really stands up to much additional thought. It rests on a basic misunderstanding of the nature of ethics and the nature of computers, at least as computers currently exist.

It’s useful to think about the disconnect by analogy to regulatory systems. There are two basic ways to construct, for example, financial regulations: they can be rules-based or principles-based.

A rules-based regime might say “Systemically important institutions must maintain an additional cash capital buffer equal to 3.1% of total demand deposits plus 0.8% of all debts with maturities less than 180 days. A bank or other regulated entity is considered ‘systemically important’ and subject to additional scrutiny if it has total liabilities greater than $38 billion, or is a counterparty to more than 10% of the notional value of all contracts in any of the derivative classes named in subsections c-f, below.” The rule is algorithmic–if you follow the rules precisely, you are in compliance, and there’s no uncertainty involved.
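To make the “algorithmic” point concrete, here is a minimal sketch in Python of that (admittedly made-up) rules-based regime. All of the thresholds are the hypothetical figures from the passage above, not real regulatory numbers; the point is only that compliance can be checked mechanically, with no judgement involved.

```python
# A sketch of the made-up rules-based regime above, expressed as an
# algorithm. Every threshold is hypothetical, taken from the passage.

def is_systemically_important(total_liabilities, derivative_shares):
    """'Systemically important': total liabilities over $38 billion, or
    more than 10% of the notional value of any named derivative class.
    derivative_shares is the bank's fraction of each class."""
    return (total_liabilities > 38e9
            or any(share > 0.10 for share in derivative_shares))

def required_buffer(demand_deposits, short_term_debt):
    """Extra cash buffer: 3.1% of demand deposits plus 0.8% of debts
    with maturities under 180 days."""
    return 0.031 * demand_deposits + 0.008 * short_term_debt

def in_compliance(bank):
    """Compliance is purely mechanical: follow the rules precisely and
    there is no uncertainty about the answer."""
    if not is_systemically_important(bank["liabilities"],
                                     bank["derivative_shares"]):
        return True
    return bank["cash_buffer"] >= required_buffer(
        bank["demand_deposits"], bank["short_term_debt"])
```

A principles-based rule, by contrast, resists this treatment: there is no function you can write for “sufficient to maintain solvency during times of heightened financial stress” without someone first exercising judgement to pick the numbers.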

A principles-based regulatory regime, by contrast, might say “Systemically important institutions (those institutions whose default would pose a substantial risk of financial sector contagion) must maintain an additional capital buffer sufficient to maintain their solvency and liquidity during times of heightened financial stress.”* In this case, there aren’t clear lines between “OK” and “bad.” What one person honestly believes constitutes compliance may seem to another person to be in clear violation. (Obviously, there’s more room to fudge the rules to your own advantage, too, but leave that aside for the moment.)

Ethics is clearly a system of principles-based regulation. There are no readily applied algorithms that spit out answers–even once the principles are established, individuals have to use judgement to apply abstractions to specific scenarios.

That leads us to the central problem: computers do not (to my knowledge) exercise judgement. They are sets of algorithms. While we can create a computer that appears to parse the meaning of words, it does so based on a search mechanism that finds information, not based on a logical application of abstractions. (No, Toronto is still not a city in the United States.) As long as that is the case, computers don’t have any real independence–and therefore, do not make decisions that could give them moral culpability. We would not hold a misprogrammed calculator responsible for poor performance on a math test–that would be the fault of the calculator’s programmer (and the unprepared math student). Similarly, we can’t hold a drone responsible for shooting what it registered as a threat, even if that threat was actually a bouncy castle.

To the extent that the emergence of more autonomously-operated technology poses any ethical problems, those problems just require us to come to some agreement on existing moral questions. The trolley problem is still the trolley problem; the decision of whether to fire a missile at a suspected insurgent based on limited information remains the same; we still have to decide how much collateral damage is acceptable. But any sort of widespread use of technologies programmed with an answer set for those situations would seem to require a set of answers that an overwhelming majority of people would find acceptable, to avoid public uproar. I suspect that widely acceptable answer sets would include a bias toward inaction, which poses its own ethical problems.**

The next question is who bears ultimate responsibility for the decisions of programmers, and what kind of accountability mechanisms are appropriate. Translating principles-based systems into algorithms is imprecise and unpredictable–we saw that with the financial crisis. If algorithms are more often making decisions of moral relevance, we need to decide whether someone can be sent to jail for a program whose unpredicted outcomes we find objectionable. Presumably the programmer’s responsibility for a wrongful death lies somewhere between that of a totally uninvolved bystander and that of an actual murderer. That seems like the new and interesting question here.

*(Yes, I made both of those passages up. Yes, I had fun doing so. I hope they illustrate the differences.)

**A commenter on Tabarrok’s blog post also notes that Google’s engineers and programmers will be more focused on avoiding the situation altogether than deciding whether to kill three grannies or one infant. That is as it should be. This probably deserves a post in itself, but in general I propose that one’s ethical duty in a real-life trolley scenario is to break the rules of the ethical dilemma (to beat the Kobayashi Maru), rather than to accept the deaths of the five workers, or of the fat man.

I love hard questions!

Via the wonderful and intelligent Meb Byrne: “Is access to healthy food a basic human right?”

My initial response:

“In my line of thinking, no. I don’t think any basic human rights make material claims on other people–human rights entitle you to protection from other people’s actions, but (the way I see it) they usually don’t require action from other people on your behalf.

I looked at the Universal Declaration of Human Rights while thinking about this. It does include food, along with housing, education, and various other social safety net-type things. Those aren’t the only parts of the UDHR that I disagree with, but they’re the point where I object most strongly. Will blog shortly on why, although I don’t know if I’ll make sense.

What do you think?”

The addendum to my response:

Human rights are, by definition, inalienable and inherent to our humanity. That means, to me, that they have been consistent throughout time: human rights are the same today as they were when the concept emerged during the Enlightenment, when they were the same as they had been for the Ancient Greeks, immutable since the dawn of civilization. (Of course, they haven’t always been honored, but we’ve gotten better at it over the years–especially the last 200 or so.) To assert that the definition of human rights changes to fit the feelings of the time would imply that (perhaps) the various bouts of enslavement over the millennia were not, in fact, violations of the enslaved’s human rights.

Given that the definition of human rights should be constant over time, I don’t think we can include any material rights in the definition–that includes food, water, housing, medical care, &c. Basically, I can get outraged over political/religious/ethnic repression & wanton violence throughout the years, and in each case I can find someone who was committing an affirmative violation of human rights. I can’t muster the same ire over the Irish potato famine–or, for that matter, the bubonic plague, or the Indonesian tsunami, or any other essentially natural condition. I don’t think it’s philosophically consistent to hold all of Medieval Europe responsible for the fact that the population periodically got out of control and people starved, any more than we can call it a human rights violation that they didn’t have access to doxycycline.

Obviously, I might be very wrong here. If I am, please tell me!


(None of this, of course, suggests that I oppose public provision of social services. Obviously, governments exist to do more than protect basic human rights. It’s worth acknowledging, though, that we provide social services because 1) we like having other people around who aren’t starving, 2) we all benefit from having an informed, productive workforce to do stuff for each other, and 3) we like knowing that if something goes wrong, we’ll have a safety net available–among lots of other reasons. None of those rises to the level of human rights, though, and we need to admit that in policy discussions.)