Do Companies Dream of Numerical Sheep?

A few years back, there was what seemed to me a sudden spike in the number of articles on the ethics of self-driving cars, and perhaps for good reason. Machine learning was really hitting its stride, the (Western, technocratic) world was excited, and as with all novel technologies, critics came to temper our enthusiasm. The main question behind these articles was this: how should we program the car to approach ethical dilemmas? Should it protect the driver? Apply a utilitarian principle? A deontological one? The normative ethicists are finally getting their time in the spotlight.

The reason (or at least one reason) we might care is that we want to know where the responsibility lies. But it seems in all these discussions, one option does not really get taken seriously: that the car itself is responsible.

Controversial, maybe, but here’s what I mean. Obviously, it does not mean that we throw a Tesla in jail and wait for the court dates. But just as a good criminologist does not ask whether a criminal is morally responsible for their criminal actions so much as what factors drove the individual to commit those actions, so too should we ask the same of the self-driving car. In other words, we ought to ask how and why the car was put in the situation in which it had to make these moral decisions.

Instead of assuming that these moral decisions come up inevitably, we should challenge their existence. Do self-driving cars necessitate a restructuring of our cityscape, forcing us to rethink the concept of roads and sidewalks, or more generally how our pedestrian lives are entangled with our driving lives? Or, for that matter, is self-driving technology best suited to private vehicles? Could it not be implemented in a better public transit system? Would this latter possibility not mitigate many of the risks associated with self-driving vehicles?

(Let’s not even get started on a class analysis of the ethics of self-driving vehicles, which is grossly absent from the articles I have read on mainstream news outlets. Suffice it to say that the people at risk in these real-life trolley problems are mostly not going to be the rich.)

My point here isn’t really to advocate for any of these restructuring agendas (though I am a fervent supporter of investment in better public transportation); I am far from qualified. What I want to do is expose some assumptions in our ethical discussions of self-driving vehicles, and show how these assumptions subtly reinforce a particular vision of the world as natural, and thus unchangeable. The world is not. And, to make my point clear, maybe we should ask: why do we even want self-driving cars in our lives anyway?

Now, I have been speaking of self-driving cars as a future event, something that has begun to creep into our lives but still remains far enough away for us to “figure it out” before it shows up in full. But really, the fundamental problem I have with our discourse on self-driving cars is nothing new. It is not, as many writers make it seem, a unique problem posed to us by a radically new form of technology, i.e. machine learning/artificial intelligence. The technology may be new, I guess, but all technology alters the way we interact with each other and the world. The dearth of imagination in our discussion of the ethics of self-driving cars, however, represents a failure that is just as present in our discussions of business ethics.

If anything, corporations are forms of artificial intelligence. They operate with a certain telic logic (without being reducible to simple binary operations) that we might, if we expand our definition a little, even call intelligence, and this intelligence is no less artificial than that of a computer chip. In fact, we’ve begun to treat corporations as sentient beings, capable of intent. Just take a look at the whole fast-food Twitter thing from a while back.

[Image: Wendy’s as Smug Anime Girl]

The psychologists among us might point and cry, “That’s just projection! Of course we all know they don’t really think!” And most of us will concede the point (I will too, for now, as there isn’t enough room here to get into a debate on the philosophy of mind). Yeah, corporations, like self-driving cars, don’t really think or even act. They are not moral agents, so, the reasoning goes, their place in the world is not ours to rethink. And as with self-driving cars, the result is that we look elsewhere for moral responsibility, which of course means the focus is shifted back onto the human.

As ethicists, we can only ever ask whether the people who work for corporations are morally responsible for unethical behavior by these same corporations. As individuals, we can only ever hope to act as ethically as we can within the confines of our roles in these corporations. In both cases, the lines have been drawn. The existence of corporations itself cannot be understood morally, or so they say. No, we do not need to ask whether corporations really need to exist, especially if our concern is living in a more ethical world. We take them as a given, and proceed from there. But why?

For so long, artificial intelligence has captured our collective imaginations. In our college dorms, we might ask ourselves whether our technologies may overtake us, drawing on our extensive knowledge of popular science-fiction like The Matrix or the Terminator franchise. We might ask whether we have the power to unplug our own creations. Well, it looks like the problem isn’t whether we are able to unplug or not, but whether we even know what it is that needs unplugging. Our current situation suggests that, no, we do not.

It’s weird, because we have so little trouble casting humans out of our world. The United States is particularly egregious in this matter. We throw them in prisons and mental asylums, deport them, murder them. It is so easy for us to tell someone: your existence threatens my world, so you are no longer welcome. Yet we never truly think to do the same with corporations or cars, even though we believe they do not have feelings and cannot suffer, which, if anything, ought to make it so much easier to say goodbye.

So much for our so-called Humanism.


Further Reading:

https://www.newyorker.com/science/elements/a-study-on-driverless-car-ethics-offers-a-troubling-look-into-our-values
