Your Coworker is a Machine

This article was inspired by a recent conversation with Dima Korolev. In addition to being a top-notch thinker and conversationalist, Dima is an accomplished computer science professional who has worked at Google, Microsoft, and Facebook.

Let’s propose a hypothetical. You’re an employee at a large software company and have recently been working with an engineer who is stationed remotely. Though the two of you have never met in person, you’ve talked over the phone and on video chat numerous times. Overall, you’d describe them as a kind, intelligent, and exceptionally talented coworker. Hell, you’d be happy to grab a beer with them at Happy Hour if they worked in the office.

Sorry to break it to you, but your coworker is a machine. It’s an artificial intelligence developed to the point of being indistinguishable from a human. Both its face and voice are computer-generated.

Do you feel uneasy? Do you feel deceived?

Let’s take this one step further. Should this coworker be entitled to the same legal protections afforded to a human? While many would instinctively shout “no,” this isn’t a simple question, and I would advise against being so quick to reach a conclusion.

A natural jumping-off point for approaching this question is to ask why we, as humans, feel entitled to natural rights. This should be done in a non-anthropocentric way: simply stating “because we’re humans” is not enough. That is itself a very complicated question, but let’s keep it simple. In general, most people can agree that humans deserve rights at the very least because we have “consciousness,” the ability to suffer, and self-awareness. Prior to discovering that this coworker was a robot, we would have believed them to possess all of those things. Hence, they deserve human rights. Right?

As it turns out, our philosophical dilemma is not so easily solved. Just because we might’ve believed that our coworker possessed these traits doesn’t mean that they actually do. The Chinese room thought experiment, proposed by John Searle in 1980, is particularly applicable in this case. It goes like this:

  1. Let’s assume that there exists an artificial intelligence program capable of behaving as if it understands Chinese.

  2. This program accepts Chinese characters as input, follows a set of instructions, and produces Chinese characters as output.

  3. It does this task so well that it can pass the Turing Test and convince a human Chinese speaker that it is another live Chinese speaker.

Does this program actually understand Chinese, or is it just really good at simulating the ability to understand Chinese? Searle argues that the computer is merely simulating the ability to understand Chinese. His logic is this: if a human who doesn’t speak Chinese were given access to the program’s instructions, they would be able to execute the program manually and converse with a Chinese speaker without actually understanding a word of what was being said.
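
As a toy illustration (not Searle’s actual construction), here is a minimal Python sketch of such a rule-following “room.” The rulebook and phrases are invented for the example; the point is that whoever executes the lookup needs no understanding of Chinese at all.

```python
# Toy illustration of the Chinese room: the "room" maps input symbols to
# output symbols by mechanically following instructions. The rulebook below
# is purely hypothetical and exists only to show the shape of the argument.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "It's nice today."
}

def chinese_room(message: str) -> str:
    # Look up the reply mechanically; no meaning is ever consulted.
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

A sufficiently large rulebook could keep a conversation going indefinitely, yet nothing in the process ever involves understanding what the symbols mean.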

That said, things have gotten far more sophisticated since 1980. Artificial intelligence programs in the 21st century aren’t merely executing a list of hard-coded instructions; they’re often black-box algorithms that have learned to perform a specific task from immense amounts of training data (or, in some cases, from being placed in a training environment).

Now, I’m by no means an artificial intelligence expert, but let me be forthcoming: modern machine learning techniques are incredibly dumb, at least when compared to any higher-level organism. If you want to train a model to accurately differentiate between cats and dogs, you had better have thousands of unique images of each. Further, the model will only be able to do exactly what it was trained for, and any image it classifies must be reasonably represented by its training data. Does this sound like something that actually understands the differences between cats and dogs?
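
To make that concrete, here is a minimal sketch of the kind of supervised training described above, assuming TensorFlow/Keras is installed and a hypothetical data/train folder contains labeled cats/ and dogs/ subdirectories. Nothing in it encodes what a cat or a dog is; it simply fits pixel patterns to the two folders it is shown.

```python
# Minimal cats-vs-dogs classifier sketch (hypothetical "data/train" layout:
# data/train/cats/*.jpg and data/train/dogs/*.jpg). The model only learns to
# separate the classes it is shown, and it needs many labeled examples.
import tensorflow as tf

# Load labeled images from disk; the labels come from the folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

# A small convolutional network: it learns pixel patterns that statistically
# separate the two folders, nothing more.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # exactly two classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training is pure pattern-fitting over the examples provided; images unlike
# the training data will be classified poorly or nonsensically.
model.fit(train_ds, epochs=5)
```

Hand the trained model a picture of a fox and it will still confidently answer “cat” or “dog,” because those are the only categories it has ever been fitted to.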

The reality is that current machine learning techniques, which rely on massive amounts of data and are highly task-specific, are very clearly producing simulated understanding. We can’t even begin a discussion of machine rights until our machines learn in a way that is far more natural, intelligent, and consistent with obtaining legitimate understanding. I suspect that such a level of learning won’t be reached instantaneously in a black-and-white sense; there will likely be an in-between period of uncertainty, and in that period I would advocate for erring on the side of preventing any potential suffering. That said, we are certainly far from it.

So, back to our coworker. If this hypothetical is taking place now, in 2019, I say this: such an artificial intelligence is a commendable achievement, but it certainly is not entitled to rights or legal protections. If this hypothetical is taking place in the distant future… it might just be time to think about amending the Constitution.

Written by Daniel DiPietro, Edited by William Turchetta & Alexander Fleiss