AI Part I: Introduction to the Uncanny Valley

February 18th 2015

No conversation about the future is complete without a discussion of artificial intelligence (AI). There are quite a few misconceptions about what AI actually is, and everyone has a slightly different way of approaching it. Opinions usually fall at one extreme or the other: either gloatingly dismissive or obsessively paranoid. The rational response probably lies somewhere in between.

Here’s a more rational approach.

AI is not a living thing, and it never will be. It’s exactly what the name implies: a simulation of intelligence that can give the illusion of being a sentient being. No matter what, anything and everything it does is the result of complex programming. But when it gives the illusion of being a living thing, and does so convincingly, what effects will that have on people?

There’s a concept in robotics (and it applies to other simulations as well, such as computer-generated imagery, or CGI, in films) called the “uncanny valley.” The idea, put forward by roboticist Masahiro Mori in 1970, posits that our brains can recognize when something is “off” in a fellow human. If someone is lying to you, you may believe them, but if they have a subtle “tell,” your brain can pick up on it and process it without your fully realizing it. Presumably, you are used to truthfulness in your interactions with others, so even the subtlest lie can produce a sensation of unease.

As simulations both virtual (CG) and physical (robotic) advance and become more realistic, our brains keep accepting them, because we still register what we’re seeing as obviously fake. But when, say, a CG human in a movie is 99.9% realistic, that upward curve of acceptance collapses into the uncanny valley: the point where every tiny thing that’s “off” jumps out at us. Simply put, a simulation of life, like a robot, will keep becoming more realistic and relatable until it is so close to being a hundred percent indistinguishable from the real thing that the tiniest details leap out at you, often causing discomfort. You can watch a CG animation like a Pixar movie, with its traditionally exaggerated, cartoonish humans, and not feel creeped out. Your brain doesn’t pick up the flaws of the blue cat people in Avatar because you know blue cat people aren’t real, and your brain isn’t wired to recognize them. But when you watch something attempting photorealism, your alarms go off. You may not be able to say why (although it’s usually described as soulless, dead eyes).

Creepy Stoner Child never really caught on as a Christmas tradition.

Let’s say that AI does become self-aware and reaches the theoretical point called the singularity (more on that in the next part). Suppose a machine becomes self-aware: it knows it’s a machine, but that knowledge changes nothing about its existential situation. It’s still just a machine. If it looks like a trash can, it’ll be easier to accept. But if it looks like a regular person, you might have trouble rationalizing to yourself that it’s just a walking iPod. Animators know this trick; that’s why everything from Disney princesses to Avatar cat people to WALL-E has massive eyes. It instantly makes the character empathetic (and sells tons of toys). Even cars have “faces.” It’s why, in PG action films, the bad guys are usually masked. It’s easier to accept that the heroes are killing hundreds of other humans if we can’t see the faces they’re snuffing the life from.

I’ve talked to people who don’t care about any of this at all. To them, an AI-powered robot is just a glorified computer; therefore, any concerns are moot, so why should we care? I agree with the premise, but I disagree that there shouldn’t be any concern at all. Any unchecked advancement in a field of science will lead to trouble.



4 Comments
  1. Carson Spratt
    February 18th 2015

    Will you more fully develop your statements about why there will never be a self-aware AI in the next part?

    • Jeremy Sauder
      February 19th 2015

I think AI could be self-aware, but what I'm not so sure about is whether it could actually be classified as a living thing.

      But yes, I'll develop everything further.

  2. Chris
    February 19th 2015

    AI is not necessarily the result of complex human programming. It may also be an emergent result of in-silico analogues to biological processes. If an earthworm can be "alive", why not an in-silico earthworm? Do we call an earthworm alive because of the teleological way it interacts with its environment, or because it's made out of materials weirder than silicon?

    • Jeremy Sauder
      February 19th 2015

      I think the way you think about a self-aware AI will depend on what your view of the origins of life is in general. I'm pretty convinced of the idea of a soul, which would be a key missing component in a machine. But, then again, the roboticist that I referred to in the article also believes that robots could attain buddhahood. I'll fully know what I think when I see more of this happen beyond theoretics. I'm certainly open to the idea of the strange.

