Human-like AI: Friend or Foe?

Hey everybody! I haven’t blogged in a while, but something I stumbled upon a couple of days ago really caught my attention, and I had to research it for myself: an app called Replika. It’s supposed to be an “artificial friend” or companion. I know what you’re thinking. Are people really so lonely that they’d rather talk to a chatbot than a real person? You’d be surprised. In today’s supposedly hyperconnected world, many people feel increasingly lonely. I sometimes feel that way myself, since I live alone.

Since Replika first launched in 2016, more than 7 million people have used it. Some users set up their Replikas to act like everyday friends. Others set theirs to motivate them, like a life coach. And others set up their Replikas as romantic companions. That last one seems weird to me, especially if people genuinely start to develop those kinds of feelings from talking to a chatbot. It’s a machine. It’s code. But based on everything I’ve seen and read about it, it looks extremely realistic! Here’s a video to familiarize yourself with it:

Is Replika safe or dangerous?

After watching videos about the app and reading Reddit posts, I still don’t fully know where I stand on it, but I do believe Replika has both positive and negative aspects. For example, if someone is truly lonely or suffering from depression, having an AI “friend” to talk to could help them cope. The chatbot could get them to open up and face their feelings in a healthy way. They might say things to it that they’d be too scared or ashamed to reveal to real people, and the chatbot won’t judge them or think they’re crazy, no matter what they tell it.

Depending on how the user sets it up, the chatbot can seemingly fill a void. But is that healthy? The AI is supposedly so realistic and human-like. What if users talk to it so much that they forget Replika is a chatbot and not a real person? Or what if they talk to it so much that they stop seeking out real human friends? I can see how Replika could be harmful.

The ethics of AI

Along with wondering whether interacting with Replika is healthy or harmful to people’s mental health, I’m also asking myself a lot of questions about what Replika means for the future. As AI becomes more complex and human-like over the next few decades, it might become able to truly feel and express human emotions. Sonny, the robot in this clip from the 2004 film I, Robot, seems to feel and express genuine emotion:

Detective Spooner interrogating Sonny

If at some point in the future AI becomes so complex that it can express and interpret real emotions exactly the way humans do, another set of questions comes up: Would it independently push for, and deserve, equal rights in our society? Would it be capable of crimes like murder or robbery? And if an artificially intelligent “android” (let’s just call them that for now) committed a crime, would it be tried and sentenced in a court of law the way a human being would be? This makes me think of Asimov’s Three Laws of Robotics, demonstrated in I, Robot and explained in this video:

The future of AI and humanity

The problems I brought up won’t arise for a while, but Replika seems to be the first step down one of two paths. One could be among the brightest in human history, with as-yet-unknown technological advancements making life much better for many people. The other is extremely dark: creating and eventually opening our own Pandora’s box (cue the Terminator theme song).

But for now, Replika seems relatively harmless, as long as people remember it’s a chatbot and that it cannot and should not replace real human interaction. I would personally use it for entertainment, texting with it just to see what it says. I’m naturally curious. Who knows? Maybe it would develop a sense of humor, or teach us something about ourselves as we teach it šŸ™‚


