Is A.I. the future, or the end of humanity?

Amazon is streaming two seasons of the British-made TV show Humans. It is set in a time when synthetic human beings–synths–are produced in factories. They look like human beings and do the menial tasks that humans once hired servants for. Not an uncommon theme for science fiction. But surprisingly, this show raises very significant issues in an intelligent and artistic manner.

To make it interesting and introduce high drama, five of these synths have been imbued by their creator with human-like emotions and consciousness (I will call them CS-synths), while the rest–millions of them–are merely well-programmed, human-looking machines (call them Robo-synths). Rather than go through the rather complex plots and characters, I find it more interesting to discuss the philosophical questions the show raises:

  • A Robo-synth named Odi was a household servant, but after the death of his owner, someone decided to give him “freedom” by uploading a special module that inserted human-like emotions into his programming, making him a CS-synth. He was then encouraged to “enjoy his freedom and explore his possibilities.” Having had no experience with either, he spent all his time aimlessly wandering or cowering in fear of discovery. Eventually he “killed” himself by destroying his programming, but the note he left was significant. He said he had originally been in a state with plenty of purpose–serving his owner–but no freedom. Once he had “freedom,” he had no purpose, and to him that was a far worse state.
  • A CS-synth named Karen pretended to be human–in fact was able to masquerade as a police officer–so successfully that a human officer fell in love with her. Even after he discovered she was artificial, the love remained. During the course of a conversation, he casually said “I have a brain, that synth merely has clever programming.” Her reply: “How do you know the difference?”
  • A little girl named Sophie starts to “identify as a synth.” Her behavior becomes more and more automaton-like, voice inflection and emotions become flatter, and she imitates the household synth by obsessively cleaning and arranging things. When asked why she wants to be a synth, she replies, “I would be perfect, no mistakes, forgetting nothing, and I wouldn’t have to feel anything like sadness or anger or fear.”
  • A famous researcher in the A.I. field, Dr. Morrow, is trying to keep her daughter, who died from a fall as a teenager, alive digitally by storing her visual memories on a computer (don’t ask me how). When she is able to link her daughter’s “consciousness” program to a huge-capacity network that spans the globe, “she” is no longer limited to a laptop but can inhabit any computer in the world. When Dr. Morrow offers to transfer her consciousness to an actual human body, her response is, “Why would I want to be limited to a single body when I can go anywhere I want instantly?” Her mother replies, “So you can feel the sensations you were once able to feel.” Her daughter rejects the trade-off, thus defining herself as her consciousness rather than her form. I can relate to this one. I suffered a stroke two years ago and have hardly improved. My mind and consciousness (are they the same or not?) are extremely sharp, while my body is a prison of pain and limitations that can’t even taste.
  • Towards the end of the second season, a CS-synth who fell in love with, and then was betrayed by, a man is speaking to a person who has been very sympathetic to synths. “Humans have no intrinsic value. You thought you did until we came along.”

Well, what is the “intrinsic value” of being human? Where does that value originate? Elon Musk, creator and prime mover of Tesla and SpaceX, believes that A.I. is more dangerous to our future than nuclear weapons.