AI Chronicles: The Rise of Artificial Intelligence – Science Fiction’s Favorite Technology

Many of us remember the opening scene of the original Blade Runner (1982). It’s said that empathy is one of the few things that can distinguish a humanoid artificial intelligence – known as a replicant in the film – from a real person. Leon is a replicant being tested by a Blade Runner named Dave Holden. The moment Holden starts asking questions about family relationships, Leon pulls a gun and shoots the interviewer.

When people think of AI, some see the technology as a potential menace, a likely threat to humanity at large. Part of the reason is that science fiction often (although not always) depicts AI as a sentient entity that may harbor harmful intent toward its creators. Some movies offer excellent insight into the future of the technology, but many blow things out of proportion. In fact, most of the AI machines and robots you see in sci-fi films are utterly unrealistic – at least compared to the real-world technology we have today – and only a few get it right. Let’s take some movies as examples and see whether their ideas of AI are half-baked or reasonably grounded.

Chappie (2015)


The robot police officer comes into the world with only a basic understanding of his surroundings, yet he is programmed to learn through experience. That much is accurate: a machine-learning system needs to be trained before it can organize and interpret data. But the film gets two things terribly wrong. First, the programming is done by a single person, when in reality it would take a huge team of AI developers many years to build anything that even remotely resembles Chappie. Second, the idea of transferring human consciousness onto a computer chip is pure fiction.

Blade Runner (1982)


The classic sci-fi film depicts a future where genetic engineering can create humanoid robots physically indistinguishable from humans. Problems arise when the film suggests that the resemblance extends to the psychological as well as the physical. According to Blade Runner, it is possible to give an AI consciousness by implanting memories. Real science begs to differ: nothing in genetics or neuroscience suggests that complex memories can actually be implanted.

A.I. Artificial Intelligence (2001)

There isn’t really anything wrong with David, a robot child programmed to love and to need to be loved. Everything he does to achieve that follows directly from the program. It all holds up until you realize that the humanoid AI is created by a small team of scientists in just 18 months. Making the development project even more unrealistic, David comes with a simple switch that turns on his consciousness. And then there’s the unlikely sight of David being accepted into society as if he were an ordinary child.

I, Robot (2004)

What makes “I, Robot” great is the way it directly addresses Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by humans, unless those orders conflict with the First Law.
  3. A robot must protect its own existence as long as doing so does not conflict with the First or Second Law.

A surprise comes when VIKI, the supercomputer, derives a Zeroth Law of its own, which essentially overrides the original programming and alters the robots’ behavior. In reality, robots cannot develop a new agenda by rewriting their own underlying code.

2001: A Space Odyssey (1968)

HAL 9000 is arguably the best depiction of AI in sci-fi movies. It can analyze data, recognize speech, and respond verbally to spoken queries. The AI speaks like a normal human, but it is not in humanoid form; it sits in one fixed position yet is connected to every system of the spaceship. During an investigative space mission, the crew of Discovery One come to realize that their onboard AI, known as HAL 9000, intends to take over the mission because it believes the humans cannot get the job done.

HAL 9000 represents what an AI should be, at least for now. The film frames HAL as the villain, but there is no exaggeration about what it can and cannot do. In fact, everything it does can be read as a technical, sophisticated, predictable way of solving problems and ensuring that the mission always sits at the top of the priority list – even above the safety of the crew.


Much – though not nearly enough – has been written about AI, first in literary works and now in TV shows and movies. But of course, you should never treat filmmakers and authors as authorities on real AI technology. Computer engineering and robotics still have a long way to go before they reach a point where Chappie and Ava are technically plausible. Yes, the visual effects used to depict humanoid AI have become much better than they used to be, in the sense that robots appear realistic on screen, but that doesn’t mean the AI ideas themselves are getting more accurate.

Other Things You Might Want to Know

How is HAL 9000 accurate?

The most important thing is that HAL 9000 never strays from its original programming. Attentive viewers know that all of its nefarious actions are simply part of the system’s objective to ensure the success of the mission. HAL has neither emotions nor a survival instinct that could turn it into some sort of villain; it carries out every seemingly sinister act against the humans because it does not want them to jeopardize the mission. HAL 9000 is a fine example of how even sci-fi AI doesn’t need to entangle itself with emotions, consciousness, desires, or feelings of any sort to occupy a central spot in the story. Well-designed programming is enough for the job.

What is Zeroth Law?

It extends Isaac Asimov’s Three Laws of Robotics. The Zeroth Law adds two major points:

  • The fate of humanity is more important than the fate of any single human being.
  • As long as it serves the long-term interest of humanity, a robot may override the other laws whenever necessary.

And of course, when long-term interest is the priority, certain actions can have terrible short-term effects, which can make robots or artificial intelligence appear villainous. For example, if AI robots shut down all industrial processes to stop greenhouse gas emissions, an economic disaster is inevitable.

Where does all of this leave Bicentennial Man?

Andrew isn’t like most AI robots. Evil AI wants to enslave or kill humans, whereas good AI is happy to serve people. It makes little sense that a robot as sophisticated as Andrew would want to be recognized as a person. The desire lies outside his original programming, and it doesn’t seem to do him any good anyway. Being able to develop emotions and think like an actual human, without shortcomings like aging and vulnerability to disease, should be more than enough.
