Towards Doomsday or Utopia? : A slightly biased walk through AI


When I was a kid I was fascinated by K.I.T.T. (Knight Industries Two Thousand), the amazing futuristic car with a built-in AI from the TV show Knight Rider. The talking car with emotions and self-awareness was so amazing (and still is). Fast forward a few decades and here we are, with self-driving cars from Tesla and other automobile manufacturers as well as from IT giants, talking AI assistants such as Siri and Alexa, and even some promising emotionally intelligent agents. Although we are not there yet, the concept of K.I.T.T. might not be a fantasy after all.

It's no secret that we are moving at an accelerating rate towards Artificial General Intelligence (AGI). Is it good or is it bad? That's the million-dollar question. Although it's easy to end a discussion by taking a stance for or against AI, this question remains worthwhile to dig a little deeper into.

Back in 2005, when a Google engineer said the following, it raised more than a few concerns.

We are not scanning all those books to be read by people, we are scanning them to be read by an AI.

Google’s co-founder Larry Page said the following, and similar things throughout the past decades, giving us a glimpse of the direction the world was heading.

Google will fulfill its mission only when its search engine is AI-complete. You guys know what that means? That’s artificial intelligence. – (May 2002)

That was almost two decades ago. Imagine how many technological advancements we have had since then in the machine learning and artificial intelligence fields alone. Robotics has had its own rapid growth as well. Are we building our own doomsday, or are we headed for a utopia? Whatever it is, it is almost certain that Strong AI (maybe not the version you and I imagine) is inevitable.

So it's wise to have at least a fair amount of knowledge of the field of AI, even if you are not a scientist or an engineer. We are already surrounded by AI agents. Surprised? Well, yes — though it is true that we are nowhere near a Strong AI system, we use Weak AI systems all the time: from video games to self-driving cars, from weather forecasting to stock market trading, from transportation to medicine, AI is everywhere.

No need to be alarmed (not yet). When we consider the expectations of an AI — in other words, the tasks an AI should perform — we get an idea of where the world currently is on the AI road map.

Artificial intelligence tasks are divided mainly into three categories:

  • Mundane Tasks: Perception (Vision and Speech), Natural Language (Understanding, Generation, Translation), Common Sense Reasoning
  • Formal Tasks: Mathematical Proofs, Playing Games
  • Expert Tasks: Engineering, Medical Diagnosis, Analysis

At first glance it seems that, since we have yet to get the mundane tasks working in AI, we are far, far away from the expert tasks. But the irony is that we have already made significant advances in accurate AI systems that perform expert tasks better than humans in certain domains, even though we are still baffled by how to get the mundane tasks working. (Current advances in deep learning for vision and natural language processing have taken us closer to teaching AI systems to perform better in certain mundane tasks as well.)

Knowledge-based expert systems have been in use for decades now, and many day-to-day systems rely on such AI without us even knowing. The reason expert tasks are relatively easy to implement in AI is that, as tasks get more specific, it becomes much easier for a machine to be taught (or to learn) than it is to learn common sense.
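To make the idea concrete, here is a minimal sketch of how a classic knowledge-based expert system works: a set of facts plus if-then rules, applied by forward chaining until no new conclusions appear. The rules and symptoms below are made-up illustrations, not real medical knowledge or any particular system's rule base.

```python
# Toy forward-chaining expert system: facts + if-then rules.
# Rule contents are hypothetical, for illustration only.

RULES = [
    # (set of conditions that must all hold, conclusion to add)
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
    ({"rash"}, "refer_to_dermatologist"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are satisfied, adding its
    conclusion as a new fact, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough", "fatigue"}, RULES)))
# → ['cough', 'fatigue', 'fever', 'possible_flu', 'recommend_rest']
```

Notice how narrow the system is: within its rules it chains conclusions reliably, but it knows nothing at all outside them — which is exactly why expert tasks came to AI before common sense did.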

We humans, on the other hand, are very good at picking up common sense, but it is harder for us to become experts in a field. The difficulty of giving a machine the ability of common sense was strongly impressed on me a few years ago while listening to a lecture by Prof. Patrick Winston. He gave the following example.

“Imagine a man who is walking with a water bucket in his left hand.”

Now we, with our common sense, know a lot of things about this situation. But unless we specifically tell the computer, it will never guess that ‘the man will be slightly bent towards his left side’.

Going further with the example, he also showed what would happen if the man were running. By common sense we know that the water in the bucket would probably spill, his trousers and feet might get a bit wet, and so on. That is our common sense. A computer program does not have that ability.

Yes, we can give it a huge knowledge base of past events; then it will know, from a past video or the like, that if a man is running with a bucket of water, the water will spill. But we humans have an amazing ability to generalize and apply knowledge gained through different experiences, creating one of the greatest tools in our arsenal: common sense. When we see a body of water, we have the common sense to know it is okay to jump into the swimming pool but not the water well, to wash our hands with tap water but not water from the gutter, and to drink from the water filter but not the dog's bowl. Even though it is water every time, we have the common sense to figure out the right thing to do (at least most of the time).
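The contrast between memorized knowledge and generalization can be sketched in a few lines. This toy "knowledge base" (the situations in it are hypothetical) answers only situations it has literally seen before; anything slightly different draws a blank, where a human would generalize effortlessly.

```python
# Memorized knowledge with no generalization: a lookup table of
# previously seen situations (entries are made up for illustration).

KNOWN_SITUATIONS = {
    ("walking", "bucket of water"): "man leans slightly to one side",
    ("running", "bucket of water"): "water spills, trousers get wet",
}

def predict(action, obj):
    # An unseen (action, object) pair yields no answer at all.
    return KNOWN_SITUATIONS.get((action, obj), "no idea")

print(predict("running", "bucket of water"))
# → water spills, trousers get wet
print(predict("running", "bucket of sand"))
# → no idea  (a human would immediately generalize from the water case)
```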

This doesn’t in any way mean that AI is not growing. Twenty years ago, when Deep Blue defeated Garry Kasparov, it was a huge deal. Even so, the AI community didn’t expect to see the likes of AlphaGo Zero or OpenAI Five until well after the 2040s. But here we are, being beaten by our own creations.

At the rate AI research is growing, it’s not easy to give a timeline for the emergence of AGI, and it would also be unwise not to be concerned about the power of AI. For me it’s not about stopping and saying no to the development of AI (we can’t, and I don’t want to 🙂). For me it’s about finding ways to make an AI benevolent. When a baby is born, we don’t know whether he or she is going to grow up to be a normal person or a sociopath (I am not talking about psychopaths). But somehow almost all of us grow up to be normal. How? Guidance? Rules? Culture? Beliefs? Yes, probably, among other things. So shouldn’t we pay more attention to understanding how we start from a clean slate and grow up to be more or less benevolent — and somehow guide our AGI through the same processes?

I personally don’t like it when people say, “Hey, AI is just a tool, like fire. Fire is neither good nor bad; it’s how we use it. So we don’t need to take it seriously.” Well, for starters, even with a wildfire we can let it burn out or do something to stop it. But if we lose an AGI into the internet, there is nothing we can do (well, except shutting down the entire internet, which is not possible — and even if we did, shutting the internet down would bring humanity to its knees anyway).

Yes, an evil (or even a purely logical or purely emotional) AGI would be bad for us. On the other hand, if we can find a way to somehow nurture an AGI (whenever we build it), then many of the problems of humanity could be solved. A cure for cancer? Done. A lawful society? Done. No work? Done. The positive changes the world and humanity would enjoy would truly be endless.