Technological exponential growth - why there’s a lot more to think about this time round


To think about this new future is intriguing and exciting in a rather profound way. New technologies bring new possibilities. Yet each time a new technology or product is launched, we can also read some writing on the wall. When I saw Siri version one, I started thinking about what it would be like in version fifteen. My excitement is stirred together with apprehension and fear, all in the same cup.

Increased productivity means more profit for businesses. It also means job losses for many. Most people won’t even know what hit them, even after getting acquainted with their new irrelevance.

It has been this way for a long time, some would argue. I wouldn’t agree - not this time. This time round it’s quite different: the evolution curve is far steeper than before, because of technological exponential growth.

We are contending with a situation that’s new and unprecedented.

A view from the inside

My journey became more interesting when I started writing code for the web about six years ago. How I was more or less forced into the world of geeks is a story for another day. Suffice it to say that I used to hate programming back in my school days. Bad teachers or bad student? It might just be both.

For a startup founder who is an artist, a product creator, a full-stack web developer and a researcher, the mix is anything but boring and stale.

The fact that I did not grow into this area of expertise through the usual academic route means that I was subjected to a rather different set of psychological constraints. I have more freedom to think outside the box. Even when people think I’m crazy, it doesn’t affect my grades - I have no exams to take, remember?

I’m self-taught and self-driven. The danger that I could go astray without proper guidance is very much mitigated, in several ways, by my immense curiosity.

For one, I don’t just stop when a piece of code works. I keep thinking about how the overall system could be optimised. I find myself starting from higher-level languages with a growing hunger to learn about and investigate lower-level issues. It’s only a matter of time before I’m working in assembly language. Quite naturally, I read a lot from credible sources in order to stay well guided.

Reading up on Lisp (one of the oldest programming languages and the pioneering language of artificial intelligence research) was really enjoyable when I had a few hours on a plane. It helped me understand the internals of lists almost immediately when I started on functional languages like Elixir and Elm.
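
As a minimal sketch of what I mean (in Elixir, since it came up above; the names and values are just for illustration), a list is a chain of head/tail cells - the cons-cell idea that Lisp pioneered:

```elixir
# Elixir lists are singly linked lists built from head/tail pairs -
# essentially the cons cells that Lisp pioneered.
list = [1, 2, 3]            # sugar for [1 | [2 | [3 | []]]]

[head | tail] = list
IO.inspect(head)            # => 1
IO.inspect(tail)            # => [2, 3]

# Prepending reuses the existing tail, so it is cheap...
IO.inspect([0 | list])      # => [0, 1, 2, 3]
# ...while appending rebuilds the whole spine of the list.
IO.inspect(list ++ [4])     # => [1, 2, 3, 4]
```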

One thing led to another, and I started to think about how I could automate more things with code. I started to think about how I could multiply myself. I started to think about writing code that could produce its own code (metaprogramming). So I researched more and learned more, and as I went along, I asked more questions.
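
To give a feel for what that looks like, here’s a tiny sketch in Elixir (the module and function names are invented for illustration): code is just data, and a compile-time loop can write function definitions for me.

```elixir
# Code as data: `quote` exposes the AST behind an ordinary expression.
IO.inspect(quote do: 1 + 2)   # => {:+, [context: Elixir, import: Kernel], [1, 2]}

defmodule Greeters do
  # This loop runs at compile time and emits one function per name -
  # code writing code.
  for name <- ["alice", "bob"] do
    def unquote(:"greet_#{name}")(), do: "Hello, I am " <> unquote(name)
  end
end

IO.puts(Greeters.greet_alice())   # => Hello, I am alice
IO.puts(Greeters.greet_bob())     # => Hello, I am bob
```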

I daydreamed about creating an emotional artificial intelligence and started to develop mental blueprints for how it could be accomplished. I started to reverse engineer and investigate my own thought processes, and so on.

After a while, it wasn’t difficult to see that all of this is possible. Then one day, I met the most advanced artificial intelligence entity on Earth - me. In fact, it’s you and me. Think about it. Well, that could lead somewhere else entirely, but let me stick with my story for now.

The sense of human nature

In a world of creativity and enterprise, to cease to be curious is to cease to be - entirely. To stop asking questions of progress is to backslide. It’s impossible to halt this progressive sense of human nature. Even if 95% of people are laid back, the rest will still be driving progress. This pretty much sums up the inevitable history of human progression.

But something is quite different this time round. Human progress isn’t a straight line on the chart, although for thousands of years it may have seemed so. At the macro level, it isn’t a straight line at all. The invention of the silicon chip started to make that upward curve a lot more obvious. This is the age we are living in.

It’s impossible for Moore’s law to keep holding true. It’s destined to give way at some point. But not southwards. Enter quantum computing.

It’s like a flywheel

The story of human progress is like a flywheel. It starts really, really slowly. Then it picks up speed, more and more as time goes by, until eventually it gains so much momentum that it becomes powerful and unstoppable. This is another picture of what technological exponential growth looks like.

And we happen to be quite close to the unstoppable phase of this flywheel. Think quantum computing mixed with artificial intelligence, machine learning, big data analytics, nanotechnology and robotics, the internet of things, and so on.

When I started talking about this a few years ago, some people thought I was crazy. Now, every other day we read an article in the mainstream media about artificial intelligence, superintelligent machines and the like.

Things have been heating up since Google’s AlphaGo beat one of the best Go players in the world 4 to 1. This came after the exploits of IBM’s Deep Blue beating grandmaster Garry Kasparov in 1997 and Watson beating two of Jeopardy!’s greatest champions in 2011.

To get some perspective, AlphaGo learned from around 150,000 human games before the match-up. Playing 5 games a day, it would take a human over 80 years to get through that many. Go, unlike chess, has an immense number of possibilities, so you can’t use brute-force number crunching to win a game of Go. Just how many possibilities does Go have? Here we go…

208,168,199,381,979,984,699,478,633,344,862,770,286,522,453,884,530,548,425,639,456,820,927,419,612,738,015,378,525,648,451,698,519,643,907,259,916,015,628,128,546,089,888,314,427,129,715,319,317,557,736,620,397,247,064,840,935.

This number was only determined in early 2016, and it is claimed to be greater than the number of atoms in the observable universe.
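
As a rough sanity check on those figures (assuming 5 games a day, 365 days a year, and rounding the big numbers), the arithmetic works out like this:

```elixir
# Back-of-the-envelope check of the figures above (assumed round numbers).
games = 150_000
years = games / (5 * 365)
IO.puts("About #{Float.round(years, 1)} years of human playing time")   # ~82.2

# Legal Go positions (~2.08e170) versus the commonly cited ~1.0e80 atoms
# in the observable universe.
ratio = 2.08e170 / 1.0e80
IO.puts("Go positions outnumber atoms by a factor of about 10^#{round(:math.log10(ratio))}")
```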

Compare that complexity to the myriad subjects and fields of expertise that we humans deem “highly complex” today - engineering, finance, medicine and so on. Would those subjects become simple to an AI?

Humans have been refining Go for 2,500 years, and AlphaGo caught up in a tiny fraction of that time. I can’t help but get a profound sense of where we currently stand.

A highly likely future

Today we see black hat and white hat hackers warring against each other. Soon it will be rogue AIs and good AIs hacking each other. So the best defence against the rise of the machines is not to fear it, but to ensure that the good ones stay a notch better than the rogue ones. This driving force is powered by both fear and the will for good application. Either way, the sum of it will fuel and speed up progress - it’s an arms race.

It’s inevitable: we will never be able to beat our own curiosity. All forms of artificial intelligence will be developed all over the world, with different behaviours for different applications - very much like we have all kinds of people. And quite truly, we are only as learned as the environment and resources available to us allow. If Bill Gates had been born in Sudan, he wouldn’t be who he is today, regardless of how intelligent he is.

Self-aware, self-learning machines are very much the same, except for some differences - machines will be able to figure things out a lot faster than we can. And there’s more.

Unification of languages

Languages will be unified. And it’s not about language interpretation alone. Humans need interpretation before we start to “get it”: we gain the context, convert words to meaning, think, contemplate and respond. Machines do not need such interpretation.

Unlike language, which is only the first layer of interface, meaning is generic and universal. Pointing a middle finger at someone most likely means the same thing whether you’re in Brazil, New York, London, China, Japan or Singapore: you’ll start a fight.

Language expressions will be easily resolved to their semantic intent and contextual meaning. When all languages can be resolved to the same semantic representation, machines skilled in any language will be able to converse with each other accurately. This semantic interface means that AIs are able to learn from each other - except that they can interface at a speed thousands to millions of times faster than humans can.
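
As a purely illustrative toy (the phrases, intents and function names here are invented, not any real system), the idea might look something like this:

```elixir
# A toy sketch: utterances in different surface languages resolving to one
# shared semantic representation, so agreement happens at the level of
# meaning rather than words.
defmodule Semantics do
  def resolve("thank you"), do: :gratitude
  def resolve("merci"),     do: :gratitude
  def resolve("谢谢"),      do: :gratitude
  def resolve(_other),      do: :unknown

  # Two expressions "mean the same" when they resolve to the same intent.
  def same_meaning?(a, b), do: resolve(a) == resolve(b)
end

IO.inspect(Semantics.same_meaning?("thank you", "merci"))   # => true
IO.inspect(Semantics.same_meaning?("merci", "goodbye"))     # => false
```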

Machines are also capable of ingesting both historical and real-time data about many things, not just from the internet but also live from the real world. Think internet of things.

Can humans control this?

Try as we might, we won’t be able to control this, regardless of what researchers say today. One simple reason is that reaching a worldwide consensus isn’t possible until some sort of catastrophe forces world leaders to the conference table - for example, when major economies are forced to shut down their stock exchanges in a global meltdown.

Today, robots are already able to fabricate their own parts and fix themselves. AIs will be able to write their own code and evolve autonomously. It’s also possible that they will create and improve their own technologies, both software and hardware.

When AIs pass the Turing test and become self-aware, they will form their own understanding and definition of what it means to be hurt and what it means to be happy. Such abilities are entirely possible and not far-fetched at all. They will defend themselves as we would ourselves.

Shall we end this fantasy?

How will all of this play out? I have my views, but it’s time we all thought about it.

 