Even the most cursory look at the subject of AI reveals major developments, with even bigger ones planned for the future.
In short, AI seems to be on everyone’s minds and everyone’s lips.
But, is AI really living up to the amount of attention it is getting?
Current Applications and Immense Potential
Ignoring the many applications of AI that we experience on a daily basis would be luddism, to say the least. Now that advances in computing power have finally made the practical application of deep learning possible (more on this later), companies and organizations around the world have found innumerable practical uses for it.
For years, Uber has depended on deep learning to predict the ETAs of its rides, to set pricing and to help customers wait in the right spot. In short, Uber depends on deep learning to be the company that it is. In fact, back in 2016, Uber’s head of machine learning, Danny Lange, said that Uber wouldn’t be able to function as a company without it. Uber’s Chinese counterpart, DiDi Chuxing, recently announced it is going a step further, helping to develop an AI-powered city traffic management system.
Two of the world’s largest ecommerce platforms, Amazon and Alibaba, are also powered by some of the most advanced AI algorithms in use today, both in obvious areas such as product recommendations and in under-the-hood applications. Another two giants, Google and Facebook, are among the most prominent AI pioneers, basing a great deal of their operations on deep learning. Robotic process automation (RPA) and conversational AI are also inching closer to one another, and it will be no surprise when AI becomes an integral part of RPA.
Of course, not all use of AI is strictly commercial. For example, in Singapore, the JTC agency has started using AI-driven analytics to better manage its buildings, reducing energy costs by a significant amount. Considering that one-third of Singapore’s electricity is spent by buildings, this is an important development.
Most recently, Babylon Health presented its AI doctor at the Royal College of Physicians in London; it scored 82% on a standard MRCGP exam, comfortably above the pass mark. It is just the latest in a line of increasingly impressive AI “physicians.”
Reliance on Deep Learning
Taking all of the above into consideration, one would think that the field is experiencing a veritable revolution with no end in sight. However, some voices argue that this is not actually the case: that AI has entered a plateau and stopped making significant leaps.
One of those voices is Geoffrey Hinton.
To understand why he is someone worth listening to, we first have to understand that the vast majority of what happens in AI today revolves around deep learning.
Namely, apart from some rare exceptions, most AI applications in use today fall into the category of deep learning, which is in turn based on backpropagation. Backpropagation is a method for training neural networks to produce more accurate outputs: the network's error is propagated backward from the output, layer by layer, and each weight is adjusted to reduce it, ultimately resulting in self-organized hierarchical layers that each handle some aspect of the task at hand. This self-organizing quality of deep neural networks, their ability to learn representations of ideas on their own, is what makes them “AI” in the sense we use the word today.
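The training loop described above can be sketched in a few lines of NumPy. This is a minimal, purely illustrative example, a tiny two-layer network learning XOR by gradient descent; every name, layer size and hyperparameter here is hypothetical and not drawn from any system mentioned in this article.

```python
import numpy as np

# Toy illustration of backpropagation: a two-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
loss_history = []
for _ in range(5000):
    # Forward pass: compute the network's current prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss_history.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the output error back through the
    # layers via the chain rule. This step is backpropagation.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates on every weight and bias.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)
```

Nothing in the loop tells the hidden layer what to represent; the error signal alone shapes it, which is the "self-organizing" quality described above.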
The aforementioned Mr. Hinton was the one who worked out how to use backpropagation to train deep neural networks. Remarkably, he did this back in 1986; it is only the recent proliferation of GPU-based computing that has made the technique practical at scale.
In one of the best articles on AI of the last couple of years, Hinton himself says that most of the work in the field revolves around variations on the same core ideas. The article makes more than a few great points beyond Hinton's, noting how, for all the excitement and seemingly major leaps, the AI field today is more tinkering and engineering than science.
There are some exciting developments in other areas of AI, such as the Feynman Machine, but it is crucial to understand that the vast majority of AI news involves deep learning, which has been around for decades and where, as another of the world’s preeminent machine learning experts, Michael Jordan, puts it, talk of a “revolution” is overwrought.
Inexplicability of AI Decisions
In addition to some “minor” issues associated with deep learning-based AI, such as its need for extensive training, its reliance on expensive and less-than-perfect big data, and its difficulty with complex input, the field faces another, far larger problem that limits where AI can be applied: the inexplicability of its decisions.
Namely, there is no simple way to determine how an AI system comes to a decision. We can understand how it was built and trained, but the decision-making process it develops cannot be understood or explained.
This issue has been keeping many an AI researcher awake at night at least since the 1990s, but it has only started garnering the attention of the general public relatively recently. In fact, the EU’s GDPR legislation includes provisions implying that EU citizens have a right to an explanation of any decision made by artificial intelligence. While the language of the legislation is vague (for a number of possible reasons), it is safe to say that legislative bodies have also gotten wind of this issue.
The ever-growing number of questions on this issue is perfectly understandable. If we are to rely on AI-made decisions in areas as critical as healthcare, military or transportation, shouldn’t we be able to obtain the rationale for those decisions?
Research into how AI systems reach their decisions is ongoing, but it will most likely be a long time before the decision-making process can be explained well enough for AI to be trusted with certain critical matters.
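One simple family of probes used in this research is permutation importance: treat the model as an opaque function and measure how much its accuracy drops when each input feature is scrambled. The sketch below is purely illustrative; the "black box" is a stand-in function with known structure, not a real trained model, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
# Ground truth depends only on features 0 and 1; feature 2 is pure noise.
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

def black_box(data):
    # Stand-in for an opaque trained model we cannot inspect directly.
    return (data[:, 0] + 2 * data[:, 1] > 0).astype(int)

baseline = np.mean(black_box(X) == y)  # accuracy on intact data

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
    importances.append(baseline - np.mean(black_box(Xp) == y))

# Shuffling an informative feature hurts accuracy; shuffling the
# noise feature leaves it unchanged.
```

Note that this only ranks which inputs matter; it does not explain *why* the model maps them to a given output, which is exactly the harder problem described above.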
This is no small problem.
The AI field is certainly an exciting one, with many applications already affecting our lives and many more on the horizon. That said, a great deal of the hype surrounding it is undeserved, ignoring as it does the field's current limitations and unresolved issues.