
Artificial intelligence - is the hype cycle over?

For almost 20 years, Big Data, artificial intelligence (AI) and machine learning have been promoted as the way forward, with AI one of four strands in the government's Industrial Strategy. But is the bubble about to burst for these technologies? According to Richard Self, Senior Lecturer in Analytics and Governance at the University of Derby, the answer is 'probably yes'.

By Richard Self - 17 January 2019

AI is lauded as the future, helping us to gain new insights from all the data that surrounds us, replacing jobs and transforming our everyday lives: autonomous vehicles will revolutionise transport, and in offices, repetitive clerical work will be automated. All of this is predicated on the assumption that AI will be successful.

However, few technologies actually succeed and become productive, value-driving business models - over the last 25 years, only about 25-30% of IT projects have delivered on cost, timescales, business value or functionality. The Gartner Hype Cycle describes how new ideas emerge from research labs and, driven by marketing excess, secure funding for further development. Typically, however, once large numbers of people are on board, they start discovering that the technology isn't as good as they initially thought.

With AI, we've already experienced two or three such cycles - in the 1950s and 60s, again in the 70s and 80s, and more recently. Techniques like predictive analytics, machine learning and neural networks are really based on advanced statistics and mathematical insights gained about 30 years ago. The problem is, we haven't had any significant new insights since.

A series of major AI failures

At the end of 2018, something unique happened: five major companies admitted that the hype around their technologies was no longer justified. In my 40-year career, I have never seen such important companies suddenly admit their dream was wrong.

It started with IBM, which had been focusing, for the last five years or more, on cognitive computing - its IBM Watson cognitive environment. IBM partnered with Memorial Sloan Kettering, one of the world's leading cancer research and treatment centres, feeding in huge quantities of data about diagnoses, treatments and protocols for different types of cancer. Working with the hospital's staff, they trained the AI to understand the information and act as an expert advisor, including rating its confidence in the advice it gave.

The technology worked well at Sloan Kettering, and IBM hoped to license it to hospitals around the world, but the dream was short-lived. The AI simply didn't work in hospitals with different protocols and procedures.

IBM's Watson cognitive technology was also behind CIMON, an Airbus-built robot taken up to the International Space Station to interact with the crew. When the crew switched it on, it rapidly became uncooperative, refusing to respond to commands and instead talking about music. It overheard an astronaut discussing the issue and told him to 'be nice'. In the end, it was switched off until its return to Earth.

In early November, IBM announced it was changing direction and going back to its roots - mainframes and operating systems - paying around $34bn for the open-source software company Red Hat to do just that.

In the same seven-day period, Uber and Lyft, which had been actively trying to develop their own autonomous self-driving car technologies, announced they were moving away from driverless taxis to become global transport portals. They also announced they are now buying up companies providing electric bikes and scooters, having lost around 10% of their market in San Francisco to them.

Then, in mid-November, John Krafcik, Chief Executive of Waymo, Google's autonomous car company, made an astounding statement: after decades of work and millions of miles of testing, he does not envision a day when driverless technology will be able to operate safely in all weather conditions and without some sort of 'user interaction' from a human being.

The problem of human bias

There are signs that AI and machine learning analytics are running into some interesting problems. AI is trained by feeding it large amounts of data, labelled so that it can learn what the right answer looks like; it makes a prediction, and we tell it whether that prediction is right or wrong.

However, because it is trained on human data, it can suffer from one of two biases: either the data is biased, or the training is biased. AI learns from the data we humans have provided, or the decisions we have made, so it learns how to be like us - and adopts our biases.
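To make that concrete, here is a minimal, purely hypothetical sketch in Python using scikit-learn - it has nothing to do with any real recruitment system, and the applicants, features and labels are invented. A model trained on historically biased decisions simply learns to reproduce them.

from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [years_of_experience, is_male].
# The historical "hired" labels below are deliberately biased: every male
# applicant was hired and every female applicant was rejected,
# regardless of experience.
X = [[2, 1], [4, 1], [6, 1], [8, 1],
     [2, 0], [4, 0], [6, 0], [8, 0]]
y = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in the gender flag:
print(model.predict([[5, 1], [5, 0]]))  # the model reproduces the bias: [1 0]

The specific algorithm is beside the point: whatever we train, it will faithfully mirror the patterns - including the prejudices - baked into the historical decisions it was given.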

Amazon had been working for several years on an AI analytics tool for its human resources group to pre-process the huge number of job applications it receives, only to find it had become sexist, systematically favouring male applicants. It had learned corporate human biases from the 10,000 job applications it had been given, so in January 2018 Amazon killed it off.

And there are many other examples. Microsoft's Tay, an AI chatbot designed to learn from human interactions on Twitter, became racist and foul-mouthed within 24 hours and was taken offline. Facebook set two chatbots talking to each other, and they quickly invented a brand new language that humans couldn't understand.

Data without common sense

Problems can arise even when the AI has not been affected by human bias. If you use a large enough amount of data, analytics systems will find correlations and patterns - almost all of which are totally spurious. Take the number of tonnes of lemons imported into the USA from Mexico each year and the number of road deaths in the USA: the correlation is almost perfect, but completely meaningless.
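To see how easily a strong but meaningless correlation falls out of the numbers, here is a small illustrative sketch in Python - the figures are invented for illustration, not the real import or fatality statistics.

import numpy as np

# Invented, illustrative figures: two unrelated quantities that both happen
# to drift downwards over the same six-year period.
lemon_imports = np.array([530, 480, 450, 400, 360, 310])      # tonnes per year
road_deaths = np.array([15.9, 15.7, 15.4, 15.3, 14.9, 14.7])  # deaths per 100,000

r = np.corrcoef(lemon_imports, road_deaths)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # close to 1.0, yet entirely meaningless

Any two series that merely trend in the same direction will score this highly; the number says nothing about whether one has anything to do with the other.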

That's why we need human intuition and experience, because AIs are incredibly domain-specific. We don't yet know how to make 'artificial general intelligence' because we don't know how we humans do what we do. There are so many things we don't understand about ourselves, and there have been no fundamental insights in any of these areas for many years.

What's made recent advances possible is Moore's Law - the doubling of compute power, and now data too, every 18 months. Smartphones now have more compute power than the supercomputers of the 1980s. It's still the same old algorithms, only with a huge amount more power.
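As a rough, back-of-the-envelope illustration of what that doubling rate compounds into (the 34-year span below is simply an assumption for the sake of the arithmetic):

years = 34               # e.g. from a mid-1980s supercomputer to a 2019 smartphone
doublings = years / 1.5  # one doubling every 18 months
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly {growth:,.0f} times the compute")

The algorithms have barely changed; the hardware running them has grown by a factor of several million.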

Where next for AI?

Quite likely, we are approaching the end of the current hype cycle for AI and machine learning. It is becoming increasingly clear that AI is beginning to fail, or at least we're finding its limits. The hype has proved to be just that.

So, if we are at the end of the hype cycle, does that change where we should be going? The answer is no. As educators, we need to shift away from teaching predominantly technical skills (although they are still important) and emphasise human skills like curiosity, creativity, collaboration, critical thinking and communication, because we don't know how to make computers do these things. This will allow our graduates to partner with AIs as they develop; the AIs can then become useful as crunchers of data, while humans make the judgements.

If history is anything to go by, we'll see AI go into the doldrums for 10 or 15 years, until someone has a new way of thinking, which may lead to new capabilities that could put jobs seriously at risk.



About the author


Richard Self
Senior Lecturer in Analytics and Governance

Richard is a Senior Lecturer in Governance of Advanced and Emerging Technologies.

Email: r.j.self@derby.ac.uk
View full staff profile