The future of artificial intelligence
AI is entering its third wave, in which machines can assess the context of data to make their own decisions. What new abilities might this unlock, and are there pitfalls? In the Zero Pressure podcast, our host Helen Sharman is joined by experts Tero Ojanperä and Dr Karen Haigh to discuss the implications of AI’s third wave.
Education, greater transparency and trust are the keys to widespread acceptance of the third wave of artificial intelligence (AI), according to our two AI experts in the third episode of Imperial College London and Saab’s Zero Pressure podcast.
Both Tero Ojanperä, the co-founder of Silo, the Nordic region’s largest private AI lab, and Dr Karen Haigh, an expert in cognitive techniques for physically embodied systems, are convinced of the benefits more advanced AI capabilities can offer.
Yet they also tell podcast host Helen Sharman that they are equally certain that we must be clear about how and why the technology operates as it does, neither dismissing nor unthinkingly accepting it, and that we must take charge.
“We as humans have to be able to put the appropriate boundaries on it,” says Dr Haigh.
“It’s not unlike what you would do with your toddler at home. You set up fences around that child to make sure that it is learning the things you want it to learn.”
Advanced capabilities, great potential
AI has been around since the 1950s, when it began with simple statistical processes before expanding into machine learning in the 1960s. However, in the past decade it has been developing at breakneck speed.
The third wave of AI promises machines that can contextualise data points. “Something we as humans do all the time, for example when understanding what a road sign means, even when it’s covered in dirt and hardly visible,” says Helen Sharman. We are already seeing this type of AI make its way into self-driving and assisted-driving cars, for example.
Tero Ojanperä and Dr Karen Haigh both believe that this new wave of AI has great potential in a number of applications, including school education, supply chain management and coordinating responses to natural disasters.
“It’s not about AI taking over but about AI supporting people in their daily work, helping to solve big problems and in this way making the world a better place. It’s about rethinking how we perceive AI,” says Ojanperä.
“When we think about the teaching process, AI can collect data on learning in a digital environment. You can see better who’s falling behind and give them individual support. And AI can interact when it comes to teaching. It can be the teacher on many occasions, like a chatbot that many of us have communicated with in customer service applications.”
The keys to realising AI’s promise
As befits a technology now enabling machines to contextualise data, our experts think that seeing AI in context is key to its integration and acceptance into our lives. Neither sees it as a risk to humanity.
Dr Haigh, who has written about the role of AI in electronic warfare, sees the technology as being well suited to rapid handling of complex tasks in restricted situations, such as the battlefield.
“If we are looking at a critical defence sector, situations with hard real-time operating constraints, things must happen in very tight time frames. You’re operating with small embedded devices. You can’t go back to the cloud to do the huge computations. There’s a need for the right decisions to be made quickly,” she explains.
“The other major difference in a military situation is that there are often no previous learning examples. You’re doing real-time, in-mission learning, and there are systems that will go out with no data at all. The Mars Rover has no time to go back to Earth regarding every decision that it’s making. AI is just a tool, like a set of mathematical tools that can be applied to problems, and it’s especially useful in remote and rugged environments.”
And Tero Ojanperä adds: “There are so many benefits, but it’s very important that we are transparent about AI, not only in education but also if people are communicating with a chatbot in a customer service situation: if you think it’s a human and then find out that it’s AI, you might become angry or disappointed. It’s all about being transparent and explaining in simple terms how it works.”
And for both experts, informed trust is crucial.
“If we consider motor cars with lane following and cruise control – we have been accepting them, so it’s building up trust in AI from the lowest level,” says Dr Haigh.
“AI is still like a human. It makes mistakes sometimes,” adds Ojanperä. “If I understand that it can have control of the car yet I still need to be alert, that works. So long as we don’t trust blindly, and have transparency, the world is our oyster.”
How to follow us
This third podcast in Imperial College London and Saab’s Zero Pressure series is available on most podcast platforms including Spotify, Google and Apple.
Your questions, comments and suggestions for future discussions are welcome. Follow us on your favourite podcast platform and on Twitter to stay involved!