Artificial intelligence is a potentially world-changing technology. It could help cure cancers, control autonomous cars, and augment human intelligence. Or it could lead to a robot apocalypse and the downfall of humanity. It depends on who you ask.
Artificial intelligence, or AI, simply means software that lets computers mimic aspects of human intelligence. For example, a program that recommends what you should read based on books you’ve bought, or a robot vacuum that has a basic grasp of the world around it.
So why all the fuss? In the last decade, a particular flavour of AI called machine learning has become extremely powerful. The technique is behind everything from DeepMind’s world-champion Go-playing AIs and Google Translate to face recognition algorithms and digital assistants such as Amazon’s Alexa.
Rather than programmers giving machine learning AIs a definitive list of instructions on how to complete a task, the AIs have to learn how to do the task themselves. There are many ways to attempt this, but the most popular approach involves software called a neural network that is trained by example.
Neural networks
A neural network is a large web of connections, inspired by the way neurons connect in the brain. Inputs work their way through the network, guided by the strength of the connections, to find the appropriate output.
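To make that concrete, here is a purely illustrative sketch in Python of one input passing through a tiny two-layer network. The layer sizes, the random weights and the use of the numpy library are assumptions for demonstration, not taken from any real system.

```python
import numpy as np

# A tiny two-layer network: 3 inputs -> 4 hidden units -> 2 outputs.
# The connection strengths (weights) are random here; training adjusts them.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 4))  # input-to-hidden connections
w2 = rng.normal(size=(4, 2))  # hidden-to-output connections

def forward(inputs):
    # Each layer weights its inputs and applies a squashing function,
    # so stronger connections carry more of the signal forward.
    hidden = np.tanh(inputs @ w1)
    return np.tanh(hidden @ w2)

print(forward(np.array([0.5, -0.2, 0.8])))  # two output values
```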
In the case of a robot vacuum, the inputs could be all of the various measurements from its sensors, and the output could be how it decides to move. To train the vacuum, it could be shown thousands of examples of humans vacuuming rooms, along with the corresponding sensor readings. By strengthening the relevant connections, the network would eventually learn which inputs correspond to which actions, so that the vacuum can clean a room by itself.
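A minimal sketch of that training-by-example loop, with made-up numbers standing in for sensor readings and actions; the data, network size and learning rate are all illustrative assumptions:

```python
import numpy as np

# Made-up training examples: two sensor readings paired with the action a
# human took (1 = move, 0 = stay). All numbers are illustrative assumptions.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [1.0]])  # move whenever any sensor fires

rng = np.random.default_rng(1)
w = rng.normal(size=(2, 1))  # connection strengths, initially random
b = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    pred = sigmoid(X @ w + b)  # the network's current guesses
    error = pred - y           # how far off each guess is
    grad = pred * (1 - pred) * error
    # Strengthen or weaken each connection in proportion to its share of the error.
    w -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

print(sigmoid(X @ w + b).round(2))  # now close to the human's actions
```

After a few thousand passes over the examples, the network’s guesses match the actions it was shown, which is the whole trick: no one wrote a rule, the connections simply shifted until the right outputs fell out.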
Neural networks have been around since the 1940s and 1950s, but only recently have they started to have much success. The change of fortunes is due to the huge rise in both the amount of data we produce and the amount of computer power available.
The AIs require anywhere from thousands to millions of examples to learn how to do something. But millions of videos, audio clips, articles, photos and more are now uploaded to the internet every minute, making it much easier to get hold of suitable data sets – especially if you are a researcher at one of the large technology companies holding people’s files.
Processing these data sets and training AIs with them is a power-hungry task, but processing power has roughly doubled every two years since the 1970s, meaning modern supercomputers are up to the task.
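A doubling every two years compounds quickly. As a rough back-of-envelope illustration (the exact span of years here is an assumption):

```python
# Doubling every two years compounds fast: 50 years is 25 doublings.
years = 2020 - 1970          # illustrative span, not an exact claim
doublings = years / 2
print(2 ** doublings)        # roughly 33.5 million times more power
```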
Artificial general intelligence
However, neural networks can’t do everything. A neural network trained to do one thing is next to useless at doing something else. For example, ask Libratus, an AI that is the world’s best heads-up no-limit Texas hold ’em poker player, which country Las Vegas is in and it wouldn’t even be able to process the question.
There are ambitions to create AIs with a broader range of abilities, known as artificial general intelligence (AGI), which could perform any task that the human brain can. But we’re a long way from AGI at the moment.
Because of the limitations of neural networks, some argue a completely different approach may be needed to reach AGI. That could mean giving neural networks more built-in structure to help them learn more effectively, moving to programs that learn by evolution, or trying to mimic animal brains more closely.
Once we have AGIs, there are worries that they could spell the end for humanity. An AGI could become so smart that it iteratively improves its own intelligence until it far surpasses human intelligence. Such superintelligences could have motivations of their own, and keeping humans around may not be one of them.
AI problems
There’s no need to worry just yet, though. We’re still a long way from reaching AGI, and there are more pressing concerns regarding current AIs.
AI is already involved in making important decisions, such as someone’s eligibility for parole in parts of the US. It is also increasingly assessing people’s suitability for a job, loan, or insurance product.
Yet AI is often biased, meaning its recommendations can be too. Time after time, researchers have found that neural networks pick up biases from the data sets they are trained on. For example, face recognition algorithms have much lower accuracy for anyone who is not a white man – a symptom of the lack of diversity in the data sets they are trained on.
AI decisions are also opaque. How neural networks come to conclusions is very hard to analyse, meaning if they make a crucial mistake, such as missing cancer in an image, it’s very difficult to find out why they made the mistake or to hold them accountable. This could slow the progress of AI in applications where trust and safety are particularly important. Timothy Revell