Should we worry about artificial intelligence or stupidity?

In 1997, the world chess champion Garry Kasparov was beaten by IBM’s Deep Blue in their second match, and ever since, the writing has been on the wall for humanity. Many believe that advances in artificial intelligence (AI) will lead to the development of superintelligent, sentient machines. Sci-fi films (The Matrix, The Terminator) have made us fearful of such developments.

AI, however, has a couple of definitions. Under the first, a marketing term, anything that seems clever – Alexa turning on a radio station, say – is said to be AI. The second, from which the first borrows its magic, points to a future that does not yet exist: the world of artificial general intelligence (AGI), leading to machines with superhuman general intelligence.

So how does one lead to the other? Current AI relies on machine learning (or deep learning): rather than having rules programmed into it directly, the machine is left to learn things for itself from examples.

Machine learning works by training the machine on vast quantities of data – pictures for image-recognition systems, or terabytes of prose taken from the internet for bots that generate semi-plausible essays. But datasets are not simply neutral repositories of information; they often encode human biases in unforeseen ways. Recently, Facebook’s news feed algorithm asked users who saw a news video featuring black people if they wanted to “keep seeing videos about primates”. So-called AI is already being used in several US states to predict whether candidates for parole will reoffend, with critics claiming that the data the algorithms are trained on reflects historical bias in policing.
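To make the contrast with rule-based programming concrete, here is a minimal sketch – an illustration of the general idea, not something from the article – using the open-source scikit-learn library. No rule describing what any handwritten digit looks like is ever written down; a classifier is simply fitted to labelled examples and then tested on images it has never seen.

```python
# Minimal sketch: learning from labelled examples rather than hand-coded rules.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small built-in set of 8x8 handwritten digit images

# Hold back a quarter of the examples so we can test on data the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple classifier; no rules about digits anywhere
model.fit(X_train, y_train)                # "training": fitting parameters to the labelled examples

print("accuracy on unseen digits:", model.score(X_test, y_test))
```

The same basic recipe – gather labelled data, fit a model, check it on held-out examples – scales up to the image-recognition and text-generation systems mentioned above, which is also why biases baked into the training data resurface in the model’s behaviour.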

Computerised systems (as in aircraft autopilots) can be a boon to humans, so the flaws of existing AI aren’t in themselves arguments against the principle of designing intelligent systems to help us in fields such as medical diagnosis. The more challenging sociological problem is that adoption of algorithm-driven judgments is a tempting means of passing the buck, so that no blame attaches to the humans in charge – be they judges, doctors or tech entrepreneurs. Will robots take all the jobs? That very framing passes the buck because the real question is whether managers will fire all the humans.

The existential problem is this: if computers do eventually acquire some kind of god-level self-aware intelligence – something that is explicitly part of DeepMind’s mission statement (“our long-term aim is to solve intelligence” and build an AGI) – will they still be as keen to be of service? If we build something so powerful, we had better be confident it will not turn on us.

For the people seriously concerned about this, the argument goes that since this is a potentially extinction-level problem, we should devote resources now to combating it. The philosopher Nick Bostrom, who heads the Future of Humanity Institute at the University of Oxford, says that humans trying to build AI are “like children playing with a bomb”, and that the prospect of machine sentience is a greater threat to humanity than global warming. 

His 2014 book, Superintelligence, suggests AI might secretly manufacture nerve gas or nanobots to destroy its inferior human makers. Or it might just keep us in a planetary zoo while it gets on with whatever its real business is.

AI wouldn’t have to be actively malicious to cause catastrophe. This is illustrated by Bostrom’s famous “paperclip problem”. Suppose you tell the AI to make paperclips. What could be more boring? Unfortunately, you forgot to tell it when to stop making paperclips. So it turns all the matter on Earth into paperclips, having first disabled its off switch because allowing itself to be turned off would stop it pursuing its noble goal of making paperclips.

That’s an example of the general “problem of control”, the subject of AI pioneer Stuart Russell’s Human Compatible: AI and the Problem of Control, which argues that it is impossible to specify any goal we might give a superintelligent machine fully enough to prevent such disastrous misunderstandings. In Life 3.0: Being Human in the Age of Artificial Intelligence, meanwhile, the physicist Max Tegmark, co-founder of the Future of Life Institute, emphasises the problem of “value alignment” – how to ensure that the machine’s values line up with ours. This too might be an insoluble problem, given that thousands of years of moral philosophy have not been enough for humanity to agree on what “our values” really are.

Other observers, though, remain phlegmatic. In Novacene, the scientist and Gaia theorist James Lovelock argues that humans should simply be joyful if we can usher in intelligent machines as the logical next stage of evolution, and then bow out gracefully once we have rendered ourselves obsolete. In her recent 12 Bytes, Jeanette Winterson is optimistic, supposing that any future AI will be at least “unmotivated by the greed and land-grab, the status-seeking and the violence that characterises Homo sapiens”. As the computer scientist Drew McDermott suggested in a paper as long ago as 1976, perhaps after all we have less to fear from artificial intelligence than from natural stupidity.

The above contains extracts from an article by Steven Poole that appeared in the Guardian.

 
