Oleg dragged his bone sickle through the arid dirt, and when he reached the end of the field he turned back and poked holes with a stick in the track he had made. His mother then followed behind him, dropping seeds into the holes. Oleg was five and had already been doing this for two years of his young life.
It was 12,000 BC, and his family, instead of constantly travelling and gathering wild food, had decided to settle in one place and grow their own.
Generations later, Oleg’s descendants plowed the fields using oxen and a moldboard plow, which turned the earth over as it went, then broke up the clods with a harrow. The use of irrigation and the invention of water mills meant families could make their smallholdings productive even in the most arid of places. This was the Middle Ages.
In the 1800s (AD), the invention of the internal combustion engine made it possible to swap draught animals for gas-powered tractors and harvesters.
Oleg’s great-great-great-great-great-grandson looked around his farm, which stretched as far as the eye could see in every direction, and thought to himself: “It doesn’t get better than this.”
Technology is always at its most advanced.
Today, most of us don’t have to concern ourselves with farming or where our food comes from, but have the time, instead, to worry about other things, perhaps about the threat from artificial intelligence (AI).
The intelligence of machines has been a sci-fi trope for decades and, as the novelist William Gibson wrote: “The future is already here; it’s just not very evenly distributed.”
There is much discourse on the danger posed by ‘superintelligent’ machines, even though they have always been 20 to 50 years away since we first started worrying about them.
However, lower-level but already very powerful AI is here and playing an ever-increasing role in our economy, politics and society. This combination of big data and machine learning is everywhere, and it is controlled by a handful of newly powerful companies and national security agencies.
Google and the other digital corporations go weak at the knees over ‘AI everywhere’ and preach it at every opportunity. The sheer youth and untested nature of these companies mean we must, legitimately, raise questions and concerns over their short- and long-term intentions.
Is what they are doing legal and ethical? Is it good for society? And what will happen when it all ends up in the hands of people who are even worse than these rapacious billionaires and their acolytes?
Some scholars have started asking the awkward questions. AI Now, a research institute at New York University, looks at the social implications of AI. And a 2018 report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, written by 26 researchers from 14 institutions, including Oxford’s Future of Humanity Institute, Cambridge’s Centre for the Study of Existential Risk and OpenAI, is tellingly titled.
It considers three key areas that will prove problematic: digital security, physical security and political security.
First, digital security: the use of AI to automate tasks involved in carrying out cyber-attacks will alleviate the existing trade-off between the scale and efficacy of attacks. We can also expect attacks that exploit human vulnerabilities (eg. the use of speech synthesis for impersonation), existing software vulnerabilities (through automated hacking) or the vulnerabilities of legitimate AI systems (through corruption of the data streams on which machine learning depends).
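The report itself contains no code, but the last of those vulnerabilities, corrupting the data a model learns from, is easy to sketch. The toy example below is invented for illustration (it assumes Python with the numpy and scikit-learn libraries, neither of which the report mentions): an attacker who can flip a fraction of the training labels quietly degrades a classifier without touching the model at all.

```python
# Toy illustration (not from the report): "corruption of the data streams
# on which machine learning depends". An attacker flips a fraction of the
# training labels; the model trains normally and degrades quietly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic two-class dataset standing in for the "data stream".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on a copy of the data with `flip_fraction` of labels flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"labels flipped: {frac:4.0%}   test accuracy: "
          f"{accuracy_after_poisoning(frac):.3f}")
```

Nothing about the poisoned model looks broken from the outside; it simply gets more things wrong, which is precisely what makes this class of attack hard to detect.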
Second, physical security: attacks by drones and autonomous weapons systems. (Like the hobbyist drones that Isis deployed, but this time with face-recognition technology on board.) We can also expect new kinds of attacks that subvert physical systems – causing autonomous vehicles to crash, for example – or attacks deploying physical systems that would be impossible to control remotely: a thousand-strong swarm of micro-drones, say.
Third, political security: using AI to automate tasks involved in surveillance, persuasion (creating targeted propaganda) and deception (eg. manipulating videos). We can also expect new kinds of attack based on machine learning’s capability to infer human behaviours, moods and beliefs from available data. This technology will obviously be welcomed by authoritarian states, but it will also further undermine the ability of democracies to sustain truthful public debates. The bots and fake Facebook accounts that currently pollute our public sphere will look awfully amateurish in a couple of years.
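Again as an invented illustration rather than anything from the report: inferring a person’s mood from what they post requires nothing more exotic than a bag-of-words classifier. The tiny corpus and labels below are made up for the demo (Python with scikit-learn assumed); real systems do the same thing with vastly more data and far richer signals.

```python
# Toy illustration (not from the report): inferring a "mood" from a user's
# posts with a bag-of-words classifier. The corpus is invented for the demo.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "what a great day, everything is going well",
    "so happy with how this turned out",
    "loving the new job, feeling optimistic",
    "everything is terrible and nothing works",
    "fed up, angry and exhausted by all of it",
    "another miserable day, I give up",
]
moods = ["positive", "positive", "positive",
         "negative", "negative", "negative"]

# Word counts feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, moods)

# Scoring unseen posts, as a profiler (or advertiser) would at scale:
for post in ["feeling great about tomorrow",
             "fed up and exhausted by everything"]:
    print(post, "->", model.predict([post])[0])
```

Scaled up across millions of accounts, the same trick yields the kind of behavioural profiles that targeted propaganda depends on.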
The report is available as a free download and is worth reading in full. If it were about the dangers of future or speculative technologies, it might be reasonable to dismiss it as academic scaremongering. The alarming thing is that most of the problematic capabilities its authors envisage are already available, and in many cases are embedded in the networked services we use every day. William Gibson was right: the future has already arrived.