Some more AI

Artificial intelligence (AI) is a big buzzword right now. But I think it is often misunderstood, a bit like black holes or time travel. Although those probably have little impact on us at present, AI (or what is termed AI) is having a sizeable influence on most of us.

If we define AI as ‘machine learning’, we can see that the tech giants that own and control the technology have plans to exponentially increase its impact and to that end have crafted a distinctive narrative. It goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. And its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust. The truly extraordinary thing, therefore, is how many apparently sane people seem to take the narrative as a credible version of humanity’s future.

Chief among them is our own endearingly stable and competent prime minister, Mrs May, who has identified AI as a major growth area for both British industry and healthcare. She is not alone.

Why do people believe so much nonsense about AI? The obvious answer is that they are influenced by what they see, hear and read in mainstream media. But until now that was just an anecdotal conjecture. The good news is that we now have some empirical support for it, in the shape of an investigation by the Reuters Institute for the Study of Journalism at Oxford University into how UK media cover artificial intelligence.

The researchers conducted a systematic examination of 760 articles published in the first eight months of 2018 by six mainstream UK news outlets, chosen to represent a variety of political leanings – the Telegraph, Mail Online (and the Daily Mail), the Guardian, HuffPost, the BBC and the UK edition of Wired magazine. The main conclusion of the study is that media coverage of AI is dominated by the industry itself. Nearly 60% of articles were focused on new products, announcements and initiatives supposedly involving AI; a third were based on industry sources; and 12% explicitly mentioned Elon Musk, the would-be colonist of Mars.

Critically, AI products were often portrayed as relevant and competent solutions to a range of public problems. Journalists rarely questioned whether AI was likely to be the best answer to these problems, nor did they acknowledge debates about the technology’s public effects.

“By amplifying industry’s self-interested claims about AI,” said one of the researchers, “media coverage presents AI as a solution to a range of problems that will disrupt nearly all areas of our lives, often without acknowledging ongoing debates concerning AI’s potential effects. In this way, coverage also positions AI mostly as a private commercial concern and undercuts the role and potential of public action in addressing this emerging public issue.”

This research reveals why so many people seem oblivious to, or complacent about, the challenges that AI technology poses to fundamental rights and the rule of law. The tech industry narrative is explicitly designed to make sure that societies don’t twig this until it’s too late to do anything about it. The Oxford research suggests that the strategy is succeeding and that mainstream journalism is unwittingly aiding and abetting it.

Another plank in the industry’s strategy is to pretend that all the important issues raised by AI are ethical ones, and accordingly the companies have banded together to finance numerous initiatives to study those ethical issues, in the hope of currying favour with politicians and potential regulators. This is known in rugby circles as “getting your retaliation in first”, and the result is what can only be described as “ethics theatre”, much like the security theatre that goes on at airports.

Nobody should be taken in by this kind of deception. There are ethical issues in the development and deployment of any technology, but in the end it’s law, not ethics, that should decide what happens, as Paul Nemitz, principal adviser to the European Commission, points out in an article just published by the Royal Society.

Just as architects have to think about building codes when designing a house, he writes, tech companies “will have to think from the outset… about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard… these basic tenets of constitutional democracy”.

So, it is time to stop the “soft” coverage of artificial intelligence and to publish some real, sceptical journalism instead.
