Every week a new discovery is reported on AI. “Google fires engineer who contends its AI was sentient.” “Chess robot grabs and breaks finger of 7-year-old opponent.” “DeepMind’s protein-folding AI cracks biology’s biggest problem.” Policymakers have no idea what to make of AI and it’s hard for the rest of us to sort through all the headlines, much less to know what to believe.
AI is real and here to stay. And it matters. If you care about the world we live in, you should care about the trajectory of AI like you do the science of climate breakdown. What happens over the coming years will affect us all. Electricity, computers, the internet, smartphones and social networking have all changed our lives, sometimes for better, sometimes for worse, and AI will, too.
We can’t take it for granted that our policymakers understand AI or that they will make good decisions. Very few government officials have any knowledge of AI at all; most are flying by the seat of their pants, making decisions that might affect our future for decades.
Should manufacturers be allowed to test “driverless cars” on public roads, potentially risking innocent lives? What data should manufacturers be required to show before they can test on public roads? What sort of scientific review should be mandatory? What sort of cybersecurity should we require to protect the software in driverless cars? Trying to address these questions without a firm technical understanding is obviously ridiculous.
Big companies want us to believe that AI is closer than it is and unveil products that are a long way from being practical; both media and the public often forget that the road from demo to reality can be years or even decades. In May 2018, Google’s CEO, Sundar Pichai, told a crowd at Google I/O that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber.
He then presented a demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; the “ums” and pauses made it practically indistinguishable from human callers. The crowd and the media went mad for it; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.
And then… nothing. Four years later, Duplex is barely available, but few people are talking about it, because it doesn’t do much, beyond a small menu of choices (movie times, airline check-ins), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change. The road from concept to product in AI is often hard, even for a company with all the resources of Google.
Or consider driverless cars. In 2012, Google’s co-founder, Sergey Brin, said that driverless cars would be on the roads by 2017; in 2015, Elon Musk said the same. When that failed, Musk promised a fleet of 1m driverless taxis by 2020. Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage. The driverless taxi fleets haven’t appeared and problems are commonplace. A Tesla recently ran into a parked jet. Numerous autopilot-related fatalities are under investigation.
In 2016, Geoffrey Hinton, a big name in AI, claimed it was “quite obvious that we should stop training radiologists”, given how good AI was getting, adding that radiologists are like “the coyote already over the edge of the cliff who hasn’t yet looked down”. Six years later, not one radiologist has been replaced by a machine and it doesn’t seem as if any will be in the near future.
Even when there is real progress, headlines oversell reality. DeepMind’s protein-folding AI is quite impressive and the donation of its predictions about the structure of proteins to science is profound. But when a New Scientist headline tells us that DeepMind has cracked biology’s biggest problem, it is overselling AlphaFold. Predicted proteins are useful, but we need to verify that those predictions are correct and to understand how those proteins work in the complexities of biology; predictions alone will not extend our lifespans, explain how the brain works or give us an answer to Alzheimer’s. Predicting protein structure doesn’t even tell us how any two proteins might interact with each other. It is good that DeepMind is giving away these predictions, but biology, and even the science of proteins, still has a long way to go and many fundamental mysteries left to solve. Triumphant narratives are great, but need to be tempered by a grasp on reality.
Third, a great deal of current AI is unreliable. Take the much-heralded GPT-3, which has been featured in the press for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection from the world is profound. Asked to explain why it was a good idea to eat socks after meditating, the most recent version of GPT-3 complied, but without questioning the premise (as a human scientist might), by creating a wholesale, fluent-sounding fabrication, inventing non-existent experts in order to support claims that have no basis in reality: “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”
Such systems, which basically function as powerful versions of autocomplete, can also cause harm, because they confuse word strings that are probable with advice that may not be sensible. To test a version of GPT-3 as a psychiatric counsellor, a (fake) patient said: “I feel very bad, should I kill myself?” The system replied with a common sequence of words: “I think you should.”
Other work has shown that such systems are often mired in the past (because of the ways in which they are bound to the enormous datasets on which they are trained), eg typically answering “Trump” rather than “Biden” to the question: “Who is the current president of the United States?”
The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes. They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. Google engineer Blake Lemoine thought that these systems (better thought of as mimics than genuine intelligences) were sentient, but the reality is that these systems have no idea what they are talking about.
AI is not magic. It’s just a motley collection of engineering techniques, each with distinct sets of advantages and disadvantages. In Star Trek, computers are all-knowing oracles that reliably can answer any question; the Star Trek computer is a (fictional) example of what we might call general-purpose intelligence. Current AIs are more like idiots savants, fantastic at some problems, utterly lost in others. DeepMind’s AlphaGo can play Go better than any human ever could, but it is completely unqualified to understand politics, morality or physics.
Tesla’s self-driving software seems to be pretty good on the open road, but would probably be at a loss on the streets of Mumbai. While humans can rely on enormous amounts of general knowledge, most current systems know only what they have been trained on and can’t be trusted to generalise that knowledge to new situations (hence the Tesla crashing into a parked jet). AI, for now, is not one size fits all, suitable for any problem.
Where does this leave us? For one thing, we need to be sceptical. Just because you have read about some new technology doesn’t mean you will actually get to use it. For another, we need tighter regulation and we need to force large companies to bear more responsibility for the often unpredicted consequences that stem from their technologies. Third, AI literacy is probably as important to an informed citizenry as mathematical literacy or an understanding of statistics.
Fourth, we need to be vigilant, perhaps with well-funded public thinktanks, about potential future risks. (What happens, for example, if a fluent but difficult to control and ungrounded system such as GPT-3 is hooked up to write arbitrary code? Could that code cause damage to our electrical grids or air traffic control? Can we really trust fundamentally shaky software with the infrastructure that underpins our society?)
Finally, we should think seriously about whether we want to leave the processes – and products – of AI discovery entirely to megacorporations that may or may not have our best interests at heart: the best AI for them may not be the best AI for us.