AI creep continues

Unilever says it is saving hundreds of thousands of pounds a year by replacing human recruiters with an artificial intelligence system. It has saved 100,000 hours of human recruitment time by deploying software to analyse video interviews.

The system scans graduate candidates’ facial expressions, body language and word choice and checks them against traits that are considered to be predictive of job success. Vodafone, Singapore Airlines and Intel are among other companies to have used similar systems.

Polling commissioned by the Royal Society of Arts and released on Friday suggests 60% of the public are opposed to the use of automated decision-making in recruitment as well as in criminal justice.

A citizens’ jury convened by the charity to explore AI concluded that the growing practice needed independent regulation and warned of public anger at “tech creep” unless citizens were given a greater role in designing systems.

A parallel YouGov poll found that only 32% of people are aware AI is being used for decision-making in general. Awareness of automated decision-making in workplaces and the criminal justice system is even lower, at 14% and 9% respectively.

Last week the United Nations special rapporteur, Philip Alston, said the world risked “stumbling zombie-like into a digital welfare dystopia” in which artificial intelligence and other technologies were used to target, surveil and punish the poorest people.

The Guardian reported on how the UK’s Department for Work and Pensions was accelerating the development of welfare robots for use in its flagship universal credit system, and how more than 100 councils were using predictive analytics and other artificial intelligence systems to aid interactions with their citizens.

“New technologies are being adopted at a rapid pace, and regulators and the public are struggling to keep up,” said Asheem Singh, acting head of tech and society at the RSA.

“An increasing amount of decision-making – in our public services, the job market and healthcare – is taking place via ever-more opaque processes. This is a source of anxiety for the general public. The measures we are proposing – such as a new watchdog to scrutinise decisions made by AI on behalf of the public – are crucial first steps in increasing clarity and accountability.”

Last month in a report commissioned by the government’s Centre for Data Ethics and Innovation, the Royal United Services Institute, a security thinktank, warned of “unfair discrimination” by data analytics and algorithms in policing. Meanwhile, the high court ruled that South Wales police’s use of facial recognition software was legal, despite a claim it breached data protection and equality laws.

The RSA panel spent four days examining the spread of AI and automated decision-making into recruitment, healthcare and policing. Members of the panel voiced hopes that algorithms could make fairer, less-biased decisions on things such as pay rises or promotions, and that facial recognition programmes might be more objective than human police officers.

But they raised questions about whether automated decision systems would reinforce an organisation’s existing profile, for example as traditionally white and male, and how the public would know the technology was being used.

The jury was convened by the RSA in collaboration with DeepMind, a London-based AI firm owned by Google’s parent company, Alphabet.

Unilever is using software from a US company, HireVue, in the UK and abroad, having first trialled it in 2017. HireVue has previously said the software scans the language that candidates use – for example, active or passive phrases, tone of voice and speed of delivery – as well as facial expressions such as furrowed brows, smiles and eye-widening.

“It is helping to save 100,000 hours of interviewing time and roughly $1m in recruitment costs each year for us globally,” said a Unilever spokeswoman. “It is, however, just one of many tools we use for our graduate recruitment.”

She said video interviews were optional and candidates were asked to allow or disallow automated decision-making being used to evaluate their video interview. They were sent information about how to prepare beforehand and could choose to speak to a “talent adviser” instead if they preferred.

The system is now used across Unilever’s entire graduate recruitment programme, and HireVue claims it has resulted in a more ethnically and gender-diverse workforce.

Unilever said that at the early stage of the recruitment process at which HireVue was used, candidates were not required to give their gender or ethnicity, so it was unable to provide representative data.
