Nathan Mondragon is chief psychologist at HireVue, which markets software for screening job candidates. He says finding the right employee is all about looking at the little things. Lots of little things.
HireVue's flagship product, used by Unilever and Goldman Sachs, asks candidates to answer interview questions in front of a camera. Meanwhile its software, like a team of hawk-eyed psychologists hiding behind a mirror, takes note of barely perceptible changes in posture, facial expression and vocal tone.
“We break the answers people give down into many thousands of data points, into verbal and non-verbal cues,” says Mondragon. “If you’re answering a question about how you would spend a million dollars, your eyes would tend to shift upward, your verbal cues would go silent. Your head would tilt slightly upward with your eyes.”
The program turns this data into a score, which is then compared against one the program has already ‘learned’ from top-performing employees. The idea is that a good prospective employee looks a lot like a good current employee, just not in any way a human interviewer would notice.
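The scoring idea Mondragon describes can be sketched, in heavily simplified form, as a similarity measure between a candidate's feature vector and an averaged profile 'learned' from top performers. Everything below is invented for illustration; the feature names, numbers and similarity metric are assumptions, and HireVue's actual model is proprietary.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented feature vectors: [vocal_steadiness, eye_contact, positive_language]
top_performers = [
    [0.8, 0.9, 0.7],
    [0.7, 0.8, 0.9],
    [0.9, 0.7, 0.8],
]

# The 'learned' profile: mean of the top performers' features
profile = [sum(col) / len(col) for col in zip(*top_performers)]

# A candidate is scored by how closely their vector matches the profile
candidate = [0.75, 0.85, 0.8]
score = cosine_similarity(candidate, profile)
print(round(score, 3))  # close to 1.0 means "looks like a good current employee"
```

A real system would extract thousands of such features from video and audio, but the principle is the same: the closer a candidate's profile sits to the learned one, the higher the score.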
Approaches like vocal analysis and reading ‘microexpressions’ have been applied in policing and intelligence with little success. But Mondragon says automated analyses compare favourably with established tests of personality and ability, and that customers report better employee performance and less turnover.
HireVue is just one of a slew of new companies selling AI as a replacement for the costly human side of hiring. It estimates the ‘pre-hire assessment’ market is worth $3bn (£2.2bn) a year.
A study in the UK last year found an average of 24 applicants per low-wage job. Tesco, the country’s largest private employer, received well over three million job applications in 2016. With applications rising, employers have automated as much of the process as possible. This started more than a decade ago with simple programs that scanned text CVs for key words. It has now expanded to include quizzes, psychometric tests, games and chatbots that can reject applicants before a human ever glances at their CV.
Jobseekers are forced to prepare for whatever format a prospective employer has chosen, a familiar power shift in the gig economy era. And without human interaction or feedback, an already difficult process has become deeply alienating.
Beyond the dehumanising experience lurk the usual concerns that attend automation and AI, which uses data often shaped by inequality. If you suspect you’ve been discriminated against by an algorithm, what recourse do you have? How vulnerable are those formulae to bias? And is it inevitable that non-traditional or poorer candidates, or those who struggle with new technology, will be excluded from the process?
“It’s a bit dehumanising, never being able to get through to an employer,” says Robert, a plumber in his 40s, who uses job boards and recruiters to find temporary work. Harry, 24, has been searching for a job for four months. In retail “just about every job opening” requires a test or game. He completes four or five a week. The rejections are often instant, piling up without a word of feedback, and each time he starts again from zero.
“You never know what you’ve done wrong. It leaves you feeling a bit trapped,” Harry says.
The problem is worst for older jobseekers. Many rely on support from councils or voluntary services to help them fill out applications. “It’s a big barrier. Why is an older guy who was a bricklayer suddenly expected to have IT skills?” asks Lynda Pennington, who organises a jobs club in Croydon.
After 86 unsuccessful job applications in two years – including several HireVue screenings – Deborah Caldeira is thoroughly disillusioned with automated systems. Without a person across the table, there’s “no real conversation or exchange”, and it’s difficult to know “exactly what the robot is looking for”, says Caldeira, who has a master’s degree from the London School of Economics.
Despite her qualifications, she found herself questioning every movement as she sat at home alone performing for a computer. “It makes us feel that we’re not worthwhile, as the company couldn’t even assign a person for a few minutes. The whole thing is becoming less human,” she says.
A fightback against automation has emerged, as applicants search for ways to game the system. On web forums, students trade answers to employers’ tests and create fake applications to gauge their processes. One HR employee for a major technology company recommends slipping the words “Oxford” or “Cambridge” into a CV in invisible white text, to pass the automated screening.
Measures are under way to tip the balance of power back to those seeking work, according to Christina Colclough, director of digitalisation and trade at UNI Global Union, a federation of trade unions. They include a charter of digital rights for workers touching on automated and AI-based decisions, to be included in bargaining agreements.
The EU's general data protection regulation, an imminent overhaul of the bloc's data protection rules, will require a company to disclose whenever a decision that “significantly affects an individual” has been automated. But even minimal human involvement – approving a list of automatically ranked CVs, for example – could exempt companies from such regulations, warns Sandra Wachter, a lawyer and research fellow in data ethics at the Oxford Internet Institute. She also notes that a much-discussed “right to explanation” of automated decisions will not be legally binding.