Earlier this month, Lina Khan, chairwoman of the US Federal Trade Commission (FTC), wrote an essay in The New York Times affirming the agency’s commitment to regulating AI. But there was one application of AI that Khan didn’t mention and that the FTC urgently needs to regulate: automated hiring systems. These vary in complexity, from tools that merely parse and rank résumés to systems that greenlight favored applicants and screen out candidates deemed unsuitable. Increasingly, American workers are forced to use them if they want to be hired.
In my latest book, The Quantified Worker, I argue that AI technologies in the workplace reduce the American worker to numbers, with automated hiring systems chief among them. These systems reduce candidates to a score or a rank, often ignoring the gestalt of their human experience. Sometimes they even sort people by race, age, and gender, characteristics that are legally prohibited from factoring into employment decisions.
Ironically, many of these systems are marketed as unbiased or as guaranteed to reduce the likelihood of discriminatory hiring. But because they are so loosely regulated, these systems have been shown to deny equal employment opportunity on the basis of protected categories such as race, age, gender, and disability. In December 2022, for example, a union of women truckers sued Meta, alleging that Facebook “selectively displays job postings based on users’ gender and age, with older workers much less likely to see ads and women much less likely to see blue-collar job ads, especially in sectors that historically exclude women”. This marketing is misleading. Worse, it is unfair to both job seekers and employers: employers purchase automated hiring systems to reduce their liability for employment discrimination, and providers of such systems are legally required to substantiate their claims of efficiency and fairness.
The law places automated hiring systems under the jurisdiction of the FTC, but the agency has yet to issue specific guidelines on how vendors of such systems may advertise their wares. It should start by demanding audits to ensure that automated hiring platforms deliver on the promises they make to employers. Vendors of these platforms should be required to produce clear records of audits demonstrating that their systems reduce bias in employment decision-making, as advertised. These audits should also show that the designers followed the guidelines of the Equal Employment Opportunity Commission (EEOC) when building the platforms.
Additionally, working with the EEOC, the FTC could establish a Fair Automated Hiring Mark to certify that an automated hiring system has passed this rigorous auditing process. As an imprimatur, the mark would be a useful signal of quality for consumers, both applicants and employers.
The FTC should also allow job applicants, who are consumers of AI-enabled online application systems, to file suit under the Fair Credit Reporting Act (FCRA). Previously, the FCRA was thought to apply only to the big three credit reporting agencies, but a careful reading shows that the law can apply whenever a report is created for any “economic decision.” By this definition, the candidate profiles created by automated online hiring platforms are “consumer reports,” which means the entities that generate them (such as online hiring platforms) would be considered consumer reporting agencies. Under the FCRA, anyone who is the subject of one of these reports can ask the agency that created it to see the results and demand corrections or changes. Most consumers don’t know they have these rights. The FTC should launch an education campaign to inform job applicants of these rights so they can exercise them.