
Artificial intelligence is increasingly being used to make workplace decisions, but human intelligence remains vital

Companies are increasingly turning to artificial intelligence tools and analytics to cut costs, enhance efficiency, raise performance, and minimize bias in hiring and other job-related decisions. The results have been promising, but concerns over fairness and objectivity persist.

Large employers are already using some form of artificial intelligence in employment decision-making. A February 2022 survey from the Society for Human Resource Management found that 79% of employers use A.I. and/or automation for recruitment and hiring.

The move by employers to harness A.I. and related data analytics in an effort to reduce unconscious bias in employment decision-making is no surprise. In the past few years, companies have increasingly prioritized diversity, equity, and inclusion initiatives. After the killing of George Floyd and subsequent protests around the country, businesses pledged $200 billion to increase efforts toward racial justice. Surveys show businesses are committed to increasing DEI budgets, staffing, and metrics, and investing more in employee resource and affinity groups. Pay equity audits are on the rise, along with a host of new laws in New York, California, and elsewhere mandating transparency on employee compensation.

A.I. has proven helpful in a variety of areas related to hiring more diversely, including anonymizing resumes and interviews, conducting structured interviews, and using neuroscience-based games to identify traits, skills, and behaviors. Some companies conduct video interviews of applicants and use A.I. to analyze factors within them, including facial expressions, eye contact, and word choice. Used this way, A.I. can help avoid decisions that treat similarly situated applicants and employees differently based on entrenched or unconscious bias, or on the whims of individual decision-makers.
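To make the anonymization idea concrete, here is a minimal sketch of what resume redaction might look like. The patterns below are hypothetical stand-ins; a real system would rely on a trained entity recognizer rather than a handful of regular expressions.

```python
import re

# Hypothetical redaction patterns; a production system would use a trained
# named-entity recognizer rather than regular expressions.
REDACTIONS = [
    (re.compile(r"^Name:.*$", re.MULTILINE), "Name: [REDACTED]"),
    (re.compile(r"\b(he|she|his|hers|him|her)\b", re.IGNORECASE), "[PRONOUN]"),
    (re.compile(r"\b(women's|men's)\b", re.IGNORECASE), "[REDACTED]"),
]

def anonymize(resume_text: str) -> str:
    """Mask identity-linked tokens before a reviewer sees the resume."""
    for pattern, replacement in REDACTIONS:
        resume_text = pattern.sub(replacement, resume_text)
    return resume_text

print(anonymize("Name: Jane Doe\nShe captained the women's chess team."))
```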

Consider a study conducted at Yale which showed that when assessing candidates for police chief, human evaluators justified choosing men without college degrees over women with college degrees because “street smarts” was the most important criterion. However, when the names on the applications were reversed, evaluators chose men with college degrees over women without college degrees, claiming that the degree was the more important criterion. If the criteria had been set in advance, unconscious biases against women could have been mitigated, because evaluators would not have been able to justify their decisions in retrospect. Unlike humans, A.I. tools won’t deviate from pre-selected criteria to rationalize a biased decision.
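The fix the Yale study points to, committing to criteria before reviewing anyone, is simple to mechanize. In the hypothetical rubric sketched below, the weights are fixed up front and applied identically to every candidate, so they cannot be reshuffled after the fact:

```python
# Pre-committed rubric: hypothetical weights, fixed before any candidate
# is reviewed, applied identically to everyone.
CRITERIA_WEIGHTS = {"degree": 0.4, "street_smarts": 0.3, "experience": 0.3}

def score(candidate: dict) -> float:
    """Score a candidate against the fixed rubric (each trait in [0, 1])."""
    return sum(weight * candidate.get(trait, 0.0)
               for trait, weight in CRITERIA_WEIGHTS.items())

candidate_a = {"degree": 1.0, "street_smarts": 0.6, "experience": 0.7}
candidate_b = {"degree": 0.0, "street_smarts": 0.9, "experience": 0.8}
print(score(candidate_a), score(candidate_b))  # same criteria, every time
```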

How does A.I. do it? In many instances, A.I. can reduce humans’ subjective interpretation of data because machine-learning algorithms are trained to consider only the variables that improve predictive accuracy, McKinsey found. An algorithm can take in various characteristics from a resume, including a candidate’s name, prior experience, education, and hobbies, and be trained to weigh only those traits that predict a desired outcome, such as whether a candidate will perform well once on the job. The results are impressive. In a forthcoming paper, Bo Cowgill of Columbia Business School will report the results of his study of a job-screening algorithm used in hiring software engineers. He found that a candidate picked by the machine (and not by a human) is 14% more likely to pass an interview and receive a job offer, and 18% more likely to accept a job offer when one is extended.
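As an illustration of that feature-selection idea (not Cowgill’s actual method), the sketch below uses synthetic data and hypothetical features: a screening model keeps only the variables statistically associated with on-the-job success and discards the rest.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Synthetic candidate matrix; columns stand in for hypothetical features
# such as years of experience, degree level, skills-test score, and a
# hobby flag. Only two of them actually drive the outcome below.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# SelectKBest keeps the k features most associated with the outcome and
# discards the rest, mirroring "consider only variables that improve
# predictive accuracy."
model = Pipeline([
    ("select", SelectKBest(f_classif, k=2)),
    ("classify", LogisticRegression()),
])
model.fit(X, y)
print("features kept:", model.named_steps["select"].get_support(indices=True))
```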

Algorithms are not only used to reduce bias in hiring. They are also useful for monitoring employee productivity and performance, and for making decisions about promotions and salary increases. For example, parcel delivery companies use A.I. to monitor and report on driver safety and productivity by tracking drivers’ movements and when they put their trucks in reverse. Other companies may use A.I. to track employee login times and to monitor, via webcams and eye-tracking software, whether employees are paying attention to their screens.

A.I. has even been helpful in choosing candidates for corporate boards. A study at the Fisher College of Business comparing machine-selected directors with human-selected boards found that human-chosen directors were more likely to be male, to have larger networks, and to hold many past and current directorships. By contrast, the algorithm favored directors who were not friends of management, had smaller networks, and had backgrounds different from management’s; these directors proved more likely to be effective, including by monitoring management more rigorously and offering potentially more useful opinions about policy.

A.I. is not without its flaws. In 2018, Amazon abandoned an A.I. hiring tool when it determined the tool had actually perpetuated bias, largely as a result of the sample hiring and resume data the company fed the algorithm, which skewed heavily male. Most resumes in the training data belonged to men, reflecting the disproportionate number of men in the tech sector, so the A.I. taught itself that male candidates were preferable. The tool then scored lower the resumes of people who had attended “women’s” colleges or played on a “women’s” chess team. Of course, the problem was not in the A.I. itself, but in the data inputs from the company.
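That failure mode is easy to reproduce in miniature. In the synthetic example below, a hypothetical “women’s college” flag has no bearing on ability, yet because historical hiring outcomes penalized it, a model trained on those outcomes learns to penalize it too:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
skill = rng.normal(size=n)               # the true driver of performance
womens_college = (rng.random(n) < 0.15)  # hypothetical binary flag

# Historical "hired" labels skew male: in this synthetic past, the flag
# lowered hiring odds regardless of skill.
hired = (skill - 1.5 * womens_college + rng.normal(scale=0.5, size=n) > 0)

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)
print("learned weights (skill, women's-college flag):", model.coef_[0])
# The second weight comes out negative: trained on biased outcomes, the
# model reproduces the bias.
```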

Recognizing the blind spots associated with A.I., some companies have collaborated to develop policies that mitigate its potential discriminatory effects. The Data & Trust Alliance, a corporate consortium, has developed “Algorithmic Bias Safeguards for Workforce” with the goal of detecting, mitigating, and monitoring algorithmic bias in workforce decisions.

Two states, Maryland and Illinois, have enacted statutes regulating the use of A.I. in employment. Illinois law requires employers to notify applicants when A.I. will be used and to obtain the applicant’s consent. Proposed legislation in a third state, California, takes a page from the European Union’s General Data Protection Regulation (GDPR) by imposing liability on the vendors of A.I. tools.

Federal policymakers and regulators also have an important role to play in ensuring that A.I. is used to level the playing field in the hiring and retention of qualified workers. Strong metrics and oversight will be needed to check even the smartest algorithms.

Historically, all technologies go through an adaptive phase where we get to know them, recognize their utility, and create methods to guard against their unintended, yet inevitable, deleterious effects. In the end, it is unlikely that there is going to be a one-size-fits-all approach to using A.I. effectively and responsibly. We will learn as we go, turning over many human tasks to machines even as we call upon our humanity to monitor them. Without question, our employment decisions will benefit from the right mix of A.I. with human intelligence.

Gary D. Friedman is a New York-based partner in the employment litigation group at Weil, Gotshal & Manges LLP. A first-chair trial lawyer, he represents employers in a broad range of workplace disputes. This article is drawn from testimony Mr. Friedman gave to the Equal Employment Opportunity Commission on January 31, 2022.

