The New Legal Risks Of Using AI In HR

You know the high cost of making bad hiring decisions. In addition to the time lost and the negative impact on the organization, there are hard costs as well. The U.S. Department of Labor has put a price tag on it: it estimates that the average bad hire costs about 30% of the person's first-year earnings. It's no wonder HR professionals, recruiters, and talent acquisition managers are looking for better hiring solutions.

Many companies have turned to artificial intelligence in human resources to improve recruiting.  While AI in HR can help surface and screen candidates, it can also carry legal risks.

One of the main promises of HR AI is eliminating bias in employment decisions by taking human judgment out of the equation. However, it's not that simple: human bias can simply be replaced with "algorithm bias."

AI in HR Algorithm Bias

Algorithms can be used to target job ads at particular candidates or to seek out passive candidates based on specific criteria. AI in HR can also use predictive analysis to score resumes and assess qualifications based on experience.

In practice, algorithms learn to look for key indicators based on the data sets they are fed. That can be problematic. If a company has a historical lack of diversity, for example, the AI might be "trained" over time to score candidates from certain racial or ethnic groups differently even when their experience is identical.
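
To make that concrete, here is a minimal, purely hypothetical sketch in Python (not eSkill's product or any real vendor's system) of how a resume-scoring model trained on skewed historical hiring decisions can learn to score two identically qualified candidates differently. The data, features, and model choice are all invented for illustration.

```python
# Hypothetical illustration of algorithm bias: a scoring model trained on biased
# historical hiring data reproduces that bias for identically qualified candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: both groups have the same experience distribution...
experience = rng.integers(0, 11, size=n)   # years of experience, 0-10
group = rng.integers(0, 2, size=n)         # 0 or 1; stands in for any protected class

# ...but past hiring decisions favored group 0 at every experience level.
hire_rate = np.clip(0.10 + 0.06 * experience - 0.25 * group, 0.0, 1.0)
hired = rng.random(n) < hire_rate

# Train a scoring model on the historical outcomes, with the group attribute as a feature.
X = np.column_stack([experience, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical experience, differing only by group.
candidates = np.array([[5, 0], [5, 1]])
scores = model.predict_proba(candidates)[:, 1]
print(f"Candidate from group 0: score {scores[0]:.2f}")
print(f"Candidate from group 1: score {scores[1]:.2f}")
# The model "learns" the historical disparity, so identical qualifications get different scores.
```

In real systems the group attribute is rarely an explicit feature, but proxies such as zip code, school name, or employment gaps can carry much of the same signal, which is why both the training data and the resulting scores need to be audited.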

Upturn is a nonprofit organization with a mission to promote "equity and justice in the design, governance, and use of digital technology."  It has extensively studied the predictive algorithms used in hiring for signs of bias, and the results are troubling.

“Unfortunately, we found that most hiring algorithms will drift toward bias by default,” Senior Policy Analyst Miranda Bogen wrote in the Harvard Business Review.

Bias, whether real or merely perceived, can be the basis for lawsuits.

Hiring & Managing Employees

When AI is used to place job ads, the platforms serving those ads decide not who is most qualified, but who is most likely to click, because for the platform, more clicks mean more revenue. Unfortunately, that can also perpetuate stereotypes. A study by USC and Northeastern University, for example, analyzed job ads on Facebook and found that 85% of the ads for supermarket cashiers were shown to women, while 75% of the ads for taxi jobs were shown to Black users.
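
As a purely hypothetical illustration, not a description of how Facebook or any real ad platform works, click-optimized delivery amounts to picking whichever audience segment has the highest predicted click-through rate, with no regard for who is qualified:

```python
# Hypothetical sketch of click-optimized ad delivery: the "best" audience is simply
# the segment predicted most likely to click, not the one most qualified for the job.
predicted_ctr = {
    "women, 25-34": 0.042,
    "men, 25-34": 0.018,
    "women, 55+": 0.011,
    "men, 55+": 0.009,
}

# Revenue-maximizing choice: deliver the cashier ad mostly to the highest-CTR segment.
target_segment = max(predicted_ctr, key=predicted_ctr.get)
print(f"Ad delivered primarily to: {target_segment}")

# If the historical click data reflects stereotypes, delivery skews the same way,
# and qualified people in the other segments may never see the ad.
```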

Companies using AI without monitoring it are putting themselves at risk. The U.S. Equal Employment Opportunity Commission (EEOC) recently ruled against seven companies that used AI to place ads on Facebook, saying the practice illegally discriminated against women and older workers.

AI is also being used for screening and assessment of candidates as well as for monitoring and managing existing team members.  Algorithms may score performance reviews, attendance records, and other factors to recommend promotions or even terminations.  Predictive analytics based on top performers may identify candidates with growth potential and even recommend training protocols.

It sounds great, but it can also lead to legal disputes over the validity of the analysis or a lack of institutional controls.
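
For illustration only, here is a hypothetical sketch of the kind of unvalidated composite scoring that such a dispute could target; the inputs and weights are invented, not drawn from any real product:

```python
# Hypothetical employee-ranking sketch: a composite score over performance, attendance,
# and tenure. Nothing validates that the weights actually predict job success, which is
# exactly the kind of gap a legal challenge could target.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    performance_review: float   # 1.0 - 5.0
    attendance_rate: float      # 0.0 - 1.0
    tenure_years: float

WEIGHTS = {"performance_review": 0.5, "attendance_rate": 0.3, "tenure_years": 0.2}

def promotion_score(e: Employee) -> float:
    # Unvalidated weights: attendance can proxy for disability or caregiving status,
    # and tenure can proxy for age, so a "neutral" formula may have disparate impact.
    return (WEIGHTS["performance_review"] * (e.performance_review / 5.0)
            + WEIGHTS["attendance_rate"] * e.attendance_rate
            + WEIGHTS["tenure_years"] * min(e.tenure_years / 10.0, 1.0))

staff = [
    Employee("A", performance_review=4.6, attendance_rate=0.88, tenure_years=3),
    Employee("B", performance_review=4.2, attendance_rate=0.99, tenure_years=9),
]
for e in sorted(staff, key=promotion_score, reverse=True):
    print(f"{e.name}: {promotion_score(e):.2f}")
```

Unless weights like these are validated against actual job performance and checked for disparate impact, a ranking built on them is hard to defend.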

Emerging HR AI Legislation

Organizations using AI technology in HR practices may face employment practice claims. In Illinois, for example, the Artificial Intelligence Video Interview Act goes into effect in January 2020. It limits the use of AI in video interviews with candidates and also regulates notice and privacy.

Other states are considering similar legislation that could restrict the use of AI in HR. There has already been a significant rise in litigation over the collection, storage, and use of biometric data.

Using Third Parties

Most companies integrating AI into their hiring and employee management outsource the work or use third-party software. That does not remove employers' legal obligation to monitor these activities and make sure the vendors comply with the law.

Weighing The Legal Risks of Artificial Intelligence in Human Resources

HR AI holds the promise of real advances in hiring and training employees. Right now, though, it also carries significant legal risks if it is not implemented, managed, and monitored properly.

Any employer implementing artificial intelligence in human resources needs to weigh the benefits against the potential legal risks and keep a close eye on emerging laws to stay in compliance.

Workforce Skills Tests and Assessments

When you’re evaluating how to improve your HR, hiring, and training practices, you need to work with an organization you can trust. Whether you need subject-based assessments, job-based assessments, or skills testing, eSkill has the expertise to improve your process.

eSkill’s assessments have been proven to be both reliable and valid, mitigating the risk of litigation. eSkill offers a nearly 20-year Equal Employment Opportunity Commission (EEOC) compliance record, providing your company with a defensible, trusted pre-employment testing platform.

eSkill lets you personalize tests from the largest assessment library in the HR industry. Learn more about how eSkill can help you increase compliance and reduce risk by getting a free demo.
