AI Bias Law: What You Need To Know

The AI Bias Law is the first of its kind and has the potential to set a foundation, however rudimentary, for what's to come. Some experts question whether it will, while others believe it doesn't carry enough weight.

However, there is one indisputable fact: as of right now, there aren't any formally enforced regulations or laws governing how companies and employers use artificial intelligence in HR.

Without clearly defined guidelines to fall back on, the possibility of AI bias in HR technology increases. The new law may not be perfect, but it is the first to push for transparency.

Delve into what the AI Bias Law is and what it means for the future of AI in Human Resources.

What Is New York’s AI Bias Law?

The legislation was passed in November of 2021 and goes into effect on January 1, 2023. Its purpose is to lessen or eradicate the impact of systematized discrimination, such as discrimination based on sex, ethnicity, or race, in hiring software, platforms, and tools.

The regulation will also hold employers, HR tech companies, and employment agencies accountable if any bias is detected within the technology they are using or have created.

The law only applies to AI-powered tools used to screen candidates or promote current employees. This can include software or platforms for:

  • Candidate sourcing
  • Resume reviews
  • Applicant ranking
  • Recruiting solutions
  • Employee performance tracking

Only employers and companies within New York City's limits have to adhere to the guidelines; the law is not statewide or countywide as of yet. Penalties are in place for those who fail to comply with the following rules.

Photo: Angela Hood, ThisWay® Global Founder & CEO, and EEOC Commissioner Keith Sonderling. In fall 2022, the commissioner provided Angela with great support and guidance. Thank you for meeting with us, Commissioner Sonderling!

Compliance Regulation Guidelines

To maintain compliance, employers are mandated to notify candidates and employees if artificial intelligence is being utilized to determine an employment decision or promotion. The employer must also inform the candidate or employee of the characteristics or qualifications being evaluated.

At that time, the candidate or employee may request that a person process their application or promotion instead. The employer must oblige if that occurs.

Failing to inform candidates or employees can result in penalties and costly fines. In fact, any failure to comply with the law carries those same fines:

  • $500 for the first violation
  • $500 to $1,500 for each subsequent violation

The guideline sparking the most debate is the bias audit. Any automated employment decision tool must undergo a bias audit no more than one year before it is put into use.

What Does The Bias Audit Consist Of?

This is not a federally standardized bias audit. New York City developed its own standard, one that will most likely evolve over time and through growing pains, but this is what the city has created to get started.

The bias audit must be an impartial evaluation of the AI-powered HR technology being used. The evaluation’s objective is to screen for any bias along the lines of race, ethnicity, or gender.

It can only be conducted by a third party or independent auditor. It cannot be performed by an in-house expert or by the creator of the platform or software.

The audit results must be publicly displayed on the employer's website, along with the distribution date of the tool being used.
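The legislation doesn't spell out exactly which calculations an auditor must run, but audits of this kind typically compare selection rates across demographic groups. The short Python sketch below is purely illustrative: the hypothetical data, field names, and the 0.8 threshold (borrowed from the EEOC's "four-fifths" rule of thumb) are assumptions for this example, not requirements written into the law.

```python
# Illustrative only: one way an auditor might compare selection rates
# across demographic groups. The data, field names, and 0.8 threshold
# are assumptions for this sketch, not requirements of NYC's law.
from collections import defaultdict

def impact_ratios(records, group_key="gender", selected_key="selected"):
    """Return each group's selection rate divided by the highest group's rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        if record[selected_key]:
            selected[record[group_key]] += 1

    rates = {group: selected[group] / totals[group] for group in totals}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Tiny hypothetical screening log
candidates = [
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": True},
]

for group, ratio in impact_ratios(candidates).items():
    flag = "review" if ratio < 0.8 else "ok"  # 0.8 echoes the EEOC four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In practice, an independent auditor would run checks along these lines across race, ethnicity, and gender categories and publish a summary of the results.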

What's All The Fuss About?

Concerns:

Some say New York missed the mark with this legislation and believe there are too many areas of inadequacy. Whether you're for it or not, we can't deny that there is a need for unfiltered transparency in the AI tools we use.

Why is there a need for AI Bias regulation?

There's been a long-standing worry that the AI in HR technology used for candidate sourcing and recruiting solutions could bake in the very discrimination we aim to dismantle, especially since most AI technology is built on previous or existing patterns and data.

This is where it gets scary: we could unknowingly have certain biases built into the very tools we use to find qualified candidates. Amazon found that out the hard way in 2018.

The company discovered that its recruitment engine wasn't viewing candidates for highly skilled technical positions in a gender-neutral way, mainly because the engine was trained on resumes collected over a 10-year period, most of which came from male applicants.

Of course, Amazon has since rectified the issue, but how many qualified candidates may have been turned away just because of their gender?

A prime reason to begin AI regulation on the HR frontier.

Uncertainties…

Even though we can justify the need for the AI Bias Law, it does create a few uncertainties. For starters, the overall law is vague.

It covers any "automated employment decision tool" that uses statistical modeling, data analytics, artificial intelligence, machine learning, or computational processes. This means the general umbrella of automated employment tools, new or old, could come under scrutiny, not only AI.

We're also unsure of how frequently the bias audit will need to be repeated. It's reasonable to assume it could be an annual task, but the legislation as written falls short on clarification.

What the audit will cover also leaves room for confusion. We're uncertain whether it will screen both the hiring process and the automated employment tool, or just the tool itself.

Are AI Bias Laws In Hiring Enough To Prevent All Forms Of Discrimination?

Most experts, legislators, and lawyers say the law isn’t stringent enough to hold companies and employers accountable. In some cases, they may be right.

For instance, the law only covers discrimination based on race, gender, and ethnicity. Factors based on age, disability, religion, and sexual orientation are left out.

To add fuel to the fire, it's only applicable to the hiring and promotion processes, allowing discrimination to potentially seep into other areas:

  • Compensation
  • Scheduling
  • Work conditions, etc.

As mentioned earlier, it only applies to NYC residents, not to all employees working for New York City-based companies. If you live outside of the city, none of this applies to you. (See the discrepancy here?)

A Step In The Right Direction

The AI Bias Law may not be everything we need it to be right now, but it's a starting point. From here, the law and its audit requirements will likely be adjusted to meet the demands of HR and employees.

One of the major demands is transparency, and the call is loud enough for lawmakers beyond New York to hear. Washington, D.C. Attorney General Karl Racine has conveyed the District's interest in addressing "algorithmic discrimination."

His proposed legislation goes as far as mandating that companies and employers submit their AI employment decision solutions to annual bias audits.

Regardless of how things unfold, AI isn't leaving HR technology by any means, and knowing that heightens the necessity for accountability and transparency.
