As businesses find ways to shortcut traditional processes with artificial intelligence, lawmakers and regulators in the region are as eager as ever to ensure there's no corner-cutting that runs afoul of longtime rules.
New Jersey has joined several states weighing laws that would require employers using AI-enabled employment tools in hiring or promotion to undergo audits checking for potential bias in their selection processes. New York just implemented a similar change this month.
Scott Ohnegian, head of Riker Danzig's Labor and Employment Practice, said there are still more questions than answers about the new rules. He has yet to see a lawsuit arise from them.
"As management-side employment lawyers, we try to be proactive to avoid liability," he said. "It can be difficult to predict where that'd come from until the first behemoth gets sued."
Regardless, businesses should be paying attention. The audits, along with these laws' requirements to post audit results and notify applicants when they're subjected to AI screening, may apply to companies that don't realize they're covered.
“There are likely many employers unknowingly using AI,” Ohnegian said. “As soon as you involve a search firm, unless you have some written agreement prohibiting it, how they screen potential employees probably involves this technology.”
The familiar part of the proposed New Jersey law is the analysis of employment processes for how they might — inadvertently or not — disadvantage members of particular groups.
Mark Kluger, an employment lawyer at Kluger Healey and the firm’s co-founding partner, said these new rules are just taking that staple of employment law and applying it to machine learning tools.
“The big picture here is a concern with the impact that the use of AI in hiring will have on protected classes, primarily on gender, age, race and national origin,” he said. “(The question is) whether the AI tool itself — even if it is programmed in a manner not intended to screen out those groups — will have a disparate impact on those groups unintentionally by virtue of how the algorithm is designed.”
It's worth taking any employment law seriously. The technology might be new, Kluger said, but the state's Division on Civil Rights is no neophyte at identifying disparate impacts in hiring or firing. The proposed change simply extends rules whose violation already carries heavy penalties for businesses.
There are still some quirks in the potential new rules that leave employment lawyers scratching their heads. For instance, New York's law stipulates that employers have to give prospective employees the option to opt out of being screened by AI, but it appears there's no obligation for employers to offer them an alternative.
As for how many businesses will be affected by the changes, Kluger isn't sure, but he suspects all large employers are using AI in hiring. It's certainly something employment lawyers are starting to field more questions about.
Kluger’s advice to clients is the same as it has always been.
“With any hiring process, whether handled in the old-fashioned way or by AI, I invite clients to check themselves to see whether they have hiring patterns,” he said. “We all have inherent unconscious bias and, when employers are engaged in hiring, with machine learning involved or not, that can end up having a disparate impact on protected classes. Employers hiring people who are like them is one of the common biases in hiring. You have to be self-conscious about those patterns.”
Ohnegian reiterated that relying on third-party recruiting vendors — even ones that say their tools comply with the law — wouldn't insulate a business from potential lawsuits.
His advice? Know what you’re getting into.
“You know that checkbox you click, ‘Accept terms and conditions’ on when signing up for a newsletter or something else online?” he said. “Well, don’t accept those terms without knowing what it is you’re signing up for.”