What a labor lawyer wants HR leaders to know about the risk of AI

Imagine an employee pasting customer information into ChatGPT to get a quick summary, with no malicious intent. It's a worker treating an AI tool like any other piece of productivity software, with no insight into what happens to the data on the other end.

It's exactly the kind of exposure that employment lawyer Tara Humma warns employers about. Humma, who advises multinational employers at Rimon Law, says the danger often comes from well-intentioned employees who haven't been told where the lines are.

AI compliance: ‘The law says what it says’

But good intentions are no defense for the organization. "The law says what it says," Humma says. "You can't discriminate; you have to protect privacy. It doesn't matter what tool you use to break the law."

Whether an employee posts patient information on social media (which sounds crazy) or pastes it into an open AI tool (which sounds less crazy, but is just as dangerous), the privacy breach is the same. The medium doesn't change the violation.

Consider one of the most protected categories of information: health data. Even though this is familiar territory for HR teams, privacy gaps persist. Since the HIPAA Privacy Rule took effect, federal regulators have received more than 366,000 complaints and imposed nearly $144 million in penalties across 147 cases, often for failures to protect patient information. In 2024 alone, covered entities reported 725 large health data breaches, and the Office for Civil Rights closed 22 investigations with financial penalties, according to the US Department of Health and Human Services.

Governance is a compliance move

Since then, the proliferation of consumer-grade AI tools has only widened that exposure. It shifts AI governance from a technology conversation to a compliance one, and raises the stakes for having a strategy that actually does something.

Another case that lands squarely with HR teams: the EEOC recently alleged that an employer's screening software was programmed to automatically reject female applicants over age 55 and male applicants over age 60. More than 200 qualified applicants were allegedly rejected because of their age, and the consent decree included a $365,000 fund for affected applicants and years of required training.

Tara Humma, Rimon Law

Humma says many early AI policies fall short in the same way: they are too vague to be useful. Telling employees not to share "confidential information" with AI tools isn't enough if employees don't understand what that means for their day-to-day work.

An effective policy, she says, needs to name the specific types of information that must never go into open tools and explain why, so employees understand the stakes.


Rules, discrimination and other hot spots

Regulated industries face a steeper climb. In healthcare, legal and financial services, privacy requirements are already strict, and AI is creating new ways to breach them without anyone noticing. Attorneys advise that employers in those sectors act sooner than most.

Regulation may be the wake-up call for employers who assume they have this figured out. States, including Illinois, have passed laws specifically regulating AI in employment decisions, and experts expect more to follow.

Illinois has already amended its Human Rights Act to make algorithmic discrimination a potential civil rights violation, prohibiting employers from using AI tools that have a discriminatory effect on protected classes in hiring and other employment decisions, and requiring notice to employees when AI is used in those decisions.

EEOC guidance holds employers liable if an AI tool produces biased outcomes in hiring or performance decisions, even if the employer did not create the tool. And some of these rules reach further than expected, covering tools employers have used for years without ever calling them AI, such as applicant tracking software and personality tests.

“Try to prioritize it as much as you can,” Humma says, “so your company doesn’t become the next headline.”


