Have you ever read something in the news that made you go, “Oh, no”?
No, I’m not talking about the presidential election.
The Wall Street Journal had an article this week about employers using artificial intelligence to determine if executives are at risk for dementia. Here’s the link, but you may need a paid subscription to access it.
Indeed, in some ways this technology seems quite impressive: the AI can apparently tell from the way a person speaks whether he or she is at risk for dementia, long before a human doctor could diagnose the disease.
While I was impressed with the advances from a technological standpoint, I was screaming inside, “What about the ADA? What about the ADA? Has anyone thought about the ADA?”
The article doesn’t mention that employers who use AI this way may be violating the Americans with Disabilities Act, but I think that is a huge risk for employers. Arguably a bigger risk than the chance that a perfectly healthy executive will develop dementia six or seven years from now.
According to the article, the AI’s judgments are about 80 percent correct. In other words, the AI’s judgments are wrong about 20 percent of the time, or one in five cases. And of course, because the AI is predicting future dementia, not diagnosing current dementia, employers may not realize that the AI was wrong until it’s too late.
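And “80 percent correct” can be even worse than it sounds. Here’s a quick back-of-the-envelope sketch in Python. To be clear, the numbers are my own assumptions, not figures from the article: I’m assuming the 80 percent accuracy applies equally to people who will and who won’t develop dementia, and that about 5 percent of the executives being screened would actually develop dementia within the prediction window.

```python
# Back-of-the-envelope sketch: what "80 percent accurate" can mean in practice.
# All numbers below are illustrative assumptions, not figures from the article.

accuracy = 0.80    # per the article: the AI is right about 80% of the time
base_rate = 0.05   # ASSUMPTION: 5% of screened executives would actually
                   # develop dementia within the prediction window

# ASSUMPTION: the 80% figure applies equally to both groups
# (i.e., sensitivity = specificity = 0.80).
true_positives = base_rate * accuracy                # will develop it, flagged
false_positives = (1 - base_rate) * (1 - accuracy)   # won't develop it, flagged

flagged = true_positives + false_positives
print(f"Share of screened people flagged: {flagged:.0%}")
print(f"Flagged people who would never develop dementia: {false_positives / flagged:.0%}")
```

Under those assumptions, more than 80 percent of the people the AI flags would never develop dementia at all. That’s the kind of “information” employers would be acting on.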
I guess I don’t have dementia yet, because I can still recall that in early 2023 I asked ChatGPT to write a blog post about Groff v. DeJoy, a religious accommodation case that was then scheduled to be heard by the U.S. Supreme Court. (The case has since been heard and decided.) ChatGPT did a great job writing my post, with just a couple of problems: it said that Groff was not a religious accommodation case under Title VII but a disability accommodation case under the Rehabilitation Act of 1973, and it said that the case had multiple plaintiffs rather than just one. Here’s what ChatGPT wrote:
“The Supreme Court recently announced that it will hear Groff v. DeJoy, a case that could have significant implications for *the rights of people with disabilities* in the workplace. The case was brought by *disability advocacy groups* who allege that the United States Postal Service (USPS) failed to accommodate people with disabilities *in violation of the Rehabilitation Act of 1973*.”
(Emphasis mine.)
At least ChatGPT got the name of the case right.
After I wrote that article, I heard stories about lawyers who wrote briefs with the “help” of AI and then were sanctioned by the courts because the AI had invented precedents. In other words, the lawyers cited nonexistent court decisions in support of their clients’ positions. As a result, many courts now have rules requiring lawyers who use AI to verify the cited cases the old-fashioned way and to certify to the court that they have done so before filing their briefs.
So would we really want to use AI to predict whether a person will develop a serious medical condition at some indeterminate time in the future, and then use that “information” to make hiring decisions?
Apparently, some employers would.
7 Ways This Might Violate the ADA
Here’s why using AI in this way can get employers into trouble under the ADA and many state disability protection laws.
No. 1: Dementia, like many other medical conditions, is a disability.
No. 2: I am confident that the U.S. Equal Employment Opportunity Commission, which enforces the ADA’s employment provisions, would take the position that this kind of AI assessment is a “medical examination.” After all, even a supervisor casually asking an employee whether she’s limping because of a bad hip can run afoul of the ADA’s restrictions on disability-related inquiries.
No. 3: The ADA prohibits employers from requiring a job applicant to undergo any type of “medical examination” before making a conditional offer of employment.
No. 4: The ADA permits employers to conduct a “medical examination” after a conditional offer of employment is made, but the information obtained cannot be used to disqualify the offeree. The only exception is if the medical examination reveals that the offeree is unable to perform the essential functions of the job, with or without a reasonable accommodation. I don’t think a four-in-five chance of developing dementia within six years is enough.
No. 5: Generally, it violates the ADA for an employer to discriminate against an applicant, offeree, or employee based on concerns that the individual “may” develop a medical condition in the future.
No. 6: Employers cannot require current employees to undergo a “medical examination” unless the examination is “job-related and consistent with business necessity.” In other words, there must be a job-related reason, such as a performance issue or behavioral concern that can reasonably be attributed to a medical condition, to require a medical examination. It is not enough to simply send an executive (or other employee) for a medical examination to determine whether the individual is at risk of developing a health condition in the future.
No. 7: Asking these questions or requiring these examinations without a legal justification violates the ADA, even if the employer never actually uses the information against the employee. And if the employer does use the information against the employee, so much the worse.
Let me end on a positive note: if an employee is showing signs of dementia (or any other medical condition that may be affecting their job performance or behavior), the ADA allows the employer to send the employee for a medical examination and to consider whether the employee can perform the essential functions of the job, whether a reasonable accommodation is needed or possible, and what type of accommodation might be appropriate.
In this context, a medical examination would likely be “job-related and consistent with business necessity,” and using AI to assist with the diagnosis (or to recommend a reasonable accommodation) should not raise ADA issues.
*Whew!* Thanks, everybody. I feel better now.