Avoiding Potential Legal Pitfalls Associated With AI Use

July 10, 2023 | By Kevin Gray

Many university relations and recruitment (URR) and career services functions are using artificial intelligence (AI) to streamline their processes and enhance their operations. However, there are potential legal risks adopters should consider as they move from dipping their toes into the AI pool to fully diving in.

Inaccuracy and bias are real concerns with potentially weighty consequences, especially because many AI tools learn by digesting the information users feed them or by scraping the internet, says Jeff Tenenbaum, managing partner of Tenenbaum Law Group PLLC.

AI in Career Services and Recruitment
Jeff Tenenbaum will be a presenter during the NACE Summer Learning Showcase: AI in Career Services and Recruitment. This four-session program, held on July 27 and 28 and on August 3 and 4, offers a comprehensive exploration of AI tools and their potential impacts on career services and recruitment.

Tenenbaum says that while the applicable laws were in place long before AI emerged, their application in this space is evolving, leaving users to figure out how best to adhere to them.

“We're all doing it for the first time, trying to write provisions in contracts, policies, and forms, and trying to figure out how to define what we're talking about,” he says.

“If you're going to have certain restrictions, limitations, or prohibitions on the usage of AI, even defining what that means is not that easy to do. You want to make sure it's defined broadly enough, but in a way that people understand what you mean.

“If you say in your privacy policy that you're going to use data collected only in certain ways and that you're not going to use it in another way, you have to make sure that you follow those policies, including in the AI context. So, there’s nothing new there.”

However, Tenenbaum warns, other legal pitfalls are less obvious. For instance, once a user uploads information, whether personal data, spreadsheets, contracts, articles, or anything else, the user can lose control of it and any protections attached to it. The material may remain in the AI platform indefinitely, with possible legal implications for data privacy and copyright.

“In most cases, you are providing a blanket license to the AI platform under its terms of use,” he says.

“There’s nothing new in the law in the United States to date, but it is a very new application and something that users have to be very, very mindful of. At least one of the platforms has an option users can select where they can request that whatever they upload is not put into the AI platform system. The information will only be used for their interactions with the platform and not shared with anyone else.”

In the employment context, the use of AI is particularly rife with potential legal missteps, Tenenbaum warns.

“There already are some state and local laws coming up to limit or ban the use of AI in the employment context. Those risks are potentially very significant,” he says.

“Banning employees from using AI may not be the best approach, but limiting, regulating, or requiring that any work product created using AI has to be thoroughly vetted and scrubbed by a human being and disclosing that AI was used in the creation of the content might be helpful. This also applies to contracts with consultants, freelancers, and others who are creating content for the organization.”

Except for newer state and local laws that are specific to AI, generally speaking, the same anti-discrimination laws that have been on the books since the 1960s apply, with modern variations and additions. For instance, Tenenbaum explains that if an employer uses AI to evaluate recorded job interviews and the system penalizes candidates who have an accent or a speech impediment, the employer is potentially exposed to a claim of illegal employment discrimination.

“The devil is in the details, and it depends how you’re using AI,” Tenenbaum says.

“If you are just using AI to sort applicants by geographic region or by the number of years applicants are out of college, and you narrow the usage and limit it to things where there is limited potential for discrimination or bias built in, that’s probably fine, for now. I do strongly recommend that you ensure you’re limiting usage in ways that any built-in biases won’t lead to discriminatory results.”
