Ethical Considerations for Using AI in College Recruiting

May 3, 2023 | By Kevin Gray


TAGS: best practices, nace insights, talent acquisition, technology

AI tools are used in a variety of ways to automate tasks in college recruiting and hiring processes. Some of these applications have been in use for years and have become standard parts of the process. Others are newer, and, in some cases, their uses, efficiencies, and potential problems are still being discovered.

To be sure, innovation in this arena is moving quickly. Some of the most common AI tools include:

  • AI systems that can help analyze a large volume of resumes and surface qualified candidates more quickly (a minimal screening sketch follows this list).
  • Chatbots and other automated messaging systems that can streamline communications with potential candidates.
  • AI that can provide new hires with personalized learning and development opportunities.
  • AI that can help with assessing employee engagement and predicting retention and attrition.
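To illustrate the first item on this list, here is a minimal, hypothetical sketch of keyword-based resume screening in Python. The skill list and resumes are invented for illustration; production applicant-tracking systems use far more sophisticated matching, but even this toy version shows how a rigid keyword filter can overlook a qualified candidate.

    # Minimal, hypothetical sketch of keyword-based resume screening.
    # Real applicant-tracking systems use far more sophisticated models;
    # this only illustrates the basic filtering idea and its fragility.

    REQUIRED_SKILLS = {"python", "sql", "data analysis"}  # invented job requirements

    def score_resume(resume_text: str) -> float:
        """Return the fraction of required skills mentioned in the resume."""
        text = resume_text.lower()
        matched = {skill for skill in REQUIRED_SKILLS if skill in text}
        return len(matched) / len(REQUIRED_SKILLS)

    resumes = {
        "candidate_a": "Experienced in Python, SQL, and data analysis.",
        "candidate_b": "Built dashboards with pandas and Postgres.",  # qualified, but uses different words
    }

    for name, text in resumes.items():
        print(name, score_resume(text))
    # candidate_b scores 0.0 despite relevant experience: a rigid keyword
    # filter overlooks qualified candidates, a risk Ammanath notes below.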

For job applicants, AI is another tool in the arsenal to help develop a good working draft of a resume or refine cover letters.

“We’ve been using tools that help check spelling or grammar for years and AI will be another tool to help create more tailored resumes,” says Beena Ammanath, executive director of the Global Deloitte AI Institute and leader for technology trust ethics at Deloitte.

“However, it’s important to remember that you still need a human being at the heart of the process to review and ensure the accuracy of the information generated by AI.”

There are ethical issues associated with the use of AI in the college recruiting process. Ammanath—who is also the author of Trustworthy AI and who helps lead Deloitte’s Trustworthy AI framework—says that whenever personal information comes into play, there is the possibility of bias.

“AI tools can save time and money by automating parts of the hiring process, but it’s important for organizations that use them to understand how the systems were developed and ensure that the AI algorithms are trained on diverse data sets and reflect organizational diversity goals,” she explains.

“Organizations should also look at AI as an opportunity to augment hiring and recruiting rather than automate it altogether. It’s important to have humans validate the effectiveness of the AI systems rather than blindly accept the results. And it’s equally important to have processes in place to audit and test those AI systems for bias.”
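One widely used bias test (not specific to Deloitte's framework) is the "four-fifths rule" from U.S. adverse-impact analysis: compare each group's selection rate to the most-selected group's rate and flag ratios below 0.8. The counts below are invented; the sketch only shows the arithmetic such an audit would run.

    # Minimal sketch of an adverse-impact ("four-fifths rule") audit.
    # The counts are invented; this is not tied to any vendor's tool.

    def adverse_impact_ratios(outcomes: dict) -> dict:
        """outcomes maps group -> (selected, total); returns each group's
        selection rate divided by the highest group's selection rate."""
        rates = {g: sel / total for g, (sel, total) in outcomes.items()}
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical screening results: group -> (candidates advanced, candidates screened)
    audit = {"group_a": (40, 100), "group_b": (22, 100)}

    for group, ratio in adverse_impact_ratios(audit).items():
        flag = "review for bias" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
    # group_b's ratio is 0.55, below the 0.8 guideline, which would
    # prompt a closer look at how the system screens that group.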

Ammanath cites several well-publicized instances in which AI-based hiring systems were biased against candidates based on their gender or ethnicity, or discriminated against candidates with disabilities.

“If job requirement parameters are too narrow or rigid, for example, AI systems can also easily overlook qualified candidates,” she warns.

“Every organization that uses AI in some capacity needs confidence that the tools it is using behave ethically and are aligned with their values and expectations. The onus is on all of us—consumers, businesses, governments, and others—to put guardrails in place to avert negative outcomes from AI. Conversations around trust, governance, and risk in AI systems are critically important. And the ethics and risks will vary widely by industry.”

Ammanath stresses that there are two critical factors for limiting bias in AI systems: building diverse AI teams and implementing an enterprise-wide trustworthy AI framework to ensure the development and deployment of fair, ethical, and responsible AI systems. Doing so will help organizations develop safeguards to mitigate and manage AI risks.

“AI’s ability to augment humans can spark new ways of working that blend the best of what machines do with what humans bring to the collaboration: judgment and empathy,” she says.

“That is the real opportunity. AI has the power to create an augmented workforce that challenges organizations to reconsider how jobs are designed and how the workforce should adapt.

“As AI tools become more prevalent and accessible, it’s critically important for organizations to build a strategy for AI fluency and literacy and to look at how to upskill their workforce for the AI era. Similarly, consumers—including students—need to take responsibility for educating themselves about the use of AI tools and the benefits and risks involved.”

Kevin Gray is an associate editor at NACE. He can be reached at kgray@naceweb.org.
