Good Intentions, Ethics Washing, and PR in the Hiring Tech Market

September 27, 2021 | By Chris Miciek

TAGS: diversity and inclusion, member voices, strategic planning, technology, trends and predictions

It would stand us all in good stead to remember that infatuation with high-tech social sorting emerges most aggressively in countries riven by severe inequality and governed by totalitarians.

– Virginia Eubanks, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor”

How can we engage constructively with vendors and tech partners in the career development and hiring space?

A major concern that has emerged in the recent artificial intelligence/machine learning (AI/ML) push is ethics washing: companies publicly claim to develop and/or use technology ethically without taking substantial steps to ensure their tools are actually developed that way. Like greenwashing, this practice allows organizations with opaque processes and operations to assure their audiences that they are not bad actors but are contributing to societal good. Often, however, the only evidence of this is website pledges and PR announcements. Is such signaling to be trusted?

Rational discrimination does not require class or racial hatred, or even unconscious bias, to operate. It only requires ignoring bias that already exists. When automated decision-making tools are not built to explicitly dismantle structural inequalities, their speed and scale intensify them.

- Eubanks, Page 190

Note that the workforces of many tech firms are not representative of the U.S. population; this can undermine attempts to create digital tools that are unbiased and nonharmful. In the wake of the protests following the murder of George Floyd and others in 2020, many tech companies pledged to dedicate resources to diversifying their workforces in an effort to include more voices in the development process. How are these companies doing in keeping those promises? A recent review by Blendoor found that most were not meeting their pledges, and some were even losing ground.

Of course, there are companies that are doing good. How do we identify them? And how do we press the others who want to work with us to do better?

First, those of us working in career development roles need to acknowledge and accept responsibility for trying to fuse competing values into our work and the processes we teach and manage. Hiring, like the rest of modern business, takes its cues from Frederick Winslow Taylor, whose work from the late 19th and early 20th centuries has informed much of management science since. Efficiency holds the place of primary virtue. In Taylorism, workers are treated not as people but as organic parts of the machine: they cannot be trusted and must be managed, studied, and tweaked to improve productivity. Taylor’s work is ultimately dehumanizing, at least for employees, but it sits at the heart of business thinking and is integral to how practitioners in fields like industrial and organizational psychology are taught and approach their work. It is the unexamined, unchallenged fundamental. This thinking dominates our contexts.

Contrast the values of Taylorism with what dominates career development. For those of us who work with people to help them grow their careers, our values tend to center on the human. Our one-on-one interactions, programming, and planning focus on the student and on how to reach them and help them grow. Why? So they can get jobs and improve our metrics, sure, but more so that they may find increased access to opportunities and live more fulfilling lives. Beyond that, as a field and an association we were quick to speak out on behalf of DACA students. We decry, resist, and root out the ways biases like racism become systematized. We create safe spaces and learn about microaggressions. Then we reform and update workshops, publications, and language to advance these efforts. This is all human-centered work that reflects our belief that people can grow and change. But this work and these values sit in tension with the Taylor-shaped environments that dominate the workplace. We contend with, and even pursue, efficiency, but only up to a point; it stands to the side, a consideration that may help us reach a few more students in the face of tight budgets and limited staffing.

A dangerous form of magical thinking often accompanies new technological developments, a curious assurance that a revolution in our tools inevitably wipes the slate of the past clean.

- Eubanks, Page 183

A similar debate emerged during the pandemic over learning management systems and online proctoring software, which assume that students are neurotypical and feel no added pressure from a camera watching them while they take an exam.

For career development, the specifics are a bit different, but the underlying issues remain. If the hiring system has long been problematic, are we solving its problems by adding layers of technology? Simply saying a system or tool reduces bias doesn’t make it so, and such claims, made without evidence, should be suspect when development teams and auditors are not diverse. In a recent piece for EDRi titled “Beyond Debiasing: Regulating AI and Its Inequalities,” the authors point to the hazards of framing AI’s problems as narrowly technical while ignoring the broader forces in society and organizations that create the bias in the first place.

How can we maintain the human-centered aspect of our work? One way is to model the Career Readiness Competencies we put in front of students. Specifically, we can exercise the “Critical Thinking” and “Equity & Inclusion” competencies when engaging with technology. We can assess the ethical and downstream impacts of our technology choices for our offices and operations. What does our use of these technologies teach students and applicants about their value and about how we, our offices and institutions, value them (or don’t)? Are students worth our time? Are we teaching students to interact with us and the world in ways that are human-centered? Or are we ultimately fostering a dependency on these technologies?

New technologies develop momentum as they are integrated into institutions. As they mature, they become increasingly difficult to challenge, redirect, or uproot.

- Eubanks, Page 187

To help, below are questions to consider as starting points for internal conversations and for use with current or prospective vendors.

Internal Discussions

  1. What is the pain point that this technology is designed to address?
  2. How exactly does the proposed technology address the identified problem? Are there gaps? What are they, and how will they be addressed?
  3. What other options were considered? Why were they not chosen?
  4. How will the technology affect student/alumni engagement with career development staff? Where can it go wrong? Is there a plan to identify and mitigate those problems?
  5. If the technology is student facing and students reject it, what is the response plan? Are there channels in place that can identify a problem and notify the correct personnel?
  6. How will the technology affect student behavior both with the career services office and later? What does introducing this technology teach the student? Are there unintended messages being sent?

Vendor Discussions

  1. Does the vendor have a DEI statement that addresses its own staffing and product development process? Is it robust and aligned with department and institutional values? Is the vendor open to discussing gaps or concerns?
  2. What evidence can the vendor provide that its DEI commitments have an impact on the organization’s hiring, operations, product development, and assessment cycles?
  3. Can the vendor explain how its AI works and why it chose to use AI to solve the problem it has identified?
  4. How resource intensive is the system? For example, can the vendor provide data on how many people are needed to train its algorithms?
  5. What assumptions are built into the algorithm? Into the starting dataset? Who trains the model going forward?
  6. If the career center is providing training data for the algorithm, does that training pertain exclusively to that school’s populations, or is it applied to other schools as well?
  7. Are the outcomes regularly audited to ensure everything is moving in a more equitable direction for everyone? Who conducts those audits? How often do they occur, and how are the data used to effect meaningful change in the algorithm? (A minimal sketch of one such outcome check follows this list.)
  8. Who has access to user data? What safeguards does the vendor have in place to protect user data, and what would happen to user data if the organization were acquired by another company? Is any of that data, whether individualized or aggregated, shared or sold to other organizations? What options are in place for students or other users to protect their data and privacy?
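
To make the audit question in item 7 concrete, here is a minimal sketch of one kind of check an outcome audit might include: comparing selection rates across applicant groups against the EEOC “four-fifths” rule of thumb for adverse impact. The group labels, counts, and function names are hypothetical, and a real audit would go well beyond this single metric.

```python
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) pairs; selected is True or False."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_flags(rates):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    benchmark = max(rates.values())
    return {group: rate / benchmark < 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes: (applicant group, advanced to interview?)
outcomes = ([("Group A", True)] * 40 + [("Group A", False)] * 60 +
            [("Group B", True)] * 25 + [("Group B", False)] * 75)

rates = selection_rates(outcomes)   # {'Group A': 0.40, 'Group B': 0.25}
flags = four_fifths_flags(rates)    # {'Group A': False, 'Group B': True} -> possible adverse impact
print(rates, flags)
```

A vendor that audits its outcomes should be able to share results along these lines, name the groups and metrics it tracks, and describe what actually changes in the algorithm when a disparity appears.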

The window for us to engage deeply and shape a future that we want, and that is healthy for our students, is closing. Technology has a place in that future, but whether the tools we use truly promote good or reinforce a damaging status quo depends on the choices we make now. Will we master an understanding of technology in order to use it well, as we teach students to do through the Career Readiness Competencies? Or will we consign our students and ourselves to a future as cogs in a machine?

Chris Miciek is director of the Center for Career Success at Thomas Jefferson University. He has served on multiple national (NACE) and regional (EACE, MWACE) leadership committees and focus groups regarding technology in higher education. Miciek has also authored numerous articles and delivered multiple presentations on emerging technology within career services, especially on the topic of artificial intelligence. He can be reached at chris.miciek@jefferson.edu.
