Addressing AI Bias: Diversity-by-Design

Out of the many themes in HR, diversity frequently gets the spotlight. Yet seeing advanced analytics or AI solutions applied to diversity is rare, especially when compared to themes like turnover and absenteeism. Of course, putting numbers on gender, ethnicity, etc. is tricky: the very insights to counter discrimination might also be used to discriminate. As a result, diversity calls for less of a head-on approach than we are used to. In this article, I will explore how we may attain diversity-by-design in model development.

Diversity and AI: an inconvenient truth

When it comes to diversity, many interventions seem aimed at tackling symptoms and raising awareness. That alone can bring much-needed positive change.

Still, improving diversity tends to be done in broad strokes. And while analytics could provide new insights for more targeted action, the required data often sits squarely in the no-go zones of privacy and ethics. It would seem that analytics and diversity do not mix.

When they do mix, however, analytics can have a profound impact on diversity. And judging by the effects on gender diversity that have been reported in the media, things do not look good.

An often-heard explanation is that models reinforce or even amplify the unconscious bias of their developers. While I think that is too strong a statement, AI does end up perpetuating diversity issues through human errors in its design. (And such design flaws were ubiquitous well before the rise of AI: consider car safety and office temperatures, for example.) A popular proposed solution is to increase diversity in analytics teams: surely women would have spotted the flaws in those widely reported cases!

Indeed, they may well have, and I am all for increasing diversity and the range of viewpoints in analytics teams. Increasing diversity, however, is not a perfect solution for unbiased AI. If it were, we could claim that “this model was designed by a diverse team and is therefore bias-free.” Would you accept that without a second thought?

We are dealing with human errors, and we all have our biases and blind spots. Employing diverse teams will not fully solve the problem of biased models. And while diverse views and peer review tend to improve the quality of work, keep in mind that analyses done by a full committee (to eliminate any potential bias) will be slow or even ineffective.

Understanding the problem

So should we give up on analytics when it comes to diversity? I don’t think so. Nor does the answer lie solely in demanding that developers think even more carefully about data selection. Instead, we should address how model performance is attained and assessed.

The examples mentioned earlier were developed for tasks that initially had nothing to do with a diversity agenda. It is understandable that diversity was not factored in during development, yet the resulting models actually seem to lean towards reducing gender diversity and maintaining stereotypes. To do something about that, we first need to understand how models reach these outcomes. 

Machine learning models are based on patterns in data. Historical data. These data tend to represent the current state of an organization and some years prior. Looking at that time frame, many organizations find that gender diversity, e.g. for key positions, could be better. For the sake of the argument, let’s assume that the average key position holder is most likely male.

Figure 1 – Example of a faulty conclusion by AI: "male candidates are best suited for key positions"
The current key position holders come from a starting population that was more male-dominated than the organization is today. To increase diversity, more women have been recruited in recent years. Because of that, the fraction of men among key position holders is now significantly larger than the fraction of men in the organization as a whole. The algorithm (or AI) uses differences between those who hold key positions and those who do not to predict fitness for key positions, and so falsely concludes that gender is a distinguishing feature. (If gender were truly irrelevant, the gender split among key position holders would mirror the roughly 50/50 split across the organization.)

An algorithm does not know the context and takes the data at face value. As a result, interventions to improve diversity may even contribute to a bias in AI that runs counter to your agenda. In the example given in Figure 1, recruiting women caused a significant difference in gender distributions between those in key positions and other employees, which actually contributed to an unwanted bias in the model when it came to judging candidates for key positions.
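To make that mechanism concrete, here is a minimal sketch in Python on simulated data. Everything in it is hypothetical: the workforce is assumed to be roughly gender-balanced today, while key position holders were appointed from an older, male-dominated population, so a model trained on this history treats gender as a "predictive" feature.

```python
# Minimal sketch with simulated (hypothetical) data: the workforce is now ~50/50,
# but key positions were filled from an older, male-dominated population, so
# gender separates the groups for purely historical reasons.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

gender = rng.binomial(1, 0.5, n)                # 1 = male, 0 = female; ~50/50 today

# In the historical data, men are far more likely to already hold a key position.
key_position = np.where(gender == 1,
                        rng.binomial(1, 0.20, n),   # 20% of men
                        rng.binomial(1, 0.05, n))   # 5% of women

# Train on gender alone: the model happily treats it as a real signal.
model = LogisticRegression().fit(gender.reshape(-1, 1), key_position)
print("coefficient learned for gender:", round(float(model.coef_[0][0]), 2))  # clearly > 0
```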

Of course, allowing the model to use gender as a feature is ill-advised, but there is no easy fix for AI gender bias. Even if you exclude gender, an algorithm can pick up other differences between genders instead, such as working part-time or having a sorority on your resume.
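One way to make that risk visible is to test how well gender can still be reconstructed from the "gender-free" features. The sketch below assumes a hypothetical HR extract (an employees.csv with gender and key_position columns); the file and column names are illustrative only.

```python
# Leakage check on a hypothetical HR extract: drop gender, then see how well the
# remaining features still predict it. An AUC near 0.5 suggests little leakage;
# values well above 0.5 mean proxy features still encode gender.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("employees.csv")                        # hypothetical file name
X = pd.get_dummies(df.drop(columns=["gender", "key_position"]))
y = (df["gender"] == "female").astype(int)               # assumes a text gender column

gender_auc = cross_val_score(GradientBoostingClassifier(), X, y,
                             cv=5, scoring="roc_auc").mean()
print(f"Gender recoverable from 'gender-free' features: AUC = {gender_auc:.2f}")
```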

Fixing the issue

In criticizing models with unwanted or even unethical biases, people tend to focus on data selection. Granted: data selection is crucial. But we also need to ask ourselves if we are training our models on the right target conditions. Focusing on this aspect of model development allows for formalization as a technical solution and has the benefits of:

  • standardization, as it can be implemented across the organization for model development in general (not just for HR)
  • transparency regarding the impact of the model on diversity goals (and potentially other goals as well), as models are explicitly scored against these goals during development

Generally, the only thing we demand from a model is to accurately match historical outcomes based on selected features. Implicitly we instruct AI that the ends (i.e. matching example outcomes) justify the means (i.e. which features to use) and that our history (i.e. data used) represents our future goals. This works well in a strictly mathematical setting, where a context is assumed to be constant and ideals and ethics are best kept outside the equations.

Real-world problems often introduce additional complicating factors. The shortest path to a solution may not be the best one, and we need to be aware that AI selects a path according to the definition of “best” that we provide. In developing AI, the shortest path often becomes the algorithm’s best outcome simply because we did not fully specify the target conditions. We need to provide those explicitly.

Consider the example of predicting whether a candidate is suitable for a key position: we want the model to aim for predicting this well on available (historical) data, but to do so using features that keep the correlation between gender and being deemed suitable for key positions as close to 0 as possible. In other words: we want identification of key position holders to be good, but identification of gender using the same features should be lousy. Because we know that the model uses employee data, we add this constraint to our model development. The constraint is derived from the strategic goal of gender parity among employees and formalized for those performing analyses on employee data.

Figure 2 – Instructing the AI more explicitly: model development with additional constraints for key position candidate identification
One of the organization’s strategic goals is gender parity. The primary data source for the task at hand (identification of key position candidates) is associated with this goal. The derived constraint for models using these data is that their performance on identifying gender should be minimized. Normally, a model would only need to maximize its performance on the given task (horizontal axis). This organization has opted for an easily implemented approach to minimize the chance of gender disparity: while the model is instructed to find the best possible performance for identifying key position candidates, a copy of the model using the same data is checked against gender identification, which should remain as close to chance level as possible (vertical axis).
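A minimal sketch of what this dual check could look like in practice follows; the column names, feature sets, and acceptance threshold are all hypothetical. Each candidate feature set is scored twice with the same model class: once on the real task (to be maximized) and once on gender (to stay close to chance).

```python
# Sketch of the two-axis check from Figure 2 (hypothetical data and thresholds):
# maximize performance on the real task while keeping gender identification
# from the same features as close to chance (AUC 0.5) as possible.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def dual_score(X: pd.DataFrame, df: pd.DataFrame) -> tuple[float, float]:
    """Return (task AUC, gender AUC) for one candidate feature set."""
    task = cross_val_score(GradientBoostingClassifier(), X, df["key_position"],
                           cv=5, scoring="roc_auc").mean()
    gender = cross_val_score(GradientBoostingClassifier(), X, df["gender_flag"],
                             cv=5, scoring="roc_auc").mean()
    return task, gender

df = pd.read_csv("employees.csv")                        # hypothetical HR extract
df["gender_flag"] = (df["gender"] == "female").astype(int)

candidate_sets = {                                       # illustrative feature sets
    "all_features": ["tenure", "part_time", "performance_score", "trainings"],
    "without_part_time": ["tenure", "performance_score", "trainings"],
}

for name, cols in candidate_sets.items():
    task_auc, gender_auc = dual_score(pd.get_dummies(df[cols]), df)
    accepted = abs(gender_auc - 0.5) <= 0.05             # "close to chance" threshold
    print(f"{name}: task AUC {task_auc:.2f}, gender AUC {gender_auc:.2f} "
          f"-> {'accepted' if accepted else 'rejected'}")
```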

Does this complicate things? Surely. Does it guarantee the elimination of gender bias? Unfortunately not, but it does surface an estimate of the model’s gender bias well before the model is taken into production. Not only that, but it explicitly instructs the algorithm to find features in the data that tell us the most about key position holders, yet the least about gender: diversity-by-design. Most importantly, it reduces the risk of only discovering adverse effects on diversity once a model has been released.

Concluding

A machine learning algorithm, or AI application if you so prefer, does not bother itself with our ideals or ethical considerations. AI is simply not there yet. We need to be aware of this lack of contextual knowledge and instruct models accordingly. This is especially important for diversity, as the scenarios we strive for tend to differ from the history we hold up to AI as an example to emulate. 

The most frequently heard solution is to select data with more care. That leaves much to the judgment of developers and offers limited opportunities for standardization and transparency. It also does not guarantee that adverse effects on known organizational goals, such as gender parity, will be flagged beforehand.

The answer, I think, has been staring us in the face for some time. We need diversity-by-design, which we can achieve by selecting data with care and by being more specific in setting targets for model performance. In order to standardize and implement that, we need to:

  1. Identify strategic people goals, as well as the data through which these goals could be affected if a digital solution were to use them;
  2. Formalize constraints for model development, depending on which data are used, and implement these in tooling and ways of working (see the sketch below).
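As an illustration of point 2, such a formalization could start as a shared constraint registry that maps each data source to the checks any model built on it must pass before release. Everything below (names, structure, and thresholds) is a hypothetical sketch, not an established standard.

```python
# Hypothetical constraint registry: each data source is linked to the strategic
# goals it can affect and the checks a model using it must pass before release.
CONSTRAINT_REGISTRY = {
    "employee_master_data": {
        "strategic_goals": ["gender parity in key positions"],
        "checks": [
            # Gender should be barely recoverable from the model's features.
            {"metric": "roc_auc", "protected_target": "gender",
             "allowed_range": (0.45, 0.55)},
        ],
    },
    "absence_records": {
        "strategic_goals": [],
        "checks": [],
    },
}

def required_checks(data_source: str) -> list[dict]:
    """Return the bias checks a model using this data source must pass."""
    return CONSTRAINT_REGISTRY.get(data_source, {}).get("checks", [])

print(required_checks("employee_master_data"))
```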

Looking at these two points, it soon becomes apparent that this does not concern developers alone. Management, data owners, IT, and analytics professionals (and possibly others) are all needed to ensure that we formalize and implement the right additional checks and balances for more responsible model deployment.

Shaking our heads at other companies’ missteps and pointing at developers will not bring us to a mature solution to biased AI. Let’s consider this a call to action and band together! 
