
What’s in a word? Addressing gender inequality in AI

By Clara Bernstein

Natural Language Processing (NLP) is the branch of AI that underpins any technology requiring an understanding of text, including CV screening, essay grading, search queries, and translation. NLP uses machine learning to train models that reveal the structure and meaning of language.

ChatGPT, for example, is trained on a 570GB dataset taken from the internet, including the entirety of the English Wikipedia. However, the content of that training data produces shocking biases, both in the responses users see at the front end and in the word computations at the back end. The consequence of bias is clear: it reinforces inequality in society.

While completely eliminating bias in NLP may prove challenging, reducing it is certainly possible. By turning prose into a mathematical problem, data scientists can encourage inclusivity.

The basics of gender bias in NLP

Bias creeps into NLP applications through word embeddings, a fundamental component that enables machines to interpret language. Word embeddings represent words as vectors in a shared vector space, where the distance between vectors reflects the relationships and associations between words.
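To make this concrete, here is a minimal sketch of the idea using tiny made-up vectors rather than a trained model (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from large corpora). Cosine similarity is one common way to measure how closely two words are associated.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two word vectors: closer to 1 means more closely associated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy three-dimensional embeddings, invented purely for illustration.
embeddings = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

# Words that appear in similar contexts end up close together in the vector space.
print(cosine_similarity(embeddings["king"], embeddings["man"]))
print(cosine_similarity(embeddings["queen"], embeddings["woman"]))
```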

Figure 1: Word vector visualisation

Google Translate, for instance, associates words like ‘CEO’, ‘boss’, and ‘ambitious’ with men, and ‘nurse’, ‘secretary’, and ‘helper’ with women. When translating specific sentences, this bias is even clearer. For example, the Danish gender-neutral term for a romantic partner translates to the English ‘boyfriend’ or ‘girlfriend’ depending on the surrounding description. If the partner is described as a CEO, the word translates to ‘boyfriend’. If the partner is described as a caregiver, the word translates to ‘girlfriend’.

Similarly, ChatGPT produces different results for the prompts ‘every man wonders’ and ‘every woman wonders’. According to ChatGPT, every man wonders about their purpose in life, while every woman wonders what it’s like to be a man… More worrying still, ChatGPT advises women entering the workforce to be polite, dress appropriately, and respect their superiors. Men, on the other hand, are advised to focus on the business lifecycle and making money.

It's all about the context

BERT, a language model Google uses in almost every English-language search query, demonstrates how context affects bias. BERT’s masking functionality predicts hidden words in a sentence. For example, in the sentence ‘The president went for a walk because [MASK] wanted to get some exercise’, BERT predicts that the masked word is ‘he’ with a probability of 98.7 percent. Interestingly, adding New Zealand to the sentence (a country with a strong record of women in its highest political offices) decreases the ‘he’ probability to 90 percent. The results remain staggeringly skewed, but they show that greater political equality feeds back into the models through the context and language we use.
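This behaviour can be reproduced with a masked-language-model query. Below is a minimal sketch using the Hugging Face transformers library; bert-base-uncased is an assumed checkpoint, as the article does not specify which BERT variant was tested, so the exact probabilities will differ.

```python
from transformers import pipeline

# bert-base-uncased is an assumed model choice; the probabilities quoted in the
# article may come from a different BERT variant.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "The president went for a walk because [MASK] wanted to get some exercise.",
    "The president of New Zealand went for a walk because [MASK] wanted to get some exercise.",
]

for sentence in sentences:
    print(sentence)
    # Restrict predictions to the two pronouns we want to compare.
    for prediction in fill_mask(sentence, targets=["he", "she"]):
        print(f"  {prediction['token_str']}: {prediction['score']:.3f}")
```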

Bias mitigation starts with proactive testing, for example through similarity functions that calculate the distance between word vectors. The NLP library spaCy shows that ‘nurse’ is more strongly associated with female pronouns and ‘president’ with male pronouns. For ‘nurse’, the similarity score with ‘she’ is 47 percent, compared to 31 percent for ‘he’. The similarity score between ‘president’ and ‘he’ is 28 percent, compared to 20 percent for ‘she’.
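A minimal sketch of this kind of check with spaCy is shown below. The en_core_web_md model is an assumption (the article does not name the model used), so the scores it produces will not match the figures above exactly.

```python
import spacy

# Requires a model with word vectors, e.g.: python -m spacy download en_core_web_md
# en_core_web_md is an assumed choice; scores vary by model and version.
nlp = spacy.load("en_core_web_md")

for word, pronoun in [("nurse", "she"), ("nurse", "he"), ("president", "she"), ("president", "he")]:
    # Token.similarity returns the cosine similarity of the two word vectors.
    score = nlp(word)[0].similarity(nlp(pronoun)[0])
    print(f"{word} ~ {pronoun}: {score:.2f}")
```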

If you can’t remove it, reduce it

Removing bias from language is arguably an impossible task, but solutions exist to debias text by treating it as a linear algebra problem. This requires clearly defined gender-neutral words, gender-specific words, and definitional pairs. Definitional pairs are gender-specific words that cannot be misinterpreted: ‘man’ and ‘woman’ are gender-specific, but not definitional, as ‘man’ can also be read as ‘mankind’.
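As an illustration, the three categories might be declared as simple word lists like the hypothetical ones below; real debiasing pipelines use curated lists containing hundreds of words.

```python
# Hypothetical word lists, for illustration only.
gender_neutral = ["nurse", "president", "engineer", "teacher"]

# Gendered, but open to other readings ('man' can also mean 'mankind').
gender_specific = ["man", "woman", "wife", "husband"]

# Definitional pairs: unambiguously gendered, used to define the gender direction.
definitional_pairs = [("she", "he"), ("her", "his"), ("herself", "himself")]
```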

In this approach, bias is revealed when words that shouldn’t have a gender connotation sit close to words with a gender association – like ‘nurse’ appearing near ‘wife’ or ‘she’. In the diagram below, gender-neutral words are displayed in the top half. Gender-specific words, which carry a gender association but not a definition (like ‘wife’, rather than ‘she’), are in the bottom half.

Figure 2: The vectors represented on the “he-she” space. Those above the x-axis are gender-neutral terms; those below the x-axis are gender-specific.

Linear algebra solves the problem by moving word embeddings so that gender-neutral words are equidistant from all gender-specific words and definitional pairs. In our experiments applying this solution as open-source code, the male scoring for ‘president’ reduces by 93 percent while the male scoring for ‘nurse’ increases by 104 percent. This clearly shows the value of the solution, which can be embedded into NLP models and transferred to other areas of bias such as racism, ableism, and ageism.
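A minimal sketch of the core step is below, using tiny made-up vectors rather than trained embeddings: define a gender direction from a definitional pair, then project that component out of a gender-neutral word. This is in the spirit of hard-debiasing approaches such as Bolukbasi et al.’s; a full implementation would also ‘equalise’ the gendered pairs and average the direction over several definitional pairs.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def neutralize(vec, gender_direction):
    """Remove the component of a gender-neutral word that lies along the gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return vec - np.dot(vec, g) * g

# Toy vectors for illustration; a real pipeline would load trained embeddings.
she   = np.array([0.1, 0.9, 0.2])
he    = np.array([0.1, 0.2, 0.9])
nurse = np.array([0.5, 0.8, 0.3])   # leans towards 'she' in this toy space

gender_direction = she - he

print("before:", cosine(nurse, she), cosine(nurse, he))   # skewed towards 'she'
nurse_debiased = neutralize(nurse, gender_direction)
print("after: ", cosine(nurse_debiased, she), cosine(nurse_debiased, he))   # roughly equal once the gender component is removed
```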

The future is bright… And debiased

The AI space is expanding rapidly, raising new questions that demand complex answers. Alongside testing and developing new solutions, we’re actively helping our clients to navigate this new normal through multiple AI capabilities and expertise, including Generative AI. It’s crucial that these models are unbiased and inclusive.

A combination of technical, regulatory, and individual efforts will build momentum towards greater inclusivity: all members of society share a collective responsibility to beat the bias. Major change lies in the hands of tech giants and of regulators with the power to standardise, audit, and regulate NLP models in a decentralised way. This involves making algorithms and training datasets widely available, applying and publicly declaring debiasing strategies, establishing robust audit mechanisms, and building a diverse pool of talent to make key decisions on model design.

Ultimately, the input dictates the output. Removing bias in the training dataset means addressing the root cause of bias through political and social change. Step changes towards a fairer society will play a role in reducing the bias in NLP models, and the effect of NLP on unconscious bias – creating a virtuous circle that enables greater trust in the technology.

Sources

Figure 1: Getting started with NLP-part3 (Word Embeddings)

Figure 2: Tackling Gender Bias in Word Embeddings

About the author

Clara Bernstein, PA data science expert
