Artificial Intelligence

ADAPT research shows we can reduce gender bias in natural language AI

Bias can alter how an algorithm functions and result in the same assumptions being made repeatedly
Image: Shutterstock via Dennis

13 June 2022

New research from ADAPT, the Science Foundation Ireland research centre for AI-driven digital content, aims to reduce gender bias in natural language processing.

Led by research engineer Nishta Jain in collaboration with Microsoft and Imperial College London, the research will be presented at the European Language Resources Association's Language Resources and Evaluation Conference (LREC), to be held from 21-23 June in Marseille, France.

In recent years, studying and mitigating gender and other biases in natural language has become an important area of research from both algorithmic and data perspectives. However, previous work in this area has proved costly and tedious, requiring large amounts of gender-balanced training data.

The work leverages pre-trained deep-learning language models to reduce bias in a language generation context. The new approach improves efficiency, making the technology more affordable and less time-consuming, and it is designed to work across multiple languages with only minimal, heuristic-based changes.

To demonstrate this, the approach was tested on a high-resource language, Spanish, and a very low-resource language, Serbian, with positive results in both cases.
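To give a flavour of what a language-specific heuristic might look like, the sketch below implements a simple rule-based gender-swapping step of the kind often used to balance training data without collecting new annotations. The word pairs, function name and overall design here are illustrative assumptions for Spanish, not the researchers' actual method.

```python
# Hypothetical sketch: a rule-based gender-swap heuristic for data balancing.
# The lexicon below is a tiny illustrative sample, not a real resource.
SWAP_PAIRS = {
    "el": "la", "la": "el",
    "él": "ella", "ella": "él",
    "ingeniero": "ingeniera", "ingeniera": "ingeniero",
    "doctor": "doctora", "doctora": "doctor",
}

def swap_gender(sentence: str) -> str:
    """Return a counterfactual sentence with gendered tokens swapped,
    so each training example gains a gender-balanced counterpart."""
    tokens = sentence.lower().split()
    return " ".join(SWAP_PAIRS.get(token, token) for token in tokens)

print(swap_gender("él es ingeniero"))  # -> "ella es ingeniera"
```

Because the heuristic is just a swap lexicon plus a tokeniser, porting it to another language in principle only requires a new word-pair list, which is one plausible reading of the "minimal changes" the researchers describe.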

Explaining the research, Jain said: “From finding a romantic partner to getting that dream job, artificial intelligence plays a bigger role in helping shape our lives than ever before. This is the reason we, as researchers, need to ensure technology is more inclusive from an ethical and socio-political standpoint. Our research is a step in the direction of making AI technology more inclusive regardless of one’s gender.”

TechCentral Reporters
