The quest for ‘deeply inclusive’ AI

Quest for Quality conference hears that cultural context is necessary for fair AI-driven systems
Davar Ardalan, IVOW

12 October 2018

While artificial intelligence (AI), machine learning and automation continue to reshape software development and testing, the question of how AI is trained was raised by Davar Ardalan, founder and storyteller-in-chief of IVOW, at the Quest for Quality conference in Dublin.

Ardalan argued that in the near future AI will be making decisions about real people and, as such, should have a cultural context to ensure fairness and ethical treatment.

“My mission is to bring ‘we the people’ to AI,” said Ardalan.

She said the journey to ensure that beneficial AI, informed by cultural context, can be developed has only just begun.

Ardalan quoted the writer Michelle Chang, who said: “AI systems are built by human beings with implicit biases, so it is no wonder that bots also have biases.”

Somewhat presciently, this chimed with the news from Reuters that Amazon has scrapped an HR tool for selecting candidates after it showed a bias towards men, one suspected to have come from its developers.

“We need to protect the human against the machine,” said Ardalan.

However, she highlighted that there is insufficient cultural information currently digitised to enable this ambition.

By way of example, she showed a picture analysed by Google’s AI, which correctly identified it as a Saint Patrick’s Day parade in a US city, right down to the kilts and bagpipes.

By contrast, when shown a picture of a Native American ritual, the AI was hardly able to recognise the people in the photo, as they were wearing traditional dress it did not recognise.

Ardalan quoted AI scholar and member of the Crow nation, Wolfgang Victor Yarlott, who said: “The most pressing concern with regard to social and cultural biases is that a failure to adequately address them results in weaker models and a poorer understanding of human cognition… Due to this, we are less able to draw conclusions about and model how humans engage and interact with media, and the systems we design are less flexible, and thus, less useful.”

“Deeper understanding of every customer’s personal story and cultural background is vital for competing in a marketplace increasingly dominated by AI,” said Ardalan.

“AI today represents a fairly narrow view point that excludes the views of tens of millions of people,” she argued.

What she terms deeply inclusive AI will require the digitisation of masses of cultural information, from folk and fairy tales to cultural histories and experiences.

“We want our smart devices to know who we are – we will still be human in the future, we will not be robots,” she asserted.

This drive for AI to be deeply inclusive and culturally relevant is being supported by projects such as MIT’s Genesis storytelling project, among others. There are also cooperative efforts from IBM Watson, Google, Amazon and Baidu to build this cultural resource, digitally enabled for machine access, so that AIs can inform themselves as they learn, all with the aim, says Ardalan, of creating “a more inclusive society”.


TechCentral Reporters
