Artificial intelligence and identity

Following a conference appearance in Dublin, various contributors have come together to tackle some of the issues around AI development, such as identity and bias, to ensure the systems are ethical and inclusive

12 November 2018

By (top left) Juanisa McCoy, Raj Subramanian, Anna Royzman, Boyang “Albert” Li, (bottom left) Adam Leon Smith, Davar Ardalan, Kee Malesky, Wolfgang Victor Yarlott

 

What is Artificial Intelligence (AI) Identity in a social and cultural sense? How do we build deeply inclusive AI in a biased world? As data-driven systems accelerate into every part of our everyday lives, we will need to focus on best practices for how machines and algorithms interact with our social and cultural identities.
At the recent Quest for Quality conference in Dublin, Ireland, we met like-minded thinkers who are pushing the boundaries in this nascent space. Post-Dublin, we came together to share our collective thoughts on Deeply Inclusive AI and ways to train machines to be ethical and inclusive. Also providing insight: two leading AI and storytelling researchers in the US, including Wolfgang Victor Yarlott.

Wolfgang Victor Yarlott, Florida International University PhD student, and author of Old Man Coyote Stories: Cross-Cultural Story Understanding in the Genesis Story Understanding System:
The primary way in which social and cultural biases are going to appear in machine learning is in the data collected. For example, if you download forum posts to use as data, you need to understand what kind of cultural background the posters come from and what local subculture has developed on the forum.

The most pressing concern with regard to these biases is that a failure to adequately address them results in weaker models and a poorer understanding of human cognition. A system to extract information trained only on articles from the Economist is going to be less capable when used out-of-domain on articles from a student newspaper. Due to this, we are less able to draw conclusions about and model how humans engage and interact with media, and the systems we design are less flexible and, thus, less useful.
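A minimal sketch of how that weakness can be measured: train a simple text classifier on one corpus, then compare its accuracy on held-out data from the same source against data from a different source. The load_corpus helper, corpus names, and labels below are placeholders for illustration, not anything from the article.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Hypothetical loaders: each returns a list of article texts and a list of topic labels.
economist_texts, economist_labels = load_corpus("economist")
student_texts, student_labels = load_corpus("student_newspaper")

# Hold out part of the in-domain corpus so the comparison is fair.
train_x, test_x, train_y, test_y = train_test_split(
    economist_texts, economist_labels, test_size=0.2, random_state=0
)

model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
model.fit(train_x, train_y)

in_domain = accuracy_score(test_y, model.predict(test_x))
out_of_domain = accuracy_score(student_labels, model.predict(student_texts))

# A large gap between these two numbers is the "weaker model" described above.
print(f"in-domain: {in_domain:.2f}  out-of-domain: {out_of_domain:.2f}")
```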

Juanisa McCoy, Davar Ardalan and Kee Malesky of the AI and Storytelling start-up IVOW:
At IVOW, we are looking for Deeply Inclusive AI. We believe that an effective fusion of AI, culture, and storytelling will help diminish bias in algorithmic identification and develop inclusive AI software and practices. Inclusive AI accommodates models of various backgrounds, genders, ages, orientations, lifestyles, philosophies, communication practices (including visual and audio cues), and behavioural and psychological states.

The issue we are discovering in this interaction is that machine learning models are being affected by social and cultural biases. For the foreseeable future, machines can only know what we teach them, and historically available data is usually skewed towards dominant cultures; the data reflects the bias of our norms. Most of our technological efforts that rely on AI as a solution have good intentions, but sometimes even this is not the case.

(Video: Kate Crawford, “The Trouble with Bias”, keynote at NIPS 2017)

Bias affects algorithms and opens the door to the following perception and social-interaction issues:

  • Stereotypes
  • Recognition problems
  • Denigration
  • Underrepresentation
  • Ex-nomination
  • Social harm/harassment
  • Identity discrimination

When looking at the effects of bias, we have to ask ourselves “what is our identity?” and “what are we missing?”. Take the data projects in our everyday and professional lives, such as Amazon’s development of a recruitment AI tool to cope with high hiring demand. After a year, the company noticed the tool was biased against women and scrapped it. Had the team asked questions about its previous and current hiring practices, and examined the tool’s output on a small but diverse batch of candidates, it might have found the bug sooner or prevented the issue altogether.
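As a hedged illustration of that kind of check (a generic sketch, not a description of Amazon’s actual system), one could score a small, demographically diverse batch of candidates and compare selection rates per group before trusting the tool. The column names, data, and the 0.8 threshold below are illustrative assumptions.

```python
import pandas as pd

# Placeholder audit batch: 'group' is a demographic attribute recorded only for
# auditing, 'selected' is the model's shortlisting decision (1 = shortlisted).
batch = pd.DataFrame({
    "group":    ["women", "women", "women", "men", "men", "men", "men"],
    "selected": [0, 0, 1, 1, 1, 0, 1],
})

rates = batch.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"selection-rate ratio: {ratio:.2f}")

# A common rule of thumb (the "four-fifths rule") treats ratios below 0.8
# as evidence of adverse impact worth investigating.
if ratio < 0.8:
    print("Warning: possible adverse impact against", rates.idxmin())
```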

In this sense, there are two major concerns with regard to social and cultural biases. One is the bias of the user who provides the data used to train the AI, and the other is the fact that AI is a “black box”: we currently have little visibility into how it learns and what relationships it draws from the data provided by the user. To solve the problem of user biases in data, we as technologists should take the holistic user experience into consideration when we develop machine learning solutions and algorithms. There needs to be a conscious effort in our methods, and we must research the effects among a diverse user group to understand our needs, biases, and identities. Possible ways to ensure that we keep our data honest and harmless include:

  • Hire ethicists who work with corporate decision makers and software developers
  • Develop a code of AI ethics that lays out how various issues will be handled
  • Have an AI review board that regularly addresses corporate ethical questions
  • Develop AI audit trails that show how various coding decisions have been made
  • Implement AI training programs so that staff can operationalise ethical considerations in their daily work
  • Provide a means for remediation when AI solutions inflict harm or damage on people or organisations

As for the problem of AI being a “black box”, there is a lot of research going on in this area and the progress seems promising. For example, a team of researchers recently taught an AI system to justify its reasoning and point to the evidence on which it based its decisions. On the subject of bias, IBM recently announced that it is launching a “Trust and Transparency service” to detect bias in AI-based systems. These two developments alone are a breakthrough in AI “black box” research, and more news like this in the coming years could help to reduce the social and cultural bias problem. We appear to be on the right path.
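As a rough illustration of what “pointing to the evidence” can look like for a simple model (a generic sketch, not the researchers’ or IBM’s method): for a TF-IDF plus logistic regression text classifier, each token’s contribution to a decision is simply its feature value times its learned coefficient, so the strongest contributors can be shown alongside the prediction. Deeper models need dedicated attribution tools, but the principle of surfacing the evidence is the same.

```python
import numpy as np

def explain_prediction(pipeline, text, top_k=5):
    """List the tokens that contributed most to a fitted TF-IDF + logistic
    regression pipeline's decision on one document (binary classifier assumed)."""
    vectorizer, classifier = pipeline.named_steps.values()
    weights = vectorizer.transform([text]).toarray()[0] * classifier.coef_[0]
    vocab = np.array(vectorizer.get_feature_names_out())
    top = np.argsort(np.abs(weights))[::-1][:top_k]
    return [(vocab[i], float(weights[i])) for i in top if weights[i] != 0]

# Hypothetical usage with a pipeline like the earlier sketch:
# for token, weight in explain_prediction(model, "students protest tuition fees"):
#     print(f"{token}: {weight:+.3f}")
```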

Anna Royzman, founder and president of Global Quality Leadership Institute:
As the complexity of embedded and interconnected technology rises, a bigger focus should be placed on the procedures that identify potential failures and prevent the disasters of technology-human interactions. It is no coincidence that machine learning progress is validated through test sets, to discover how well the machine’s learned behaviour meets expectations. The development of these test sets, like any other testing practice, needs to be placed in the hands of professionals who are aware of biases and whose focus is on discovering potential risks, not on testing for positives. Mitigating the risk of potential “human-unfriendly” behaviour calls for skilful design of experiments (tests) aimed at identifying and discovering such dangers; the software-testing and quality professionals whose expertise lies in exactly this area should be fully involved in all aspects of future technology development.
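One concrete example of such a risk-oriented test is a counterfactual (metamorphic) check: swap identity terms in an input and flag any case where the model’s decision flips. This is a sketch of a common technique rather than anything prescribed in the article; the term pairs and the model.predict interface are assumptions.

```python
SWAPS = [("he", "she"), ("his", "her"), ("mr", "ms")]

def swap_identity_terms(text, swaps=SWAPS):
    """Replace each identity term with its counterpart, in both directions."""
    mapping = {a: b for a, b in swaps}
    mapping.update({b: a for a, b in swaps})
    return " ".join(mapping.get(token, token) for token in text.lower().split())

def identity_swap_failures(model, texts):
    """Return inputs whose prediction changes when only identity terms change."""
    failures = []
    for text in texts:
        original = model.predict([text])[0]
        swapped = model.predict([swap_identity_terms(text)])[0]
        if original != swapped:
            failures.append((text, original, swapped))
    return failures
```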

Education and training in developing a critical mindset, identifying biases, and designing test techniques for AI should be promoted and made widely accessible. The more people think about the risks and are trained to identify them, the better chance we have of adopting quality criteria that make the human experience with technology rewarding rather than detrimental or dangerous. That quality movement calls for global support from technology leadership and the changemakers of the future.

Adam Leon Smith, CTO of Piccadilly Group and a researcher on algorithmic fairness:
While there are many ethical concerns with contemporary emerging technologies, there are also opportunities. In contrast to the CV-screening examples of bias, one start-up has found success by developing augmented writing tools that give hiring managers insight into how their job adverts will be interpreted by candidates of different identities and backgrounds. This shows that it is possible to build software that is culturally aware. If we approach AI development with our eyes open about our differences, maybe technology can help us understand them better.

In order to tackle these implications, we need to address the biases and implicit harms of machine learning decision-making on our community. Together we can help manifest healthy data practices for inclusive AI identities by asking questions, reviewing our datasets for human cultural and emotional interaction, and applying ethical standards.

Raj Subramanian, developer evangelist at Testim:
As AI-based systems become more prevalent, testing them is going to be crucial to prevent catastrophic problems later. This starts with the data we use to train the AI. A lot of focus needs to be given to that data, as it will have a huge impact on the way AI-based systems learn and make decisions. This should be supported by constant monitoring of the AI’s learning progress. Combined, the two are going to be the key to the success of AI-based systems.
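A small sketch of what “focus on the data, then keep monitoring” can look like in practice (the column name, threshold, and pandas-based setup are assumptions, not any specific product’s tooling): compare how each category of a sensitive or domain attribute is represented in a new training batch against a reference snapshot, and raise an alert when a share drifts.

```python
import pandas as pd

def representation_drift(reference: pd.Series, current: pd.Series) -> pd.Series:
    """Absolute change in the share of each reference category between two snapshots."""
    ref_share = reference.value_counts(normalize=True)
    cur_share = current.value_counts(normalize=True)
    return (cur_share.reindex(ref_share.index, fill_value=0) - ref_share).abs()

def check_training_batch(reference_df, batch_df, column="group", threshold=0.05):
    """Print an alert for any category whose share moved more than the threshold."""
    drift = representation_drift(reference_df[column], batch_df[column])
    for category, delta in drift.items():
        if delta > threshold:
            print(f"Drift alert: share of '{category}' changed by {delta:.1%}")
    return drift
```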

Boyang “Albert” Li, Senior Research Scientist at Baidu Research:
One way to mitigate undesirable behaviour is to gain a better understanding of the machine learning systems, or the so-called “interpretability” of machine learning. This research area has received a lot of attention recently and Baidu takes a keen interest in it.

The problem with “devising a dataset that is free of social bias” is that we can never be sure we have succeeded, just as we can never be sure that we have eliminated all bugs in a software system beyond a certain level of complexity. For some critical missions, such as the space shuttle, people spend a great deal of money and energy on this, but doing so is extremely expensive, so it is limited to a small number of scenarios. Even after such heroic efforts, bugs persist.

In this sense, the most pressing concern with regard to social and cultural biases in the context of machine learning and artificial intelligence is our inability to understand the behaviour of machine learning systems. The most pressing concern with regard to public opinion is that the American people do not yet understand that unintended ML behaviours are bugs rather than evil AI.

 

 
