Leading the fight against black box algorithms


25 September 2018

IBM is leading the fight against black box algorithms with a new set of open source software to help developers understand how their artificial intelligence is making decisions.

Black box algorithms are the complex code at the heart of systems people increasingly rely on day to day: from everyday matters like the news you read and the products you buy, to which stocks a hedge fund will invest in or which clients an insurer will cover. They are increasingly complex in design and are often shaped by the biases of the people who build them, who themselves are sometimes unsure how the system reached its conclusion. Historically, there has also been little oversight or accountability regarding their design.

Now, with the Fairness 360 Kit, IBM is open sourcing software intended to help AI developers see inside their creations via a set of dashboards and dig into why those systems make the decisions they do.

As a service
The software runs as a service on the IBM Cloud, and an AI bias detection and mitigation toolkit will be released into the open source community by IBM Research. It promises real-time insight into algorithmic decision making, detecting signs of baked-in bias and even recommending new data parameters that could help mitigate any bias it has found.

Importantly, the insights are presented in dashboards and natural language, “showing which factors weighted the decision in one direction versus another, the confidence in the recommendation, and the factors behind that confidence,” the vendor explained in a press release.

“Also, the records of the model’s accuracy, performance and fairness, and the lineage of the AI systems, are easily traced and recalled for customer service, regulatory or compliance reasons — such as GDPR compliance.”

The software works with models built in a variety of popular machine learning frameworks, including Watson, TensorFlow, SparkML, AWS SageMaker and AzureML, to support broad and customisable use.

The vendor added: “While other open source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit created by IBM Research will help check for and mitigate bias in AI models. It invites the global open source community to work together to advance the science and make it easier to address bias in AI.”
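To give a flavour of what such a check might look like in practice, here is a minimal sketch using the toolkit's open source Python package, aif360, to measure bias in a training dataset and then reweigh it. The hiring data, column names and group definitions below are entirely hypothetical, and the snippet is an illustration rather than IBM's recommended workflow.

```python
# Minimal sketch: detecting and mitigating dataset bias with the aif360 package.
# The DataFrame, column names and group definitions are hypothetical examples.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical training data: 'hired' is the favourable outcome,
# 'gender' is the protected attribute (1 = privileged, 0 = unprivileged).
df = pd.DataFrame({
    "gender":           [1, 1, 0, 0, 1, 0, 1, 0],
    "years_experience": [5, 3, 4, 6, 2, 5, 7, 1],
    "hired":            [1, 1, 0, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Check for bias: a disparate impact well below 1.0 means the unprivileged
# group receives the favourable outcome far less often than the privileged one.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact:", metric.disparate_impact())
print("Mean difference: ", metric.mean_difference())

# Mitigate: reweigh the training examples so both groups are treated evenly,
# then confirm the gap has closed.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
after = BinaryLabelDatasetMetric(
    reweighed, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Mean difference after reweighing:", after.mean_difference())
```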

Algorithmic bias
A great deal of the credit for raising awareness of algorithmic bias belongs to Joy Buolamwini, founder of the Algorithmic Justice League and a computer scientist at the MIT Media Lab, whose Gender Shades project has helped uncover racial bias in facial recognition systems.

Popular books like Cathy O’Neil’s “Weapons of Math Destruction” and Frank Pasquale’s “The Black Box Society: The Secret Algorithms That Control Money and Information” have also helped raise awareness of this issue, and it seems like the tech industry is starting to do something about it.

An infamous example of AI bias came to light when investigative journalists at ProPublica reported that COMPAS, an algorithm widely used in the US judicial system to predict the likelihood of reoffending, was racially biased against black defendants.

In the UK, police in Durham have been criticised by civil liberties groups for their use of similar algorithms to predict whether suspects are at risk of committing further crimes.

“Programs need to be thoroughly tested and deployed with rigorous oversight to prevent the existence of prejudice – and AI must never be the sole basis for a decision which affects someone’s human rights,” writes Liberty advocacy and policy officer Hannah Couchman.

Research published by Accenture, titled “Critical Mass: managing AI’s unstoppable progress”, found that 70% of organisations adopting AI conduct ethics training for their developers.

“Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” said Rumman Chowdhury, responsible AI lead at Accenture Applied Intelligence. “These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’.”

Ray Eitel-Porter, head of Accenture Applied Intelligence UK, added: “Businesses need to think about how they can turn theory into practice. They can do this through usage and technical guidelines enshrined in a robust governance process that ensures AI is transparent, explainable, and accountable.”

Wider activity
IBM is not alone in this endeavour, however: Google recently launched its What-If Tool to help developers look inside the historical performance of their machine learning models.

Fellow tech giants Microsoft and Facebook have both announced intentions to release tools to help detect and avoid bias in algorithms.

This follows the news that German vendor SAP has set up an external AI ethics advisory panel. The Accenture research found that 63% of respondents have ethics committees in place.

The esteemed panel includes Dr Peter Dabrock, chair of systematic theology (Ethics) at the University of Erlangen-Nuernberg; Dr Helen Nissenbaum, professor of information science at Cornell Tech; and Dr Susan Liautaud, lecturer in public policy and law at Stanford and managing director at Susan Liautaud & Associates Limited (SLAL).

The panel will work in collaboration with the existing AI steering committee at SAP, which consists of executives from development, strategy and human resources departments.

“SAP considers the ethical use of data a core value,” said Luka Mucic, SAP CFO. “We want to create software that enables the intelligent enterprise and actually improves people’s lives. Such principles will serve as the basis to make AI a technology that augments human talent.”

“AI offers immense opportunities, but it also raises unprecedented and often unpredictable ethics challenges for society and humanity,” said Susan Liautaud. “The AI ethics advisory panel allows us to ensure an ethical AI, which serves humanity and benefits society.”

The benefits of transparent and ethical AI are not just a moral issue, though; they should also go some way towards building public trust in algorithms in the future.

“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” Yinyin Liu, head of data science for Intel AI Products Group, said as part of the Accenture report. “For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”

Black box algorithms
These tools could also prove useful in breaking open the black box of algorithm design.

In an extract from his book “The Death of Gods”, published in the Times Literary Supplement, Carl Miller writes: “Algorithms have changed, from Really Simple to Ridiculously Complicated. They are capable of accomplishing tasks and tackling problems that they’ve never been able to do before. They are able, really, to handle an unfathomably complex world better than a human can. But exactly because they can, the way they work has become unfathomable too.”

In the book, an unnamed researcher from a large tech corporation tells Miller: “It’s power without responsibility. There’s so much power, and so little responsibility. This is not notional abstract power. This is real power about day-to-day lives. It’s both material and cultural and financial. The world has to know that this is how it works.”

Algorithms today
The problem with algorithms today, the researcher explained to Miller, is that the exponential growth in data and computing power means we can now throw huge amounts of data at an algorithm to train it. The downside is that we, as humans, are less able to determine after the fact which inputs actually helped the algorithm reach its conclusion: it is a black box.

“The reality is, professionally, I only look under the hood when it goes wrong. And it can be physically impossible to understand what has actually happened,” the researcher concluded, ominously. “The reality is that if the algorithm looks like it’s doing the job that it’s supposed to do, and people aren’t complaining, then there isn’t much incentive to really comb through all those instructions and those layers of abstracted code to work out what is happening.”
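One common way of probing this after the fact, independent of any vendor's toolkit, is to perturb each input in turn and measure how much the model's performance suffers: the features whose shuffling hurts most are the ones the model actually leans on. The sketch below uses scikit-learn's permutation importance on purely synthetic data as an illustration.

```python
# Minimal sketch: surfacing which inputs a trained model actually relies on,
# using permutation importance. The data and model here are purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large training set with only a few informative inputs.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the test score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```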

Another researcher, Jure Leskovec, chief scientist at Pinterest and a machine learning professor at Stanford University, told Miller: “We need to step up and come up with the means to evaluate – vet – algorithms in unbiased ways.

“We need to be able to interpret and explain their decisions. We don’t want an optimal algorithm. We want one simple enough that an expert can look at it and say nothing crazy is happening here. I think we need to get serious about how do we get these things ready for societal deployment, for high-stakes decision environments? How do we debug these things to ensure some level of quality?”

It would seem that the big tech vendors are at least increasingly aware of the inherent risks their algorithms pose to society without proper oversight, and are taking measures to break them out of their black boxes. The real question is: will they like what they see?

IDG News Service
