Ethics, technology and wisdom
2 November 2018
The job of the white hats, wherever they are, is getting harder.
The task of keeping organisations, and populations, safe from cyber attack has never been harder and will never be less important.
The pace of change and the volume of attacks, multiplied by growing sophistication, have led many groups to change the way they approach the problem.
However, not all of those approaches are worthwhile.
“We will always need humans, we will always need wisdom to make decisions with compassion,” Paul C Dwyer
Renowned security guru, Paul C Dwyer, speaking at the recent EU Cyber Summit, warned that some organisations have gone down the route of training young practitioners in the dark arts of cybersecurity, without necessarily equipping them with the moral frameworks to allow them to understand the impact of their actions.
Young people are being trained as cybersecurity operatives, he warned, but are not being taught ethics or morality.
They are being taught the ‘how to’ but not the ‘should we’, and we are going to pay for that in the future, said Dwyer.
Citing the likes of the Israeli Defence Force’s Unit 8200, among others, he said that we are storing up problems for ourselves by creating a generation of highly skilled and experienced practitioners who do not necessarily have an ethical code instilled to prevent them from going rogue afterwards.
Presenting a simple formula, he said that a security operative should be defined by morality to the power of ethics, over wisdom.
Wisdom, he said, is key here: while we are developing artificial intelligence (AI) at a furious pace, there is as yet no such thing as artificial wisdom.
“Intelligence does not equal wisdom,” said Dwyer, and therefore, we will always need people grounded in ethics to make decisions.
“We will always need humans, we will always need wisdom to make decisions with compassion.”
In the wider context of AI, this is an interesting standpoint, especially in light of the presentation from Davar Ardalan, at the Quest For Quality conference in Dublin recently.
Ardalan spoke about how artificial intelligence currently reflects a fairly narrow viewpoint, one that excludes the views of tens of millions of people. She argued that AI needs to be culturally informed to give it some context for decision making.
She envisages a wealth of cultural information being made available and readable to inform AI systems and provide a cultural context to develop what she terms “deeply inclusive” AI.
Like two sides of the same coin, both Dwyer and Ardalan have hit on the need for deep and informed custodial care of AI and cybersecurity. We need to understand the far-reaching impact of these technologies on people’s lives, and ensure that they are developed and guided by the ethically aware and culturally informed to prevent bias, exclusion and potential discrimination.
We need to ensure that the generations of developers, architects and security practitioners to come are aware of the reach and impact of their work, and that they have the tools to develop the wisdom to wield their powers ethically.