Should software developers have a code of ethics?

Person at a laptop (Image: Stockfresh)

3 April 2018

First, do no harm. This is the underlying message of the Hippocratic Oath, historically taken by physicians to show they will abide by an ethical code of conduct. Plumbers, construction workers, law enforcement — almost any professional whose work impacts the public must abide by some sort of ethical code of conduct.

There is one fairly notable exception: technology. While there are organisation- and company-specific codes of conduct, such as these guidelines from the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers – Computer Society (IEEE-CS) joint task force on software engineering ethics and professional practices, there is no single all-encompassing set of standards that covers the entire industry. Perhaps that is because, as Yonatan Zunger writes in the Boston Globe, “… [T]he field of computer science, unlike other sciences, has not yet faced serious negative consequences for the work its practitioners do.”

But, given the still-emerging details about Cambridge Analytica’s role in building software to help clients manipulate voters, that could be about to change. Computer science and software development have grappled with ethics problems in the past, but it seems these problems are not only happening more frequently, but are also increasing in scale and impact.

In 2015, independent tests revealed that Volkswagen engineers programmed cars to cheat emissions standards. In the wake of the 2016 US presidential election, Facebook, among others, is grappling with an epidemic of fake news, and now is inextricably linked to Cambridge Analytica’s weaponisation of personal user information. The US is struggling to come to grips with Russian hacking and interference in elections, and the role social media platforms such as Twitter and Facebook played.

Good versus evil?
These are just a few examples of how software can be used for nefarious purposes; there is no way to know definitively every possible outcome of the development and use of every piece of technology, every line of code. So it falls to those who design and build the products, software packages, apps and solutions we use daily to do the right thing. That is a lot of pressure.

It is also difficult to navigate what is right and wrong if you are pressured to meet deadlines, or your livelihood is on the line; a code of ethics can provide a context and framework for professionals to fall back on, says Dave West, product owner at training company Scrum.org. And while he would like to see such a code, he says it is understandable that such a diverse group of thinkers might not be able to agree on every aspect of what it would entail.

“I would love to see a standardised industry code of ethics; we do have our own that falls under our mission of improving the profession of crafting software. At the heart of it are our five major values of openness, courage, respect, focus and commitment. And we feel like that is a solid foundation for anyone to fall back on if they are feeling uncertain about any part of their job responsibilities, because they can step back and look at those values and say, ‘Am I doing the right thing, here, based on these things I believe in?’” said West.

The debate about ethics in software development has raged for as long as the profession has been around. It can be nearly impossible to assess all the potential applications of a technology, good and bad, and that’s both the beauty and the horror of the issue, says Shon Burton, founder and CEO at recruiting firm HiringSolved, which uses AI to help companies identify diverse talent.

Any tool can be a weapon
“Any tool can be a weapon depending on how you use it. There’s no way to know every single possible application of a technology. For us, using AI and automation — the stuff I can think about now that we’re close to it, I can see good and bad, and both are easily accessible. For our applications, we can help clients screen for diverse candidates. But we see, also, that it could be used to screen out people with different ethnicities, races, gender. We have a code of conduct internally that we all adhere to. But we understand the potential and the unintended consequences,” Burton says.
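Burton’s point about the same tool cutting both ways can be made concrete: whether an automated screen is including or excluding protected groups is something teams can measure. The sketch below is a minimal, hypothetical Python illustration of the widely used ‘four-fifths rule’ heuristic for flagging disparate impact in selection rates; the data, group labels and threshold are invented for the example and do not describe HiringSolved’s product.

from collections import defaultdict

# Hypothetical audit log of screening outcomes: (group label, passed screen).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Fraction of candidates passing the screen, per group."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in records:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    best group's rate, the common 'four-fifths rule' heuristic."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

rates = selection_rates(outcomes)
print(rates)                        # {'group_a': 0.75, 'group_b': 0.25}
print(adverse_impact_flags(rates))  # {'group_a': False, 'group_b': True}

A flag from a check like this is not a verdict; it is simply a prompt for the kind of human scrutiny of unintended consequences that Burton describes.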

“If safety came first, the Facebook Graph API used by Cambridge Analytica, which raised widespread alarm among engineers from the moment it first launched in 2010, would likely never have seen the light of day,” Zunger writes.

In the absence of an industry-wide set of ethical standards, individuals and even some corporate entities are making public stands behind their values. In December, the NeverAgain.tech movement circulated a pledge to resist “…build[ing] a database of people based on their Constitutionally-protected religious beliefs”. “We refuse to facilitate mass deportations of people the government believes to be undesirable,” the pledge continues. It has now gathered more than 2,500 signatures.

GrubHub CEO Matt Maloney took a lot of heat for his stand against hateful, demeaning and discriminatory actions and language. And Oracle executive George Polisner very publicly resigned his position in response to his former employer’s co-CEO accepting a role in the incoming presidential administration.

The line
It can be difficult to know where the line exists between right and wrong in this context, even if you are walking it. While one standardised code of ethics could be a solution, it may be more important to teach people how to ask the right questions, says Scrum.org’s West.

“Personally, I’d love to see more education on ethics than is presently available, especially in a professional context rather than just a course about theory, because ethics in isolation won’t work unless it’s part of broader professional standards. There’s also the issue, though, that often individuals can’t build software alone, and they’re also not making these ‘wrong’ decisions all at once, but incrementally,” West says.

Teaching people to ask the right questions involves understanding what those questions are, says Burton, and recognising that everyone’s values are different; some individuals have no problem working on software that runs nuclear reactors, or developing targeting systems for drones, smart bombs or military craft.

“The truth is, we’ve been here before, and we’re already making strides toward mitigating risks and unintended consequences. We know we have to be really careful about how we’re using some of these technologies. It’s not even a question of can we build it anymore, because we know the technology and capability is out there to build whatever we can think of. The questions should be around should it be built, what are the fail safes, and what can we do to make sure we’re having the least harmful impact we can?” he says.

Burton believes, despite the naysayers, that AI, machine learning and automation can actually help solve these ethical problems by freeing up humans to contemplate more fully the impacts of the technology they are building.

“Right now, there’s so much pressure to meet deadlines and market pressure to release products that it’s taking up developers’, CIOs’, CTOs’ and other IT leaders’ time,” Burton says. “If we can automate more of the processes and relieve some of the human effort, people can apply better critical thinking and hopefully head off some of these issues before they get critical,” he says.

Right answer
There is no one ‘right answer’ here, and a code of ethics certainly will not put all the ethical issues to rest. But it could be a good place to start if individuals and organisations want to harness the great power of technology to create solutions that serve the greater good.

 

 

IDG News Service
