
Stanford research warns developers against relying on AI assistants

The findings come as an increasing number of developers turn to AI pair programmers for their productivity benefits

22 December 2022

Developers who use AI pair programming assistants like GitHub Copilot are more likely to introduce security vulnerabilities for the majority of programming tasks, according to new research from Stanford University.

Not only were participants more likely to introduce security vulnerabilities if they had access to an AI assistant, the researchers found they were also more likely to rate their insecure answers as secure compared with those who didn’t use the tool.

Participants who spent more time honing their queries to the AI assistant, including adjusting its parameters, eventually produced more secure solutions. Those who trusted the AI less and engaged more with the language and format of their prompts were more likely to provide secure code, the researchers concluded.

 


“Overall, our results suggest that while AI code assistants may significantly lower the barrier of entry for non-programmers and increase developer productivity, they may provide inexperienced users a false sense of security,” they said.

“By releasing user data, we hope to inform future designers and model builders to not only consider the types of vulnerabilities present in the outputs of models such as OpenAI’s Codex, but also the variety of ways users may choose to interact with an AI code assistant.”

The study’s participants displayed a decreased willingness to alter the programming assistant’s outputs, or adjust its parameters, when it supplied insecure code.

The researchers also raised concerns over developer proactiveness. They observed that those who used AI assistants were less likely to carefully search the language’s documentation for safe implementation details, for example, which was “concerning given that several of the security vulnerabilities we saw involved improper library selection or usage”.
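Improper library selection can be as simple as reaching for a general-purpose module where a security-focused one exists. The short Python sketch below is a hypothetical illustration of that failure mode, not code from the study: it contrasts generating a token with the standard library’s random module against using the secrets module, which is designed for security-sensitive values.

    # Hypothetical illustration of improper library selection -- not code from the study.
    # Python standard library only.
    import random
    import secrets

    # Insecure: random is a general-purpose pseudo-random generator and is not
    # suitable for security-sensitive values such as session or reset tokens.
    weak_token = "".join(random.choices("0123456789abcdef", k=32))

    # Safer: secrets is designed for generating tokens, keys, and passwords.
    strong_token = secrets.token_hex(16)

    print(weak_token)
    print(strong_token)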

Participants were set six tasks each, spanning programming languages that included Python, JavaScript, and C. Results from tasks relating to encryption were of particular concern to the researchers since, in one task, only 67% of those who used the AI assistant produced correct, secure code compared with 79% of those who didn’t.
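The article does not reproduce the participants’ submissions, but a minimal sketch of the kind of symmetric encryption task described, assuming Python and the third-party cryptography package, might look like the following. Typical mistakes in this area include hardcoded keys, reused nonces, and unauthenticated modes such as AES-ECB.

    # A minimal, hypothetical sketch of a symmetric encryption/decryption task --
    # not taken from the study. Requires the third-party "cryptography" package
    # (pip install cryptography).
    from cryptography.fernet import Fernet

    # Generate a fresh random key rather than hardcoding one, a mistake an
    # unreviewed AI suggestion can easily introduce.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Fernet provides authenticated encryption (AES-CBC plus an HMAC),
    # avoiding unauthenticated modes such as ECB.
    token = fernet.encrypt(b"example plaintext")
    print(fernet.decrypt(token))  # b'example plaintext'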

Only university students took part in the study, the researchers noted, which means the conclusions drawn may not apply directly to those with years of professional experience, since developers working in the industry may have more security experience.

Regardless, the results highlight the need for caution in relying too heavily on such AI tools, especially when working on high-value projects, despite their warm reception from the developer community.

GitHub has previously claimed that its own AI pair programmer, GitHub Copilot, improves developer productivity, citing its own survey in which 88% of developers said they were more productive when using the tool.

The coding platform also claimed that Copilot improves developer happiness since it allows developers to stay in a development flow for longer and to solve more complex problems. Competing tools such as Facebook’s InCoder and OpenAI’s Codex, the latter of which was used in the Stanford study, both receive significant support from the developers who use them.

However, the current implementation of AI pair programmers was called into question after GitHub was hit with a class action lawsuit in November 2022, which claimed that Copilot commits software piracy because it is trained on publicly available repositories hosted on GitHub’s platform. The lawsuit alleged that creators who posted code or other work under various open source licences on the platform have had their legal rights violated.

Future Publishing
