AI pair programming tools promise to accelerate development, with benefits ranging from suggesting individual lines of code to building and deploying entire applications. But the pitfalls are significant.
According to a GitHub survey of 2,000 developers, those who use AI pair programming tools not only improve productivity by offloading some of the more mundane programming tasks, but also experience less frustration and can focus on more satisfying work. There are a number of these tools, including this year's releases GitHub Copilot, Amazon CodeWhisperer and Tabnine. They join a long list of existing AI-powered bots such as Kite Team Server, DeepMind's AlphaCode and IBM's Project CodeNet.
While AI pair programming shows promise for generating predictable, template-like code, such as reusable snippets built from conditional statements or loops, developers should question the quality and suitability of code proposals, said Ronald Schmelzer, managing partner at Cognilytica, which runs the CPMAI AI project management certification.
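As a hypothetical illustration (the function name and logic here are invented for this example, not taken from any specific tool), the predictable, template-like code these assistants tend to complete well looks like this: a routine loop with a conditional that follows a pattern seen countless times in training data.

```python
# Hypothetical example of template-like code an AI assistant
# could plausibly complete from a signature and a comment alone:
# a simple filtering loop built from a loop and a conditional.
def filter_positive(values):
    """Return only the positive numbers from a list."""
    result = []
    for v in values:
        if v > 0:
            result.append(v)
    return result
```

Code like this is easy for a model to propose precisely because it is boilerplate; it is the applicability, security and licensing of such suggestions that Schmelzer says developers must still question.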
“There are many issues of whether the code is applicable or not, security vulnerabilities and bugs, and countless copyright issues,” he said.
Pitfalls of AI pair programming
Despite the obvious benefits, many of which are detailed in the GitHub survey, developers should be wary of AI-suggested code completions, as their accuracy is not guaranteed, said Chris Riley, senior manager of developer relations at marketing technology company HubSpot. Developers must scrutinize every suggestion, which can negate the time saved by not scouring developer sites for code snippets, he said.
Another area of concern is supportability, Riley said. If a significant percentage of code is suggested by AI, developers may not be able to support that code if it’s the source of a production issue, he said.
In addition to applicability and supportability issues, code completion bots raise unique security concerns. While some code completion tools, such as Kite Team Server, can run behind an organization's firewall, others rely on public artifact repositories, which may be insecure, Riley said. For example, attackers could exploit the model to sneak in zero-day vulnerabilities, he said.
Community-provided code adds another potentially significant stumbling block: copyright issues. Because AI pair programming tools are trained on a wide range of code under different licensing agreements, it becomes difficult to determine ownership, said Cognilytica's Schmelzer. Additionally, if the code generator is trained on data from shared code repositories, notably GitHub, developers could mix copyrighted or private code with public code that has no identified source, he said.
The rise of AI pair programming
Many of the problems with modern AI pair programming tools were not present in early code completion products like Microsoft's IntelliSense, first introduced in 1996. These tools offered simple completions as developers typed within the compiler or IDE, without public-repo security vulnerabilities or supportability concerns. Developers could take this basic code completion a step further with linters, tools that catch simple syntax errors, to validate proposed code, Riley said.
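To make the distinction concrete, here is a minimal sketch of what a linter does, as opposed to generating code: it parses source text and flags a known problem pattern without ever running the program. This toy checker (the function and the rule it enforces are invented for illustration, not part of any real linter) flags bare `except:` clauses, a classic lint warning.

```python
import ast

# Minimal sketch of a linter check: parse source code and flag
# bare `except:` clauses (which silently swallow all errors),
# reporting the line number of each occurrence.
def find_bare_excepts(source):
    issues = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            issues.append(node.lineno)
    return issues

snippet = "try:\n    risky()\nexcept:\n    pass\n"
print(find_bare_excepts(snippet))  # flags the bare except on line 3
```

Unlike an AI pair programmer, a check like this is deterministic and auditable, which is why linters never raised the supportability questions that generated code does.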
“I don’t think the developers had any expectations beyond that at the time, and we were happy with the Google-style suggestions as you typed,” Riley said. “It was there to increase efficiency, not to be the original source of the code.”
Modern AI pair programmers go beyond simple code completion and linting to propose complete blocks of code, Riley said. The tools can provide contextual code completions or write complete functions; advanced text generators powered by OpenAI's GPT-3, such as Copilot, can help build and deploy entire applications, turning plain-English queries into SQL statements that work across databases.
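As a hypothetical illustration of that English-to-SQL capability (the prompt, schema and generated query below are invented for this example and do not come from any specific tool's output), a GPT-3-based generator might map a plain-English request to a query like this:

```python
# Invented example: an English prompt and the kind of SQL a
# GPT-3-based code generator might produce from it. The table
# and column names are assumptions for illustration only.
prompt = "Find the five customers with the highest total order value"

generated_sql = """
SELECT c.name, SUM(o.total) AS total_value
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name
ORDER BY total_value DESC
LIMIT 5;
"""
```

The appeal is obvious, but so is the risk Riley describes: a developer who cannot write this query unaided may also struggle to verify or support it in production.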
“Having long been skeptical about the authenticity of AI-driven code completion tools, I have to admit that it felt surreal the first time I tried [Copilot],” said Anthony Chavez, founder and CEO of Codelab303. “I sometimes felt like it could read my mind.”
But despite advances in technology, the issues surrounding modern AI code-completion tools mean they’re limited in their usefulness, Riley said.
“I don’t think we’ve gotten to the point where these tools can be used beyond rapid prototyping, education and suggestions,” he said.