The promise and pitfalls of GitHub Copilot

GitHub Copilot is an incredibly powerful tool that can boost developer productivity. However, it does come with some limitations and risks. In this post, Dale Marshall, Chief Architect at ANS, explores what GitHub Copilot is, what those limitations and risks are, and the steps you can take to mitigate them.

GitHub Copilot is an AI-powered coding assistant developed by GitHub and OpenAI. It suggests code snippets in real time based on the context of the code you're writing, or from natural language prompts. It can also help create descriptions, comments, and even pull requests from that same context.
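
To make that concrete, here is a hypothetical sketch (in Python, one of the languages it supports) of the kind of completion Copilot can produce from a natural-language comment. The prompt, function name, and body are illustrative, not an actual Copilot transcript:

```python
from collections import Counter

# Prompt written by the developer as a comment:
# "Return the n most common words in a text file, ignoring case."

def most_common_words(path: str, n: int) -> list[tuple[str, int]]:
    # A completion of this shape is what an assistant typically offers:
    # read the file, normalise case, split into words, count them
    with open(path, encoding="utf-8") as f:
        words = f.read().lower().split()
    return Counter(words).most_common(n)
```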

It is trained on all languages that appear in public repositories on GitHub, including PHP, JavaScript, C#, Go, Python, and TypeScript. It is available as an extension in VS Code, Visual Studio, Vim, NeoVim, JetBrains IDEs, and Azure Data Studio. Add this all together, and it means time savings and increased productivity for developers. GitHub's own research claims developers who used GitHub Copilot completed a benchmark coding task 55% faster than those who didn't.

In my experience, and from speaking to developers who actively use GitHub Copilot, it's great at autocompleting, creating boilerplate code, and generating small snippets of code. The ability to draft comments, pull requests, and even documentation is hugely helpful. Every developer I know who has used it confirms it's a genuine time saver they'd miss if they no longer had access to it, and that it more than pays for its monthly cost in time saved and efficiency gains.
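
As an illustration of that boilerplate strength, consider a simple data class with a serialisation helper. The class and its fields are assumptions made for the sake of the example, but repetitive code of this shape is exactly where autocomplete tends to shine:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

    def to_json(self) -> str:
        # Mechanical serialisation code like this is the kind of
        # boilerplate an assistant can fill in almost instantly
        return json.dumps(asdict(self))

# Usage: User(1, "Ada", "ada@example.com").to_json()
```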

So far, so good, right? However, there are risks and limitations to be aware of before you decide to use it in a production environment, as well as tools and resources GitHub provides to help mitigate those risks. 

If you think of GitHub Copilot as a digital pair programmer, you should view it as the more junior partner in that relationship. Sometimes the code it generates is exactly what you need; in other instances it gets you most of the way there but needs a few tweaks, and occasionally it is outright wrong. There are no guarantees that what it suggests will be correct, so everything it generates needs to be checked and verified by a human for accuracy and quality. It's therefore essential to educate users on what it can and cannot do, and on the need not to simply accept every suggestion it gives.
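
To see why that human review matters, here is a hypothetical example of a plausible-looking suggestion that is subtly wrong, next to the version a reviewer should actually accept:

```python
def is_leap_year_suggested(year: int) -> bool:
    # Looks reasonable and passes casual testing, but it is wrong:
    # it ignores the century rules (1900 was not a leap year)
    return year % 4 == 0

def is_leap_year_reviewed(year: int) -> bool:
    # The corrected logic a human reviewer should insist on
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```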

Copilot can also suggest insecure code. While GitHub does provide security features to help prevent this, such as a vulnerability prevention system that blocks insecure coding patterns in real time, there is no 100% guarantee that insecure code won't make its way in. In that regard, it is just like any other programmer, so it's critical that users are educated on this point too. Secure software development is as important as ever: secure coding standards, threat modelling, and code scanning should all be implemented if they haven't been already.
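
As a sketch of the kind of insecure pattern such tooling should catch, compare a string-interpolated SQL query with its parameterised equivalent. This is illustrative code, not actual Copilot output:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Insecure: interpolating user input into SQL invites injection,
    # e.g. name = "x' OR '1'='1"
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # Parameterised query: the pattern secure coding standards
    # and code scanning should enforce
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```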

A key resource to make use of before adopting GitHub Copilot is the GitHub Copilot Trust Centre. This site details some of the risks and how GitHub addresses them. Specifically, it covers security, privacy, and how intellectual property is respected.

You should make sure your compliance and legal teams are happy with how data is captured, transmitted, stored, and retained. If you decide to adopt, once again, it’s key to educate your users on all of these potential risks. 

In conclusion, GitHub Copilot is a powerful and innovative tool that can significantly enhance productivity. However, it is not a magic solution that can replace human judgment and expertise. We need to use it responsibly and carefully, and always verify the quality, security, privacy, and intellectual-property implications of the code we produce with it.

It’s crucial that you train users on its capabilities and limitations. Finally, if you’re considering adopting it, run it past your compliance and legal teams first and ensure they’re happy with how data is processed and secured.