Researchers have published an academic study examining the security concerns around GitHub Copilot, an AI system presently used for code completion in Visual Studio Code that could come to Visual Studio once the preview period ends.
Testing across numerous scenarios revealed that 40% of the programmes assessed contained security vulnerabilities.
The project sparked debate on multiple fronts, with ramifications for code quality, legal and ethical considerations, the possibility of replacing human developers, and the risk of introducing security flaws.
Based on a rigorous and extensive investigation, the study concluded that 40 percent of the 1,692 programmes generated across 89 different code-completion scenarios were insecure.
The study made no mention of added safeguards to prevent the spread of such security flaws, so perhaps more papers and studies are in the pipeline.
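The study's prompts and generated programmes are not reproduced here, but the class of weakness it measures is familiar. As a purely hypothetical sketch (the sqlite3 database, table schema, and function names below are assumptions, not material from the study), a completion that builds an SQL query by string interpolation is the sort of pattern that maps to a known weakness such as SQL injection, whereas a parameterised query avoids it:

```python
import sqlite3


def get_user_unsafe(db_path: str, username: str):
    # Vulnerable pattern (SQL injection): the query is built by string
    # interpolation, so input like "x' OR '1'='1" rewrites its meaning.
    conn = sqlite3.connect(db_path)
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def get_user_safe(db_path: str, username: str):
    # Safer pattern: a parameterised query lets the driver handle the
    # value, so attacker-controlled input cannot alter the SQL itself.
    conn = sqlite3.connect(db_path)
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Whether a code-completion tool proposes the first or the second form for a given prompt is essentially the kind of distinction the study's scenarios were designed to surface.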