The illusion of trust in AI-generated code

The adoption of GPT-4 and other generative AI (GenAI) models in the software development community has been swift. They offer astounding benefits, but their allure can distract developers from the reality that this technology is not infallible. If due diligence is neglected, code generated from innocent developer prompts can inadvertently introduce security vulnerabilities. For that reason, it’s crucial to highlight GenAI’s limitations as a coding tool, why it creates a false sense of trust, and the dangers that result when due diligence is not performed on AI-generated code.

The double-edged sword of coding with generative AI

Generative AI can significantly accelerate code development, offering developers unprecedented efficiency and capability. However, it also introduces serious security risks.

To understand how inadvertent security vulnerabilities may find their way into a developer’s code, we need to cover typical GenAI use cases in software development. For day-to-day tasks, developers query GenAI models to identify code libraries and receive open-source software (OSS) package recommendations to help solve coding challenges.

For such queries, whether for Java, Python, or JavaScript/TypeScript, a common thread emerges: GenAI query results are inconsistent. This inconsistency produces a false sense of security because, sooner or later, one of those varied results chosen by a developer will contain insecure code.

Further adding to this risk, recently published Stanford University research concluded that prolonged use of GenAI may gradually erode a developer’s drive to validate code thoroughly, leaving them unaware of how often recommendations contain embedded risks. This misplaced trust can lead to the integration of insecure code snippets, ultimately compromising the application’s overall security.

How generative AI can introduce code vulnerabilities

Warning signs of potentially insecure code in AI-generated recommendations come in several forms. The most common are:

Outdated OSS Packages: Due diligence on suspicious OSS packages recommended by GPT-4 often reveals that they are outdated, and those older package versions frequently carry known vulnerabilities. The static datasets used to train LLMs are often the culprit: the model recommends whatever was current when it was trained. A quick version check, sketched after this list, can catch many of these cases.

Unclear Package Validation Guidance: This can manifest in a few ways, such as no instruction to check for updates to dated packages, or guidance framing current-version packages as a “nice-to-have” rather than a necessity. Without explicit instructions to verify the latest version of a package, developers may relent over time and use the recommended packages without question.

Phantom Package Risks: Guidance from GPT-4 can lead developers to use transitive (indirect) OSS packages directly in code without declaring them in the manifest. These “phantom” scenarios occur when GPT-4 does not have the full context of the codebase. In practice, the vulnerable package is pulled in by the transitive dependency that introduced it rather than by a deliberate choice from a developer, so developers might be unaware of these hidden dependencies. They can introduce security vulnerabilities that are not easily detectable through conventional manifest-based dependency checks, which greatly complicates vulnerability management and remediation efforts. A rough sketch of both checks follows this list.
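
As a rough illustration of this due diligence, here is a minimal Python sketch that checks a suggested package version against the latest release on PyPI and flags imports that do not appear in the project manifest. The suggested package, the file names (requirements.txt, app.py), and the simple string parsing are illustrative assumptions rather than a hardened tool; dedicated dependency scanners handle these checks far more reliably.

import ast
import json
import urllib.request
from pathlib import Path

def latest_pypi_version(package: str) -> str:
    # Query the public PyPI JSON API for the most recent published version.
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]

# 1) Outdated-package check: compare a hypothetical GenAI suggestion against
#    what PyPI currently publishes. (Exact-match comparison is a rough proxy,
#    not a full semantic-version comparison.)
suggested = {"requests": "2.19.1"}  # illustrative suggestion, not real GPT-4 output
for name, version in suggested.items():
    latest = latest_pypi_version(name)
    if latest != version:
        print(f"{name}: suggested {version}, latest is {latest} -- review before adopting")

# 2) Phantom-dependency check: flag top-level imports in a source file that are
#    not declared in the manifest. Import names and PyPI distribution names can
#    differ, and stdlib modules will also be flagged, so treat hits as prompts
#    for review rather than verdicts.
declared = {
    line.split("==")[0].split(">=")[0].strip().lower()
    for line in Path("requirements.txt").read_text().splitlines()
    if line.strip() and not line.strip().startswith("#")
}
tree = ast.parse(Path("app.py").read_text())
imported = set()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        imported.update(alias.name.split(".")[0] for alias in node.names)
    elif isinstance(node, ast.ImportFrom) and node.module:
        imported.add(node.module.split(".")[0])
print("Imported but not declared in the manifest:",
      sorted(m for m in imported if m.lower() not in declared))

Running checks like these before accepting a recommendation turns “trust the suggestion” into “verify, then trust,” which is the habit the rest of this article argues for.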

Evolving secure coding practices in parallel with evolving development tools

As new programming tools reach developers, the number of ways insecure code can inadvertently be written grows. History has likewise shown that secure coding practices evolve to address each new shortcoming. The same will ultimately happen with GenAI coding tools. To that end, here are some foundational, technical, and managerial secure coding practices we can employ right now:

Build a mature DevSecOps program

A well-developed DevSecOps program creates a secure foundation on which developers can build their AI-assisted coding practice. The hallmark of a mature program is security checkpoints embedded throughout the entire software development lifecycle (SDLC), including threat modeling, static code scanning, and test automation. Combined with the fast feedback loops that characterize DevSecOps, these checkpoints let you safely absorb the increased risk that AI-generated code poses while your organization gets acquainted with the new development tools.
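
As one concrete example of such a checkpoint, the sketch below shows how a pipeline step might gate a merge on a dependency audit. It assumes the pip-audit tool is installed in the CI environment and that dependencies are declared in requirements.txt; the specific scanner and file layout are illustrative, and a mature pipeline would pair this with static analysis and automated tests.

import subprocess
import sys

# Run a dependency audit against the declared manifest.
result = subprocess.run(
    ["pip-audit", "-r", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    # pip-audit exits non-zero when known vulnerabilities are found,
    # so the pipeline fails before the change can be merged.
    print(result.stderr, file=sys.stderr)
    sys.exit("Dependency audit failed -- fix or justify before merging.")

Because the gate runs on every change, AI-generated dependencies get the same scrutiny as human-written ones without slowing the feedback loop.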

Awareness and training

Before GenAI is widely adopted, development and security teams must be educated to spot potentially insecure code recommendations and the common code-writing pitfalls GenAI can introduce. This training helps both teams understand how GenAI results are sourced and produced, and where the technology’s limitations lie.

Enforce a code of conduct for GenAI toolsets

Establishing secure coding practices tailored to AI-assisted programming should be treated as a given, but establishing company-approved GenAI toolsets is discussed less often. These measures avoid the risks of unvetted tools and make it easier to investigate and diagnose security vulnerabilities before code is pushed into production. Likewise, defining use cases for specific GenAI tools will help developers operate within their limitations. For instance, GenAI is well suited to automating repetitive, manual tasks such as auto-filling code functions; when more complex logic and code dependencies come into play, developers should rely on it less.

Future outlook: Navigating the AI-driven development landscape

Integrating generative AI into software development is inevitable, bringing both opportunities and challenges. As GenAI continues to revolutionize coding, developers will rely on these tools more and more. A shift of this scale will require a parallel evolution in security practices tailored to the new challenges it introduces, and third-party research already highlights the critical role of vigilance and proactive security measures against the unintended risks of AI-generated code. Still, we should not be afraid to adopt the best tools and strategies to harness its potential. We have already lived through technological revolutions like cloud computing, where application security had to catch up. Here, we have the opportunity to prepare ourselves and stay a step ahead of the expected security challenges while capitalizing on the immense benefits AI can bring to the world of coding.


