Basics for Your Gen AI Usage Policy

Six main items in an AI policy for software development


The main items in a company policy on gen AI usage for software development are the following: identifying your business' generative AI use cases; establishing which gen AI tools developers are allowed to use; establishing the security testing solution to secure AI-generated code; describing the data that must not be entered into AI tools; requiring suppliers to notify and describe their gen AI usage; and describing the AI policy communication and training strategies. As we mentioned in our post about best practices for secure development with AI, firms can base their generative AI policies on their already existing policies.

"Cheat sheet Fluid Attacks - Gen AI policy for secure software development"

Cheat sheet: Six main items in a policy for software development with gen AI

Identify your business' generative AI use cases

It's important to know how gen AI would be used in your business. The use cases you identify are the guiding light for establishing relevant security requirements. The following are some examples of use cases:

  • Using ChatGPT to find errors in source code and solutions to them

  • Using Copilot to quickly write software functions

  • Using application security tools to automatically fix code where vulnerabilities were detected

The policy's scope would then be defined by the use cases it covers. Further, mentioning the security risks involved in those use cases helps clearly establish the policy's purpose. Yet another important piece of information to convey regarding use cases is the consequences of breaching the policy.

Establish the gen AI tools devs are allowed to use

Developers in your company are probably already familiar with AI tools like ChatGPT and Copilot. The policy should mention these aiding technologies, stating whether their use is allowed and any limitations to it. It may also require devs to disclose whether they have used gen AI tools in their tasks and to what extent.

Moreover, a process should be established for devs to request the use of an approved AI tool for a purpose other than those specified in the policy. Further, the policy should make clear how devs can request the use of a tool that is not approved.

Establish the security testing solution to secure AI-generated code

AI-generated code may contain security vulnerabilities, some of which may be due to the tool's blindness to the company's business logic. It is important to state in your AI policy that devs must always review the tool's output. Vulnerability scanners help them in this process by identifying the low-hanging fruit. After careful research, the policy should state which application security testing tools are allowed, acknowledging that these tools' reports can have significant false positive and false negative rates.
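To make the need for review concrete, here is a minimal sketch, using a hypothetical user-lookup function, of the kind of flaw an AI assistant can plausibly introduce and a reviewer should catch:

```python
import sqlite3

# Hypothetical AI-suggested code: builds the SQL query by string
# interpolation, which is vulnerable to SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

# Reviewed version: a parameterized query lets the driver handle
# escaping, closing the injection vector.
def find_user(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```

A SAST tool would typically flag the first version, but only a reviewer who knows the business logic can judge whether the query itself is appropriate.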

To find more business-critical vulnerabilities, manual source code review by security analysts (ethical hackers) is advised. The policy may then instruct that this manual testing be done alongside the automated testing. We describe elsewhere what you should look for in a vulnerability management solution. Here's a simplified ten-item checklist to give you an idea:

  • It allows the identification and inventory of your company's digital assets.

  • It yields accurate reports quickly.

  • It uses multiple techniques (e.g., SAST, DAST and pentesting), and these are constantly enhanced.

  • It offers a single platform.

  • It assesses compliance with multiple security standards.

  • It checks for vulnerabilities continuously.

  • It allows for risk-based vulnerability prioritization.

  • It offers continued help understanding and remediating vulnerabilities.

  • It validates the successful remediation of vulnerabilities, even offering automated mechanisms to stop risky deployments (see the sketch after this list).

  • It allows for report customization and tracking and benchmarking the progress in vulnerability remediation.
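The last two items, remediation validation and blocked risky deployments, lend themselves to automation in the CI pipeline. Below is a minimal sketch of such a gate, assuming a hypothetical scanner that writes a JSON report; the file name, field names, and threshold are illustrative, not any specific vendor's format:

```python
import json
import sys

# Severity threshold above which open findings block deployment;
# a policy would set this according to the company's risk appetite.
SEVERITY_THRESHOLD = 7.0  # CVSS score

def main(report_path: str) -> int:
    with open(report_path) as handle:
        findings = json.load(handle)  # hypothetical report format
    blocking = [
        f for f in findings
        if f["status"] == "open" and f["cvss"] >= SEVERITY_THRESHOLD
    ]
    for finding in blocking:
        print(f"BLOCKING: {finding['title']} (CVSS {finding['cvss']})")
    # A nonzero exit code fails the CI job and stops the deployment.
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "report.json"))
```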

Another key piece of information that should be present in your policy is whether devs are allowed to use gen AI tools to get suggested code fixes and, if so, which tools are allowed. Mind you, the policy must clarify that even these outputs have to be checked again for vulnerabilities. What's the point, then? Well, gen AI helps accelerate the remediation process.

If you are still looking for the right solution, we invite you to start a free trial of our tool and enjoy many of the features listed above, including our gen AI-powered Autofix feature. We offer all the features in the list through our flagship paid plan.

Describe the data that must not be entered into AI tools

Your company should have identified what kind of information it handles and where its sensitive data are located. If your company embraces gen AI, it is best to define, before its use, which information must not be entered into prompts to AI tools. This decision should follow a thorough understanding of the tools' measures for protecting data.
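One control the policy can mandate, sketched below under the assumption of a simple pattern list, is to screen prompts for obviously sensitive data before they ever reach an external AI tool; the patterns and function names are illustrative, not exhaustive:

```python
import re

# Illustrative patterns for data the policy forbids in prompts;
# a real deployment would maintain a broader, reviewed list.
FORBIDDEN_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of forbidden data types found in a prompt."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Fix this: key = 'AKIA" + "A" * 16 + "'")
if violations:
    print("Prompt blocked; it contains:", ", ".join(violations))
```

Such a filter is only a safety net; training on responsible use (covered below) remains the primary control.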

Your company would need to identify how information confidentiality may be jeopardized in use cases like the ones above. This is because the information entered into some gen AI tools may be used to further train them and may even be exposed in data breaches. Some heavily reported incidents involving ChatGPT are Samsung employees leaking the firm's own secrets three times, attackers accessing other users' chat history after exploiting a flaw in an open-source library, and the theft of over 100,000 user accounts.

Accordingly, the policy should clearly state an obligation, and a guideline, for reporting AI-related incidents (e.g., data breaches, intellectual property infringement). This should not be entirely new to your company's policies, except for the consideration of gen AI use.


Require suppliers to notify and describe their gen AI usage

Your company must take preventive measures against supply chain attacks. We have mentioned some of the most important aspects to pay attention to when following the software supply chain security approach. One of them is verifying supplier policies and procedures to see whether they align with best practices and standards.

Your company's gen AI policy should state that suppliers must detail their usage of third-party gen AI as thoroughly as possible. Key information includes the kind of data they input and how they ensure it is not used in the tool's training. Regarding standards, the policy may require supplier knowledge about and adherence to ISO/IEC 38507:2022. Further, your company may be a supplier too; in that case, its policy should require detailed disclosure of its own gen AI usage to the projects that use your company's software in their development.

Describe the AI policy communication and training strategies

Your company must make its gen AI policy accessible to all employees and communicate any major changes to them. Regarding the communication strategy, the policy may include the following:

  • The structure of the messages to be directed at the staff (e.g., a case with a problem, hero and moral)

  • The communication channels (e.g., email, social media) and tools (e.g., videos, infographics) to be used

  • The methods and times to monitor and evaluate communication impact, such as knowledge about the policy and attitudes toward it

  • The feedback channels through which information is gained to improve the policy and/or its communication strategy

Moreover, your organization needs to make sure its staff reads and acknowledges the gen AI policy and completes training on the responsible use of gen AI tools. Regarding the latter, the policy may include the following:

  • The training formats and materials

  • The staff in charge of imparting training

  • The methods and times to monitor and evaluate knowledge about using gen AI tools responsibly

  • The feedback channels through which information is gained to improve the policy training strategy

For a useful resource to include in your company's policy communication and training strategies, see our cheat sheet with five best practices for developing securely with gen AI. And if your company allows the use of our IDE (integrated development environment) extension, our documentation is a useful resource not only to learn how to use it but also to see how it handles data securely.
