Safeguarding Data: Best Practices to Leverage AI for Business

Brian Vickery

2/26/2024 · 2 min read

Business interest in Artificial Intelligence (AI) is on the rise, fueled by the proliferation of new solutions being released into the marketplace. With the growing adoption of AI technologies, it's imperative to acknowledge the potential risks of mishandling sensitive data. Individuals and organizations need to understand how these technologies function in order to leverage them effectively while also ensuring the security of intellectual property.

"In a recent Gartner. Inc poll (Oct 2023) of more than 1,400 executive leaders, 45% reported that they are in piloting mode with generative AI, and another 10% have put generative AI solutions into production. This is a significant increase from March and April 2023, in which only 15% of respondents were piloting generative AI and 4% were in production." ["]

This increase in usage elevates the risk of mishandling sensitive data, including personal information and intellectual property. Generative AI systems like OpenAI's ChatGPT, Google's Gemini (formerly Bard), and Amazon's Q use large datasets to produce responses and typically track conversation history. It is hard to know what might happen to the data you input into some of these new tools, so it's best to use caution.

Despite potential data security concerns, AI holds significant promise in enhancing efficiency for users and organizations, provided that appropriate precautions are taken. Whether you're an individual user or part of an organization, there are specific steps you can take to ensure responsible AI usage with adequate data protection.

For individual users:

  • Check whether your company has a policy or set of rules in place for using AI in your job function.

  • Review the terms of any AI tool you are considering and make sure you fully understand them, especially how the tool handles and uses your data inputs.

  • Turn off chat history and training when possible. This helps ensure that your inputs aren't used to further train the AI model.

  • Review any input data to remove sensitive information (personal, financial, or intellectual property); see the sketch after this list.

  • Review output data to ensure it is correct, and add back any removed details as needed before using the results outside the AI system.
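
As a minimal sketch of that review step, the Python snippet below uses a few regular expressions to redact common patterns (email addresses, phone numbers, and SSN-like strings) from a prompt before it is submitted. The patterns and the scrub helper are illustrative assumptions only; real deployments need far broader coverage or a dedicated data protection tool.

    import re

    # Illustrative patterns only -- real PII filtering needs much broader coverage.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub(text: str) -> str:
        """Replace each match with a [REDACTED:<label>] placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED:{label}]", text)
        return text

    prompt = "Draft a reply to jane.doe@example.com, phone 555-123-4567."
    print(scrub(prompt))
    # Draft a reply to [REDACTED:EMAIL], phone [REDACTED:PHONE].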

For organizations:

  • Create a policy and train your staff (rules for use, tools allowed, etc.).

  • Secure intellectual property with encryption and controlled access.

  • Consider implementing a data protection solution that supports services like ChatGPT (e.g., Nightfall or Forcepoint). These tools help prevent users from submitting sensitive information to generative AI systems.

  • Curate a secure set of tools for use.

    • Microsoft Azure OpenAI is set up to keep inputs from being shared outside of your company, allowing you to build a better tool (although at a higher cost); see the first sketch after this list.

    • Build your own internally hosted LLM using open-source pretrained models; a minimal example also follows below.
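
As a rough sketch of the Azure OpenAI option above, assuming the official openai Python package (v1 or later): requests go to a deployment inside your own Azure subscription rather than the public ChatGPT service. The endpoint URL, environment variable, and deployment name below are placeholders to replace with your own.

    import os
    from openai import AzureOpenAI  # official 'openai' package, v1+

    # Placeholder endpoint and deployment name -- substitute your own Azure resources.
    client = AzureOpenAI(
        azure_endpoint="https://your-resource.openai.azure.com",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="your-gpt-deployment",  # the model deployment created in Azure
        messages=[{"role": "user", "content": "Summarize our Q3 planning notes."}],
    )
    print(response.choices[0].message.content)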
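
And a minimal sketch of the self-hosted route, assuming the Hugging Face transformers library is installed: model weights are downloaded once and cached, and inference runs entirely on your own hardware, so prompts never leave your infrastructure. The model named here is just one small open option.

    from transformers import pipeline

    # Any open pretrained model works here; this small chat model is just an
    # example choice. Weights are cached locally, so prompts stay in-house.
    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    )

    result = generator(
        "List three data-handling rules for staff using AI tools.",
        max_new_tokens=120,
    )
    print(result[0]["generated_text"])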

With the increasing utility of AI tools and the public interest they generate, it is critical that organizations and teams proactively plan how these tools can and can't be used. Without proper guidance, your staff and team members may use AI tools in ways that could jeopardize your data. Make a plan, be aware of the potential consequences of your decisions, and you can realize the benefits of these technological advances with confidence that they will serve your organization well.