The Case for an AI Policy
Why every company needs one, even if you don't plan to use AI
AI has existed for decades, primarily as the interest of niche scientists and a few future-minded thinkers. At the end of 2022, ChatGPT, a generative AI application built on a large language model (LLM), launched publicly, making AI accessible to all. Suddenly, everyone was thinking and talking about AI. Conferences rushed to announce AI-focused learning tracks, AI-related dictionary searches were up 62 percent, and ChatGPT reached 100 million users in just two months.
AI tech funding surged ahead of all other tech investments, up 27 percent to $17.9 billion in late 2023. Meanwhile, broader tech transactions, such as startup deals, slowed year over year.
AI promised to revolutionize our work and add as much as $4.4 trillion annually to the global economy.
In a year, AI went from a niche interest to a global obsession. It’s no wonder that many organizations were caught off guard. The year became a never-ending game of “catch-up” as companies evaluated how AI might fit into their operations and strategic growth plans.
Meanwhile, companies struggled to keep pace with both the risks and the benefits of AI at work. For example, one survey found that two-thirds of respondents used AI at work without their boss knowing.
Combine the widespread adoption of AI with some naïveté about its reach, power, and risks, and you have a recipe for disaster. An AI policy is a business imperative, even if your company doesn’t intend to adopt AI technology right away. Without a policy in place, employees lack clarity on their responsibilities for AI use at work. They may use publicly available AI tools that unintentionally cause organizational harm, leading to embarrassing data leaks like those at Samsung.
Additionally, a policy can drive more consistent and transparent AI practices among employees, vendors, and customers. Finally, a policy can serve as a guide for using AI as a productivity enhancement, not a replacement, which engenders trust and a collaborative spirit.
Organizations must be more upfront about how they’re using AI in the workplace if they want a competitive advantage and want to earn, and keep, the trust of their employees.
An AI policy helps mitigate several categories of risk:

Bias and inaccuracy
AI tools, including generative AI like ChatGPT or Bard, don’t know right from wrong. A human must validate the information AI tools produce. AI policies that require human oversight protect organizations against inaccurate, biased, or harmful AI use and can help ensure safe and ethical uses of AI outputs.

Compliance or legal risk
With regulatory and legal guidance lagging behind AI tool adoption and use cases, it’s wise to start understanding the organization’s use of AI tools and its potential risk exposure now. Clear AI protocols make it easier to track compliance with existing laws and adapt as new rules and regulations emerge.

Security or data breach
Confidential or other sensitive information can be mistakenly leaked outside the company when there is no clear guidance on appropriate AI tool inputs. A policy can safeguard against these risks by specifying acceptable AI tools and how they may be used at work.