Use Cases and Other Considerations
With guidelines established, probe deeper
With these initial AI policy guidelines established, it’s time to dig deeper into what else you may want to capture in your AI policy.
Ask your stakeholders to bring questions, potential use cases, and other ideas forward. While you’ve set up general directives, a policy is only helpful if it’s genuinely useful in practice. Should there be exceptions to the established guidelines? If so, in what instances? What should the approval process be? Are there use cases not currently contemplated in the policy that should be incorporated? Does your business or industry have specific regulatory or privacy considerations? Should you consider any use cases by department or team? As you gather input, revisit these key areas:
Usage. How might AI use change with evolving departmental or future business needs?
Data and privacy. What added safeguards should be considered?
Security. How does AI impact the company’s cybersecurity policies? What access controls should be implemented?
Accuracy and quality control. How should the company’s standards and quality checks change with the introduction of AI?
Explore how each business use case and domain impacts the organization’s AI use, data and privacy, security, accuracy, and quality control.
Glivvy is a software company. Its engineers plan to use AI in upcoming software development. Will the company include usage guidelines for the engineering team in the companywide policy or in a separate document? Does the testing protocol safeguard customer and company confidentiality? At which steps should accuracy be vetted, and how will reviews take place?
Reed-Roscoe, LLC is an investment firm. In the heavily regulated financial industry, it must carefully protect its clients’ data. AI-powered fraud detection software could be a game-changer, but to test it, what additional security risks must the firm consider? Has the Securities and Exchange Commission (SEC) issued AI compliance rules or guidance to review? What quality assurance safeguards will the firm put in place so the software doesn’t flag legitimate trades and inconvenience customers?
Neely Staffing is thrilled to try new AI-powered tools to vet candidates and help recruiters become more productive. But what practices will the agency use to ensure bias isn’t embedded in the tools or their underlying data? How will the agency protect applicants’ privacy and personal information? Neely is a hybrid workplace, so how will working from home affect security? When must humans verify candidate-screening output to ensure qualified candidates aren’t filtered out?