Understand, Manage AI Risks for Public Entities
Public entities are using artificial intelligence (AI) in a variety of ways, including to enhance public access to information and services, and to make internal processes more efficient by automating routine tasks. However, a public entity needs to be aware of and take steps to mitigate the risks of using AI in its operations.
AI generally refers to computer systems that perform tasks usually requiring human intelligence, such as learning, comprehension, problem solving and decision making.
Below are only a few of the risks public entities face when using AI. Members are encouraged to continually monitor AI developments to identify and mitigate new and emerging risks to their operations.
Data Privacy and Security
AI is trained on large amounts of data. AI tools may retain data entered into their systems, learn from it and reuse it.
In Minnesota, government entities are required to comply with the Minnesota Government Data Practices Act (MGDPA), Minnesota Statutes, Chapter 13. The MGDPA classifies government data by type and governs access to the data. Government data categorized as “not public data” is not accessible to the public. This includes data that is classified as private, confidential, protected or otherwise not public.
Public entities must follow all applicable privacy and data protection laws (e.g., the MGDPA and the Health Insurance Portability and Accountability Act) when using AI. Public officials and employees working with AI should know the classification of the data they are working with and enter only public data into AI tools that are open and available to the public, such as ChatGPT by OpenAI and Claude by Anthropic.
If not public data (or other sensitive information) is entered into a publicly available AI tool, the data may become part of the training data for the AI tool and be available to others through the AI tool’s responses. This could violate the MGDPA and/or other data privacy laws.
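To illustrate one possible safeguard, below is a minimal sketch of a classification gate that refuses to send anything other than public data to a publicly available AI tool. The classification labels, the submit_to_public_ai function and the call_ai_tool placeholder are hypothetical, for illustration only; they are not part of any vendor's interface, and a real gate would rely on classifications assigned under the entity's records practices.

```python
from enum import Enum

class DataClass(Enum):
    """Simplified, hypothetical labels; actual MGDPA classifications
    are assigned under the entity's records practices."""
    PUBLIC = "public"
    PRIVATE = "private"            # not public
    CONFIDENTIAL = "confidential"  # not public

def call_ai_tool(text: str) -> str:
    # Placeholder for a real vendor API call.
    return "AI response to: " + text

def submit_to_public_ai(text: str, classification: DataClass) -> str:
    """Send text to a publicly available AI tool only if it is public data."""
    if classification is not DataClass.PUBLIC:
        raise PermissionError(
            "Not public data may not be entered into a publicly available AI tool."
        )
    return call_ai_tool(text)

# Example usage (illustrative):
print(submit_to_public_ai("Draft press release text", DataClass.PUBLIC))
```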
When employees are unsure of the classification of data they are working with, they should contact their responsible authority for guidance.
Implementing a policy governing employees’ use of AI may help mitigate this risk. Policies can provide guidance by addressing acceptable and unacceptable AI uses, identifying which AI tools employees are authorized to use, specifying training requirements, and providing consequences for violations.
Errors
AI does not always give accurate responses or correct information. An AI hallucination occurs when the system generates a response that contains false or misleading information and presents it as fact. Hallucinations can be caused by a number of factors, including insufficient training data, incorrect assumptions made by the AI system or biases in the training data.
For example, when an MCIT employee recently asked an AI system “How many Minnesota counties have a population under 5,000?” it responded, “There are currently eight counties in Minnesota with a population under 5,000. These are Beltrami, Chippewa, Kittson, Lake, Lincoln, Norman, Traverse and Waboose counties.” This is a hallucination. Some of these counties have populations of more than 5,000 people, and Waboose County does not exist.
Employees should always review information obtained from AI tools for accuracy. Relying on inaccurate or fabricated information can cause significant harm, including financial losses and/or legal liabilities (e.g., lawyers have been sanctioned for citing hallucinated cases in court filings), poor decision-making, reputational costs and loss of the public’s trust.
Bias
An AI tool may learn biases present in its training data or through its design. Those biases may be perpetuated or amplified in the tool’s results, which reduces its accuracy and can result in harmful discriminatory outcomes.
For example, one prominent company discovered its experimental AI hiring tool was downgrading résumés that included the word “women” (e.g., “women’s golf team captain”) and graduates of women’s colleges. The company determined the hiring tool was trained on historical hiring data where successful candidates were generally men.*
Employees should watch for signs of potential bias when using AI tools. Failing to address bias in an AI tool can lead to legal liability and reputational damage. Regular bias and fairness audits can help ensure AI tools are not perpetuating existing discrimination; one common check is sketched below.
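For illustration, the following minimal sketch shows one common audit check, the "four-fifths" (80 percent) rule of thumb used in employment contexts: each group's selection rate is compared with the highest group's rate, and ratios below 0.8 are flagged for review. The data format and group labels here are hypothetical, and a real audit should be designed with legal counsel.

```python
from collections import Counter

def selection_rates(records):
    """Selection rate (selected / total) per group.
    records: iterable of (group, was_selected) pairs -- hypothetical format."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's rate to the highest rate; below 0.8 suggests
    possible adverse impact under the four-fifths rule of thumb."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Fabricated example data, for illustration only
records = (
    [("men", True)] * 40 + [("men", False)] * 60
    + [("women", True)] * 20 + [("women", False)] * 80
)
rates = selection_rates(records)
for group, ratio in disparate_impact_ratios(rates).items():
    flag = "  <-- below 0.8, review" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f}{flag}")
```

A ratio below 0.8 does not prove discrimination, but it flags results that warrant a closer look before the tool is relied on.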
*Source: “Insight—Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women,” Reuters, Oct. 10, 2018.