There’s no shortage of jokes about the sluggish pace of government bureaucracy, but in the case of artificial intelligence, slow and steady might be the best course of action for state and federal lawmakers trying to regulate and govern a technology with wide-ranging implications.
Fears about the potential misuse or unintended consequences of AI prompted more than half of all U.S. states to introduce AI legislation in the 2023 legislative session, addressing issues ranging from election security and health care to impacts on mental health and hiring and lending practices.
The Indiana Legislature didn’t take any action on the issue this year, but a special bipartisan committee will meet Oct. 2 and Oct. 25 and plans to hear from experts to identify the risks, challenges and opportunities the technology will bring to industries, jobs and government services.
“I’m of the mindset that we need to figure it out and embrace it because it’s coming,” said Indiana Senate President Pro Tem Rod Bray, R-Martinsville.
Sen. Liz Brown, R-Fort Wayne, who authored a data privacy bill that passed during this year’s session, said she anticipates lawmakers will hold off on any AI legislation until the committee releases its findings and recommendations.
“I think we have to be cautious because I don’t even know if we all operate on the same definition,” Brown said.
There are multiple categories of AI technology, including machine learning, speech recognition and large language models like ChatGPT, which have drawn considerable media attention.
Even before ChatGPT catapulted AI into the national consciousness, Indiana state agencies were using the technology to communicate directly with Hoosiers.
During the pandemic, the Indiana Office of Technology deployed chatbots for the Department of Health’s website to answer questions about COVID-19, from vaccine services to contact tracing to scheduling appointments.
But overall, the state has taken a cautious approach. The technology office uses artificial intelligence tools in its cybersecurity efforts to monitor networks across state agencies, but state officials have been hesitant to allow state agencies to use generative AI tools due to security concerns.
“From the state’s perspective, the big fear is, ‘How do we ensure our 30,000 employees are not putting something into a generative AI engine that should be secure, proprietary or confidential,’” said Tracy Barnes, chief information officer in the Indiana Office of Technology. “That’s the big piece that’s missing right now.”
Barnes hopes state officials will focus on adopting a structure of policy and governance to ensure the technology is used properly.
“There’s so much data that could be skewed because of how free the internet is,” he said. “These are things as a state we need to figure out.”
State officials charged with ensuring Hoosiers’ sensitive data remains private have also found themselves thrust into the position of governing artificial intelligence.
“AI governance right now is a moving target,” said Ted Cotterill, who serves as Indiana’s chief privacy officer and as general counsel for the Indiana Management Performance Hub, which helps state agencies use data science and analytics to solve state policy challenges.
The hub uses machine learning and algorithms for probabilistic record linkage, a technique for determining whether two records refer to the same person or entity.
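Record linkage of this kind is often described with the Fellegi-Sunter model, in which each compared field contributes a log-likelihood weight for or against a match and the weights are summed and compared to a threshold. Below is a minimal Python sketch of that idea; the field names, m/u probabilities and threshold are invented for illustration and do not describe the Management Performance Hub’s actual system.

```python
# Minimal sketch of Fellegi-Sunter-style probabilistic record linkage.
# The fields, m/u probabilities, and threshold are illustrative only.
import math

# m = P(field agrees | records truly match)
# u = P(field agrees | records do not match)
FIELD_PARAMS = {
    "last_name":     {"m": 0.95, "u": 0.02},
    "date_of_birth": {"m": 0.97, "u": 0.01},
    "zip_code":      {"m": 0.90, "u": 0.10},
}

def match_weight(rec_a: dict, rec_b: dict) -> float:
    """Sum log-likelihood weights across compared fields."""
    total = 0.0
    for field, p in FIELD_PARAMS.items():
        if rec_a.get(field) == rec_b.get(field):
            total += math.log2(p["m"] / p["u"])              # agreement weight
        else:
            total += math.log2((1 - p["m"]) / (1 - p["u"]))  # disagreement weight
    return total

a = {"last_name": "Smith", "date_of_birth": "1980-04-02", "zip_code": "46204"}
b = {"last_name": "Smith", "date_of_birth": "1980-04-02", "zip_code": "46220"}

score = match_weight(a, b)  # two fields agree, one disagrees
print(f"score={score:.2f} -> {'likely match' if score > 4.0 else 'uncertain or non-match'}")
```

In this toy example, agreement on last name and date of birth outweighs the ZIP code mismatch, so the pair scores above the (arbitrary) threshold and is treated as a likely match.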
In states such as West Virginia and Texas, lawmakers this year established advisory councils to study and monitor their agencies’ use of AI systems. In Connecticut, state agencies are being required to take an inventory of AI systems in use, while North Dakota lawmakers passed legislation clarifying that the definition of a person “does not include environmental elements, artificial intelligence, an animal or an inanimate object.”
Connecticut lawmakers acted earlier than many states. In 2022, they passed a data privacy bill that created a task force to look at AI regulation.
“There are so many potential beneficial uses that I think will outweigh the risks,” said Connecticut state Sen. James Maroney, the bill’s author, “but we have to be cognizant of the risks and put in some broad guardrails to make sure we are testing things before we employ them.”
Other states have adopted a more aggressive approach. California lawmakers passed 10 AI-related bills this year, including one that urges the U.S. government to impose an immediate moratorium on the training of AI systems more powerful than GPT-4 for at least six months to allow time to develop “much-needed AI governance systems.”
States are also relying on guidance from the federal government when crafting legislation. Earlier this year, the National Institute of Standards and Technology, part of the U.S. Department of Commerce, released an AI risk-management framework that provided best practices around AI development, design, implementation and deployment.
State officials are concerned with how AI might be used to influence children. Indiana Attorney General Todd Rokita recently joined a bipartisan group of attorneys general from around the country calling on Congress to create an expert commission to study ways AI can be used to exploit children through pornography.
“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the attorneys general wrote in a letter. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”
With the 2024 election fast approaching, some organizations are concerned about how AI can be used to spread misinformation. In March, fake images of former President Donald Trump being arrested by New York City police spread online, and the technology has been used to digitally alter the words of politicians and pundits.
AI has the ability to perform repetitive, mundane tasks, freeing up humans for more creative and complex work, but the trade-off is not without risk. AI systems can make decisions that affect whether a person is approved for a bank loan or accepted as a rental applicant. If machine-learning software is trained on a dataset that underrepresents a particular gender or ethnic group, it can produce biased outcomes.
In response to this concern, local lawmakers in Washington, D.C., passed a bill prohibiting the use of algorithmic decision-making to make eligibility determinations in a discriminatory manner.
“AI algorithms may inherit biases from training data, leading to discriminatory outcomes in areas like hiring or lending,” Cotterill said. “If we’re providing a service that is using an AI-enabled system, we need to be doing it right the first time.”
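One common way to check a system for such outcomes is a disparate-impact audit, often summarized with the “four-fifths” rule of thumb from U.S. employment-discrimination guidance: if one group’s selection rate is less than 80% of another’s, the system merits review. The Python sketch below illustrates the arithmetic; the decision data are synthetic, and applying the 0.8 threshold this way is an illustration, not a description of any state’s actual audit process.

```python
# Minimal sketch of a disparate-impact audit for automated eligibility
# decisions, using the "four-fifths" rule of thumb. The decisions and
# group labels below are synthetic, invented for illustration.

def selection_rate(decisions):
    """Fraction of applicants the system approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a conventional red flag for adverse impact."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model outputs for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f} -> "
      f"{'flag for review' if ratio < 0.8 else 'within guideline'}")
```

Here the ratio is 0.30/0.80 ≈ 0.38, well under 0.8, so an auditor would flag the system for closer scrutiny of its training data and decision rules.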
The Indiana Legislative Interim Study Committee on Commerce and Economic Development is expected to release its findings and recommendations on artificial intelligence by Nov. 1.