Keeping up with changing AI is educational challenge

IBJ generated this image using Adobe Firefly with the prompt “lawyers arguing with a judge in a courtroom” and art and pop art tags. Note the man on the left has too many fingers on both hands and the gavel doesn’t appear to have a handle. IBJ enhanced the resolution of this image with Topaz Photo AI. (AI image/Adobe Firefly)

Legal and ethical questions that will arise from the increasing use of artificial intelligence—particularly generative AI that uses existing information to create new content—could test current laws and courts’ ability to untangle the technology.


IBJ talked with Frank Emmert, director of the Center for International and Comparative Law at the Indiana University McKinney School of Law, who is researching artificial intelligence, blockchain technology and cryptocurrencies.

This summer, he was invited to serve on the Silicon Valley Arbitration & Mediation Center Task Force, which is developing guidelines for the use of artificial intelligence in dispute-settlement procedures.

Emmert told IBJ he’s interested in what the technology will mean to democracy, how it can be used for fraud, and the inevitable biases built into the algorithms.

And he explained why he’s not worried about privacy in AI’s seemingly endless hunt for data.

This is an edited version of the conversation.

What are the various legal and ethical issues around artificial intelligence?

We really took a quantum leap with ChatGPT. The computing power and the AI concepts that are being developed are going to replace humans for certain tasks. Not for everything, but it’s going to go far. We’re going to start with fairly menial stuff, like assistance in hospital care and the kind of work that people generally don’t really appreciate, like heavy lifting and construction work.

Then, of course, it goes into all of these intellectual pursuits like education and research.

In education, we will have to deal with students cheating if they turn in a paper and it’s an AI paper that was [created] with some tool—and ChatGPT isn’t even the most impressive. There are more advanced tools out there; they’re just behind the paywall right now, so not everybody’s going to have access. It’s going to be hard for the educators to understand whether it’s really the student’s work or whether it’s AI.

A lot of people talk about privacy. They’re worried about data. That’s not really what keeps me up at night. AI needs massive amounts of data. … [AI programs] want a large amount of data on a large amount of people because they need to average it out and detect patterns, and then they can build future applications.

I think our individual data is quite meaningless for AI. They don’t care what I do with my life. They care about what the people in greater Indianapolis do, or in the Midwest or in the U.S. or on the planet. So, I’m not terribly concerned about my data going into these large-language models or other applications that build use cases for AI.

What is already getting more interesting is fraud using AI, like deepfakes. Even if the experts debunk it, there will be some people in our country who want to believe that there’s something to it. It’s going to impact our democracy, our democratic process, because people will say, “I saw it with my own eyes.”

I see that as a problem, and I’m not sure how you can stop that. We can regulate it, but you need to find someone who did it, and you have to have the means of sanctioning them. If they do it in North Korea, what are you going to do about it? It’s all floating around on the internet, and I don’t see how we can easily stop that.

Then there’s the cognitive biases. If you think about what AI does, it looks at patterns and the past to predict the future and generate texts. If the past is not very good, the predictions or the text it generates on that basis might not be very good, either.

I’ll try to give an example. Let’s say you are cranking through a large number of job applications or people who apply for grants, and you have AI to help you select people who are eligible or interesting candidates. Based on past experience and depending on how the algorithm is designed, the AI might conclude that certain minorities are not going to be as good … for this job as a white person because, in the past, they haven’t done as well on the jobs. But the AI wouldn’t know that it’s because they didn’t get equal housing and public schooling.

These cognitive biases can be dealt with at an algorithmic level. Regulators and politicians can write that into the code. We can have a statute, but does that prevent the algorithm from doing it? When it looks in the rearview mirror, it might extrapolate into the future, and then we’re perpetuating stereotypes.

But I think the real story is what it’s going to do to displace jobs across entire fields of the industrial sector. How will we stay in control, manage the AI, actually supervise it and be responsible for it? We can’t do that without people. We will still need humans supervising, evaluating and ultimately taking responsibility for the AI, and I think that’s where the real regulatory challenges are going to be.

What are some laws and frameworks in the United States that apply to artificial intelligence, and what do you see on the horizon?

In the U.S., privacy is all case law. It’s all over the place. Some states have a regulatory framework or statutory law, and others don’t. And then there’s the issue of how you regulate the flow of data across state lines.

The [European Union] has adopted AI regulation. If you compare it to what we do in the U.S., the president has given a mandate to federal agencies and said every agency has to build a plan. Of the 41 regulatory agencies in the U.S. at the federal level, only five have so far even started working on this plan. They’re just not doing anything. We’re in competition with China, the EU and India. If they are moving faster and are doing better, we will pay a price for that.

As a state, Indiana has to stay ahead. We are in competition with our neighbors. We have to build an environment that attracts high-quality jobs. Companies have to stay ahead.

Our government agencies need to prepare for this, and so do our schools and universities. We have to create courses and educate our students. If we don’t … we are losing out. We’re going to be watching the others pull ahead and leave us behind, and jobs are going to be created in places where they are better organized.

People have written that if you know how to apply AI and use it to your advantage, then it’s going to be an extension of the human brain.

You’re going to use it as your co-pilot. But that requires that we are the pilot, that we know what we’re doing and that we can control the AI and can use it to its best effect. But if we don’t know, then we’re not going to be the pilot. We’re either going to be in the back seat, and it’s going to take us somewhere we may or may not want to go, or other people are going to fly, and we’re going to walk.

How can people and businesses protect their intellectual property?

It’s really a question of copyright. Whatever AI does, there will be bits and pieces taken from something that someone has already done.

There’s an old saying that plagiarism is copying from one person, and research is copying from everyone. I guess you could apply that to AI. It becomes plagiarism if it copies from a small number of sources, and it becomes innovation if it copies from a large number of sources. And where is the line between those two? You’re going to have to litigate that somehow.

How should businesses be thinking about how to use these generative AI tools?

I’ve been telling my students for a while that something like 40% of the entry-level jobs are going to disappear in the next five to 10 years. Then the question becomes: Who are the law firms going to hire? They are going to hire fewer people, but they’re going to need special skills, and among those special skills is how to use the AI properly, how to give the AI the kind of buzzwords and questions that lead to good results, but also how to review what the AI produces and recast it in such a way that it’s not redundant or duplicative.

The AI isn’t perfect yet. It’s still in its infancy. … But at the same time, the AI is getting better all the time. I’ve watched it. Stuff that you couldn’t do three or four months ago … it’s scary fast. It’s really self-improving. It’s learning and teaching itself. AI is already writing code, and the day is going to come where it’s basically programming itself. That’s going to be a regulatory problem, because then we have to really watch out that we don’t lose control.

Let’s say the AI is being used and supervised, but it’s also getting better. Our tendency as humans is to become more lazy, right? We trust it more and more, and we work less. … We have to stay ahead. We have to have a new generation of young people who are going to enter the job market who know how to do this professionally and ethically.

At McKinney, we are preparing for a course on AI, but I’m going to say we’re only one of a handful [of law schools], if not the only law school, in the country that is currently doing that. By the time our current generation of law students graduate, this is going to be an issue in the job market.

Plenty of law firms and larger corporations are going to interview people and ask, “Can you use the AI?” And of course, they’re going to say yes, and then they will say, “Show us.” They’ll want to see whether you’re a power user and whether you can responsibly do this, that you know what you’re doing, and you can evaluate the results to see whether they’re good. The people who won’t be able to do that will not get the job.•
