Scott Kosnoff, a partner at the Indianapolis office of Faegre Drinker Biddle & Reath LLP, had been advising insurance companies about risk for a couple of decades when he began reading more about big data and artificial intelligence and how it was beginning to drive decision making, initially for tech companies but increasingly for all firms.
He saw immediately the impact AI could have in the insurance industry, where companies are all about gathering data and predicting risk. But the more Kosnoff learned about the field, the more he understood that the algorithms—or the rules that determine how computers analyze data—carried their own risks.
So about six years ago, he decided that, to protect his clients, he needed to start talking to them about AI—even if they weren’t ready to hear it. And what he’s learned applies across industries of all kinds.
“While AI and algorithmic decisions are front and center in the insurance world, it’s basically the same set of issues for other sectors of the economy as well,” he told IBJ. “And so, I think in the future, I probably will not be so exclusively dedicated to insurance because this AI stuff is everywhere.”
Last month, Faegre Drinker announced that Kosnoff would co-lead (with Washington, D.C.-based attorney Bennett Borden) an interdisciplinary artificial intelligence and algorithmic decision-making team that the firm calls AI-X. IBJ talked to Kosnoff about the team.
Why has AI become so important in the insurance industry?
Insurance companies are in the business of predicting the future. When they take applications from policyholders, they’re trying to figure out who’s a good risk and who’s not such a good risk. And who’s likely to live a long life, and who might die prematurely, and who’s more likely than not to get in a car accident.
Artificial intelligence and algorithmic decision making are really powerful because they allow us to look at an ocean of data points and see previously unknown connections between those data points and outcomes that we care about. So, if you are an auto insurer and you’re trying to figure out who would be a good risk from a driving standpoint, you’ve got just a wealth of data to look at and you can start to draw connections between different behaviors and different facts and the likelihood of an accident, for example.
The same thing applies to life insurance or for lending—who gets a loan and at what interest rate. And who gets in a college, who gets which scholarship. And who gets an interview for a job—and who gets the job after they get the interview. The use of AI is so pervasive because it at least opens the possibility of predicting the future much more accurately than we can do on our own.
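To make the idea concrete, here is a minimal, hypothetical sketch of the kind of pattern-finding Kosnoff describes: a tiny logistic-regression model, built in Python with scikit-learn on made-up driver data, that connects a few behavioral data points to the likelihood of an accident. Every column, value, and variable name below is illustrative, not any insurer's actual model.

```python
# Hypothetical sketch: predicting accident likelihood from a handful of
# made-up driver attributes. All data and feature names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: annual_miles (thousands), hard_brakes_per_100mi, prior_claims
X = np.array([
    [8.0, 0.5, 0],
    [15.0, 2.1, 1],
    [22.0, 4.0, 2],
    [5.0, 0.2, 0],
    [18.0, 3.5, 1],
    [12.0, 1.0, 0],
])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = driver had an at-fault accident

model = LogisticRegression().fit(X, y)

# Score a new applicant with illustrative numbers
new_applicant = np.array([[10.0, 1.8, 0]])
print("Predicted accident probability:",
      model.predict_proba(new_applicant)[0, 1])
```

Real underwriting models draw on far more data, but the mechanics are the same: historical outcomes go in, predicted probabilities come out.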
What are the legal ramifications?
Policymakers and regulators have expressed concern about just fairness. Is the algorithm that is generating these predictions of the future fair? Another concern that policymakers and regulators have expressed is about explainability and transparency. Human beings like to understand how important decisions that impact their lives are reached.
But the biggest thing that’s really gotten the most airtime, at least in the U.S., is that AI and algorithmic decision making have a potential for perpetuating past discrimination. On the one hand, turning the decision making over to a machine seems like a good thing, right? Because the machine is going to be free of human biases and prejudices, and it’s just going to do what it does.
But the problem … is that the data that the machines use doesn’t exist in a vacuum. The data is a reflection of society and everything that has happened up to this point. And so, when you’re using any piece of data for purposes of predicting the future, it may have what we call embedded bias, as a sort of a reflection of past societal norms. That can creep into the decision making in a way that no one can really predict.
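A small, hypothetical sketch shows how that embedded bias can creep in even when the protected attribute is deliberately left out of the model: a correlated proxy variable (here, a made-up ZIP-code flag) carries the historical pattern through anyway. All of the data below is synthetic and illustrative, not drawn from any real lender or insurer.

```python
# Hypothetical sketch of "embedded bias": the model never sees the protected
# attribute, but a correlated proxy carries the historical pattern into its
# predictions anyway. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                     # protected attribute (never given to the model)
zip_flag = (group + (rng.random(n) < 0.1)) % 2    # proxy strongly correlated with group
income = rng.normal(50 + 5 * group, 10, n)        # historical disparity in income

# Historical approvals were partly driven by group membership itself.
approved = ((income + 20 * group + rng.normal(0, 5, n)) > 60).astype(int)

# Train only on income and the proxy -- the group label is excluded.
X = np.column_stack([income, zip_flag])
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

Even though the group label never enters the model, the predicted approval rates still split along group lines, which is the kind of quiet carryover Kosnoff is describing.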
How did the AI-X team come together?
When I started doing this work, our firm was still known as Faegre Baker Daniels. And we had not yet merged with Drinker Biddle. It was only after the merger had already been approved that I figured out that we had this data-consulting subsidiary, which had a roster of data scientists who were working on stuff like this. And it was very much one of those, “you complete me” kinds of stories.
One of the partners in the law firm, Bennett Borden, is also a founder of Tritura (the firm’s data-consulting group). He’s wickedly smart. He’s both a lawyer and a data scientist. And he and I do a lot of this work together.
So one of your clients can come to you and not only get advice about AI but also have the team look at its algorithms and flag any problems.
That’s exactly right. We can look at your algorithm, for example, and help you determine whether it might be unintentionally discriminating on the basis of race. Which is powerful and obviously very important. But of course, there are other kinds of protected classes, too, that are independent of race. And that’s much harder to do from an algorithmic-testing standpoint.
Give me an example.
Well, let’s say you’re living in a state where it’s illegal to discriminate based on sexual orientation or preference. We don’t have a reliable way of gathering information on those two factors. And so there’s no good way of testing for it.
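For the race case Kosnoff says is testable, one common check practitioners use (not necessarily the one Faegre Drinker's team applies) is a disparate-impact ratio along the lines of the four-fifths rule. The sketch below runs that arithmetic on entirely hypothetical approval outcomes; his point about sexual orientation is that when the group labels simply are not collected, there is nothing to feed into a test like this.

```python
# Hypothetical disparate-impact check (four-fifths rule of thumb) on made-up
# approval outcomes. Group labels and outcomes are illustrative only.
from collections import defaultdict

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, outcome in decisions:
    total[group] += 1
    approved[group] += outcome

rates = {g: approved[g] / total[g] for g in total}
favored = max(rates, key=rates.get)
for g, rate in rates.items():
    ratio = rate / rates[favored]
    flag = "REVIEW" if ratio < 0.8 else "ok"   # below 80% of the favored group's rate
    print(f"{g}: approval rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```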
If you could give one piece of advice to any company that’s thinking about jumping into using AI for whatever purpose, what would that be?
I think being involved in AI is going to become a survival skill for most organizations. But you have to be smart about it. You have to be thoughtful about what could go wrong, and you have to get out ahead of it. Waiting until the problem presents itself is awfully late in the game. So, when you start thinking about AI, it’s important to have a risk-management framework in place that identifies not just the benefits of your intended uses, but what could go wrong. And then you have to think about the severity of the possible consequences and what you can do to try to avoid them.•
Correction: The spelling of Washington, D.C.-based attorney Bennett Borden’s name has been corrected from an earlier version. In addition, a reference to where the firm is based has been removed.