AI and insurance: Weighing opportunities against risk for businesses

Cyber and Technology | Article | June 27, 2023

Zurich North America’s Jennifer Hobbs joined an insightful panel discussion on how artificial intelligence may impact the insurance industry.

From potentially life-saving advances in healthcare to the WGA strike, there has been no shortage of discussion of artificial intelligence (AI) recently. While anyone who searches for something on Google or engages with a chatbot has been using AI for some time, recent advances in large language models (LLMs) have been startling to most people outside the scientific research community. In fact, even many within that community have expressed concerns, ranging from general misuse of the technology to “risk of extinction.”

As the saying goes, though, “the horse is out of the barn”: publicly accessible tools and APIs (application programming interfaces) such as ChatGPT are now available to anyone. Concerns surrounding this remarkable technology must now be addressed by prioritizing how to use it for people’s benefit and how to limit its risks.

The issue affects nearly all industries, insurance most definitely included, making a webinar hosted by Insider Engage on May 18 very timely. “Hype or game-changer? What does AI really mean for the insurance industry?” assembled a panel of knowledgeable specialists from different sectors of insurance to offer their insights. The panel included Jennifer Hobbs, VP, Lead Data Scientist for Zurich North America.

Hobbs explained how LLMs work and what makes them so powerful.

“Large language models are really growing out of advances in the deep-learning space,” she explained. “Obviously, they’re large. That gives them a huge capacity to store information and therefore make broad predictions. But the other really unique thing about this is that they’re pre-trained.”

“They’re trained in these usually unsupervised/self-supervised methods, often to predict the next or preceding or surrounding tokens, words, sentences, paragraphs — and without having to provide annotations, which is a major challenge in a lot of the other areas of machine learning,” Hobbs continued. “And I think that’s what’s really leading to a lot of the excitement — that they’re able to handle unseen tasks, unseen prompts, unseen pieces of data that they haven’t been trained on.”
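
To make the pre-training idea concrete, the short Python sketch below (an illustration of the general concept, not anything presented in the webinar) builds next-token training pairs from raw text; the labels come from the text itself, so no human annotation step is needed:

def next_token_pairs(text, context_size=4):
    """Turn raw text into (context, target) pairs for next-token prediction."""
    tokens = text.split()  # toy whitespace "tokenizer"; real LLMs use subword tokenizers
    pairs = []
    for i in range(1, len(tokens)):
        context = tokens[max(0, i - context_size):i]
        pairs.append((context, tokens[i]))
    return pairs

sample = "the claim was filed after the storm damaged the roof"
for context, target in next_token_pairs(sample):
    print(f"context={context} -> predict '{target}'")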

AI efficiencies and exposures: Getting the balance right

Easy access for the public, and therefore for any business, is what makes the latest evolution in LLMs a game-changer, according to Hobbs.

“With truly relatively minimal effort, you can start seeing the impact it could have in your business, and so I think the bar to adoption is much lower than it has been.”

Public access, however, is also driving many of the risks of using the technology. While the general public’s use of tools like ChatGPT will help the technology evolve, and hopefully improve, Hobbs warned that businesses need to guard carefully against feeding confidential information into those same public tools.

“Don’t throw your private data into the public instance,” she said, noting one major corporation has already gotten in hot water for leaking company data through ChatGPT. “If you’re experimenting with ChatGPT, make sure it’s in a private environment, that you have a private instance, before you start throwing sensitive data in there.”
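
One practical habit behind that warning is to screen or redact sensitive values before a prompt ever leaves your environment. The Python sketch below is illustrative only; the redaction patterns and the send_to_private_llm placeholder are hypothetical assumptions, not any vendor’s API:

import re

# Illustrative patterns only; a real deployment would rely on a vetted
# data-loss-prevention tool and an approved private/enterprise LLM instance.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "policy_number": re.compile(r"\bPOL-\d{6,}\b"),  # hypothetical internal format
}

def redact(text):
    """Replace sensitive values with placeholders before text leaves your environment."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def send_to_private_llm(prompt):
    """Hypothetical placeholder for a call to your organization's approved, private endpoint."""
    raise NotImplementedError("wire this to an approved private instance, never a public one")

prompt = "Summarize the claim filed by jane.doe@example.com on policy POL-123456."
print(redact(prompt))
# -> "Summarize the claim filed by [EMAIL REDACTED] on policy [POLICY_NUMBER REDACTED]."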

Battling AI hallucinations and biases

The discussion also touched on another major risk of LLMs, one that has the ring of dystopian sci-fi but is now part of our reality: hallucinations. Artificial intelligence, like humans, can completely fabricate stories, photos and other content that looks authentic. This not only risks spreading misinformation at scale, but it could also damage the reputation of any business using the technology in customer-facing applications, such as chatbots. Hobbs, however, sees the problem as more of an extension of a challenge we have already been living with collectively for a while.

“At a certain level, it’s very similar to search right now. I go on [the internet], I search something … I go to a website, I read it … I don’t have a guarantee the website is correct, but I as the human am using my judgment based on what I know of the sources, and what type of domain it’s put on, what type of references they give … and I can assess the reliability of that source,” Hobbs explained. “Here [with open-source AI] with the information just given back, you don’t know what it’s relying on, how that information has been obtained or been generated. Very often it’s correct, but how much incorrect [information] and what type of incorrect [information] is allowable for all the hundreds of right answers that it might give?”

“And then there’s the issue of bias, both explicit and implicit,” she continued. “We know these models aren’t trained on the full, uniform distributions of the world. We know they tend to be a little bit better on English, although still very good on many of the other major languages. But even if you pick up a little bit of difference [in language] and you use it on the same task but on English text versus French text, what impact does that have on the downstream application that you’re attempting to build? Does that have any unforeseen, unintended consequences? I think that puts the onus back on the person developing that product, developing that application, to really sift through and ask those tough questions, to understand what the impact of the model and the product they’re building will have on the end user.”

The positives and potential of AI in insurance

For all the foreboding possibilities of LLMs, the positives are also seemingly infinite. And in the insurance industry, LLMs and other machine-learning technologies are already in use in many practical and helpful ways.

“They’re being used by a number of companies to help make sense of the huge amount of data they have,” Hobbs said, “whether it’s extracting information from their documents — their notes from claims, from underwriting — to help them assess risk better, to process claims to improve their efficiencies … that’s certainly going on.”

Hobbs pointed to fraud as a major issue where use of the technology is both a risk and an opportunity, enabling criminals but also giving investigators a more powerful tool to uncover fraud.

Amid all the opportunities to improve the customer experience, Hobbs noted, it is essential to keep people at the heart of key decisions in insurance.

“Keeping the human in the loop is really paramount to ensuring safety and reliability and ethical use,” she said. “And that’s really key, particularly in insurance and even more for pricing and rating.”
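
One common way to put a human in the loop, sketched below with hypothetical names and thresholds rather than anything prescribed by the panel, is to route low-confidence suggestions, and anything touching pricing or rating, to a reviewer instead of applying them automatically:

from dataclasses import dataclass

@dataclass
class ModelSuggestion:
    claim_id: str
    recommendation: str    # e.g., "approve", "refer", "deny"
    confidence: float      # model's own score, 0.0 to 1.0
    affects_pricing: bool  # pricing/rating decisions always get human review here

REVIEW_THRESHOLD = 0.90  # hypothetical cutoff; set with your governance and actuarial teams

def route(suggestion):
    """Send low-confidence or pricing-related suggestions to a person instead of auto-applying."""
    if suggestion.affects_pricing or suggestion.confidence < REVIEW_THRESHOLD:
        return "human_review_queue"
    return "auto_apply_with_audit_log"

print(route(ModelSuggestion("CLM-001", "approve", 0.97, affects_pricing=False)))  # auto_apply_with_audit_log
print(route(ModelSuggestion("CLM-002", "approve", 0.97, affects_pricing=True)))   # human_review_queue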

Whether one is thrilled by the possibilities of AI or dismayed by its threats, Hobbs said one thing is certain: businesses need to immediately start learning more about it and considering when and how to use it.

“The technology is here to stay. It’s not going to go away,” she said. “Certainly, there’s a lot of hype right now. It’s not a silver bullet; you’re not going to go download an instance and tomorrow you’re going to be a fundamentally different company. There’s a lot of engineering, there’s a lot of data, there’s a lot of governance … ethics questions … that go into developing impactful insurance-focused products. But I think it’s important to start. So, wherever you are on that journey, this is becoming a more and more prevalent technology. Become familiar with it; there’s a lot of resources out there. The first step is understanding the risks. Prepare yourself for that. Get your data in the right place — the right format with the right security. The possibilities and the impact really span every element of the value chain. It’s exciting times.”

Also participating in the Insider Engage webinar were Chris Mullan, SVP of Product, EigenTech; Bill Keogh, Operating Partner, Eos Venture Partners; and Sridhar Manyem, Senior Director, Industry Research and Analytics, AM Best. The event was moderated by Matt Scott, Contributing Editor, Insider Engage. To watch a recording of the webinar for free via BrightTALK, register here.

 

Zurich neither endorses nor rejects the recommendations of the discussion presented. Further, the comments contained in the webinar are for general distribution and cannot apply to any single set of specific circumstances. If you have a legal issue to which you believe this article relates, we urge you to consult your own legal counsel.