The implementation of Gen AI in claims
With Fredrik Thuring, Head of Operational Analytics, Tryg
Published: 22 January 2024
Where are we already seeing successful deployment of generative AI within the claims process?
Across the industry, we’re still figuring out how best to use gen AI. I have seen cases of actual deployment in production, and two examples come to mind.
First are the large and complex claims and, at the other end of the spectrum, the small and fast claims. These call for two different applications of gen AI.
For the large and complex claims, it’s essentially using gen AI to summarise a lot of often lengthy reports. Historically, handlers have needed to read and understand these reports to assess the claim fully. There might even be examples of this live in production, but I know there are experiments with using gen AI to gather all the reporting information, make a summary, and thereby save some time for large and complex claims handlers. So that would be one practical use case of gen AI – perhaps not fully deployed yet, but it represents the direction the industry is moving in.
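The staged summarisation described above can be sketched as a simple map-then-combine pass over a long report. This is an illustrative outline only: `summarise_with_llm` is a hypothetical placeholder (here it just takes the first sentence of each chunk), where a real system would prompt a gen AI model.

```python
# Illustrative sketch: summarising a lengthy claim report in stages.
# summarise_with_llm is a hypothetical stand-in for a real gen AI call.

def chunk_report(text: str, max_chars: int = 2000) -> list[str]:
    """Split a long report into pieces small enough for a model's context."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarise_with_llm(text: str) -> str:
    """Placeholder: a real system would prompt a gen AI model here.
    For illustration, we return the chunk's first sentence."""
    return text.split(".")[0].strip()

def summarise_report(report: str) -> str:
    """Summarise each chunk, then join the partial summaries for the handler."""
    partial = [summarise_with_llm(chunk) for chunk in chunk_report(report)]
    return " ".join(partial)
```

In practice the combine step would itself be a second model call that condenses the partial summaries, but the chunk-then-summarise shape is the same.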
On the flip side, we have the small and fast claims, where many insurance players use straight-through processing. They encourage the customer to go online, write a description of the claim in a free-text field and then submit it. Now, there are non-gen AI methods to solve these problems, and I think they have been deployed quite extensively, but we now see those methods being replaced by gen AI, and the performance is simply better. So, say a person had a claim: they’d go to the insurance company’s webpage and, behind the scenes, there’s already a gen AI algorithm reading the description, categorising it and sending it to the right claims department. In some cases, it may even assess coverage. It accomplishes all of this with minimal human interaction. Those would be, off the top of my head, the two use cases where we are closest to leveraging gen AI.
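The free-text triage described above can be sketched roughly as below. The categorisation step is stubbed with keyword matching purely for illustration – in a real deployment it would be a gen AI call constrained to return one of the known labels – and the department names are invented, not taken from any particular insurer.

```python
# Illustrative sketch: routing a free-text claim description to a department.
# categorise_claim is a keyword-matching stub standing in for a gen AI call;
# department names are hypothetical.

CLAIM_DEPARTMENTS = {
    "motor": "Motor Claims",
    "property": "Property Claims",
    "travel": "Travel Claims",
    "other": "General Claims Intake",  # fallback when no category fits
}

def categorise_claim(description: str) -> str:
    """Stub for a gen AI categorisation call returning a known label."""
    text = description.lower()
    if any(word in text for word in ("car", "vehicle", "collision")):
        return "motor"
    if any(word in text for word in ("roof", "water damage", "burglary")):
        return "property"
    if any(word in text for word in ("flight", "luggage", "abroad")):
        return "travel"
    return "other"

def route_claim(description: str) -> str:
    """Map the model's category label to a claims department."""
    category = categorise_claim(description)
    return CLAIM_DEPARTMENTS.get(category, CLAIM_DEPARTMENTS["other"])
```

The key design point is that the model is only asked to produce a label from a fixed set, while the routing itself stays deterministic – which also makes the pipeline easy to audit.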
What challenges must insurers overcome for generative AI to be truly utilised within the claims process?
Here, too, there are two key points: one technical and one cultural.
On the technical side, I would say that’s a fairly limited challenge, especially for players who already have straight-through processes in place or are handling claims automatically. It’s basically a matter of replacing the behind-the-scenes techniques with a gen AI algorithm, which will perform better.
So I’d say the cultural challenge is the real issue – how do you get your claims handlers and operational staff to really embrace gen AI as a tool for making their job more interesting? To my mind, that is the real twist of generative AI’s potential. There are elements of the work being done by humans at insurance companies that can be replaced by generative AI and, by doing so, the tasks performed by humans will evolve and become more engaging. As an industry, we might need profiles that can better cope with artificial intelligence, creating an environment where we have an AI algorithm doing the number crunching, looking into tables, into information, whilst having a human in the loop to provide that empathetic base and making sure the customer is feeling safe and is being well-treated. That’s the kind of cultural shift that we as an industry are leaning towards.
Will empathetic AI become a reality? What’s the timeframe?
The short answer is yes. I’m not sure what the timeframe will be: I’d still argue that the final outpost for us humans lies in empathy and creativity. We’ve seen gen AI move into those areas as well, but there’s still a long way to go. The labour-intensive tasks especially, like looking into data and checking things are correct, represent the low-hanging fruit that personal claims handlers will go after.
How can insurers address the security and accuracy concerns in the timely implementation of generative AI?
Security and accuracy are entirely different subjects.
So with security, how do you protect your own data and ensure your customers receive the best treatment based on it? There are ways to solve this. You can run AI algorithms on premises, where you know they are secure and no data leakage will occur. Another option is to place conditions on your usage of an off-prem solution – for example, making it clear that your company can send data over, but the solution provider can’t use it for training. So that’s how you would deal with the security issue.
With regard to accuracy, I’d suggest that, as a company, it’s reasonable to begin by starting small and internally. I don’t think it would be wise to make your first use case deploying a gen AI algorithm that assesses a lot of personal data and communicates with customers. So, it’s important to start with small cases and build your experience from there.