OPERATIONALIZING AI: A RENEWED FOCUS ON DATA MATURITY
The initial hype that greeted the launch of ChatGPT, the chatbot that brought large language models (LLMs) to a mass audience, has subsided somewhat as companies have begun to better understand both its potential and its challenges. That potential is hard to oversell; for once, the hype may, over time, actually be surpassed by reality. But right now, carriers are coming up hard against the limits of their own capabilities when it comes to truly operationalizing AI.
A key limitation to successful implementation of AI involves data. There’s no shortage of data but, rather like dirty gold, it needs to be cleansed and sorted to reveal its true worth, said Rajeev Gupta, Co-founder and Chief Product Officer of Cowbell.
There are always embedded signals in the data. Every single data point can have multi-dimensional attributes and you really need to identify and work out what makes sense to you.
RAJEEV GUPTA
CO-FOUNDER AND CHIEF PRODUCT OFFICER, COWBELL
This is an involved process, which starts with cleansing, coalescing and mapping the data against internal data quality standards to work out where there are gaps, overlaps or mistakes, before letting it flow downstream to different processes. ‘Data maturity is the first step,’ said Gupta. ‘Know where you are on that maturity curve, be objective about quantifying where you stand today.’
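The cleansing-and-mapping step Gupta describes can be sketched as a simple data-quality audit. The field names and rules below are illustrative assumptions, not Cowbell's actual standards; the point is mapping raw records against explicit quality rules to surface gaps, overlaps and mistakes before the data flows downstream.

```python
from collections import Counter

# Illustrative required fields -- an assumption, not a real carrier schema.
REQUIRED_FIELDS = ["policy_id", "insured_name", "effective_date"]

def profile_records(records):
    """Map raw records against simple quality rules and report gaps
    (missing values) and overlaps (duplicate keys)."""
    report = {"missing": Counter(), "duplicates": 0, "total": len(records)}
    seen = set()
    for rec in records:
        for field in REQUIRED_FIELDS:
            if not rec.get(field):          # gap: empty or absent field
                report["missing"][field] += 1
        key = rec.get("policy_id")
        if key in seen:                     # overlap: same record twice
            report["duplicates"] += 1
        else:
            seen.add(key)
    return report

records = [
    {"policy_id": "P1", "insured_name": "Acme Co", "effective_date": "2024-01-01"},
    {"policy_id": "P1", "insured_name": "Acme Co", "effective_date": "2024-01-01"},
    {"policy_id": "P2", "insured_name": "", "effective_date": "2024-02-01"},
]
print(profile_records(records))
```

A report like this is one objective way to quantify where an organization sits on the maturity curve: the counts of gaps and duplicates are a measurable baseline to improve against.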
It’s in this data and infrastructure foundational work that generative AI can have some early wins for a carrier. According to Parimal Kumar, VP, Head of Technology at QBE, generative AI can deliver a step change when it comes to adding value to legacy platforms, which can be decades old and patched together with vintage code.
‘Generative AI can read the source code, clean it up and create documentation,’ said Kumar. ‘It might not get it 100% right but even at 70% that’s a big lift and it can convert it into more modern language like Python.’
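The workflow Kumar describes amounts to scaffolding around a model call plus a human checkpoint. In the sketch below, `call_llm` is a hypothetical stand-in for whatever model API a carrier uses, stubbed here so the flow is runnable; the legacy snippet and prompt are also illustrative.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical: replace with a real model client.
    # Stubbed so the surrounding workflow can run as-is.
    return "'''Computes net premium after deducting broker commission.'''"

def document_legacy_routine(source: str) -> str:
    """Ask a model to read legacy source and draft documentation for it."""
    prompt = (
        "Read this legacy routine and write a concise docstring "
        "describing its behavior. Do not alter the logic.\n\n" + source
    )
    draft = call_llm(prompt)
    # Per Kumar's point, expect roughly 70% accuracy: the draft goes to a
    # human reviewer rather than being committed automatically.
    return draft

legacy = "NET-PREM = GROSS-PREM - (GROSS-PREM * COMM-RATE)."
print(document_legacy_routine(legacy))
```

The same scaffold extends to translation: swap the prompt for one asking the model to port the routine to Python, with the human review step doing the same backstop work.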
This kind of project deploys AI tools to get data and systems ready for more AI, creating a virtuous circle of improvement and optimization. Russell Page, Chief Information Officer at Hagerty, described how a modelling factory, a set of procedures that automatically generate predictive models with little to no human intervention, can help analytics and data science teams move from a development model towards a stable, reliable and auditable production model.
A modelling factory is a piece of embedded software within your tech stack, with a contextualised, high quality, easily deployable mechanism for DevOps or ML Ops type of work. It means developers have at their disposal a set of libraries to find the most elegant way to solve a problem. It gives you agility, accelerated deployment and optimised resources and consistent quality.
RUSSELL PAGE
CHIEF INFORMATION OFFICER, HAGERTY
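A modelling factory can be sketched as a loop that fits a set of candidate models, scores each one, and promotes the winner with an auditable record. The candidates and scoring below are toy assumptions, not Hagerty's stack; they show the fit-score-promote pattern that gives consistency and auditability.

```python
import statistics

def fit_mean(xs, ys):
    # Baseline candidate: always predict the mean of the training targets.
    m = statistics.mean(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # Candidate: simple least-squares line.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

CANDIDATES = {"mean_baseline": fit_mean, "linear": fit_linear}

def model_factory(xs, ys):
    """Fit every candidate, score by mean squared error, and return the
    winner plus an auditable record of every candidate's score."""
    audit = {}
    for name, fit in CANDIDATES.items():
        model = fit(xs, ys)
        audit[name] = statistics.mean(
            (model(x) - y) ** 2 for x, y in zip(xs, ys))
    best = min(audit, key=audit.get)
    return best, audit

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
best, audit = model_factory(xs, ys)
print(best)  # the linear candidate beats the mean baseline on this data
```

In a production factory the candidates would be real libraries and the audit record would feed governance, but the shape is the same: little to no human intervention between data and deployable model, with every choice logged.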
When it comes to other functions beyond technology, however, AI implementation faces further barriers, including valid concerns about data privacy, bias, ethics and looming regulation. For Kemi Nelson, VP Data Products and Strategy at Liberty Mutual, the key here is bringing in compliance and legal teams from the beginning and working together to deliver solutions in a spirit of ‘creative empathy’.
And when it comes to putting guard rails around AI to curb hallucinations, bias and unexplainable outcomes, Paul Avilez, VP Data Architecture at CNA, said the trick is to treat the likes of ChatGPT as you would a new recruit. ‘You wouldn’t let a newbie hire go out unsupervised making decisions across the business so don’t let generative AI make any decisions that are not observed by a human,’ he said.
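Avilez's "new recruit" rule maps naturally onto a human-in-the-loop guard rail in code: AI output never acts on the business directly but sits in a queue until a person signs off. The class and role names below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SupervisedDecision:
    """An AI recommendation that cannot take effect without human sign-off."""
    ai_recommendation: str
    approved: bool = False
    reviewer: str = ""

    def approve(self, reviewer: str):
        self.approved = True
        self.reviewer = reviewer

def act_on(decision: SupervisedDecision) -> str:
    # Guard rail: refuse to execute anything a human has not reviewed.
    if not decision.approved:
        raise PermissionError("AI output requires human review before acting")
    return f"executed: {decision.ai_recommendation} (approved by {decision.reviewer})"

d = SupervisedDecision(ai_recommendation="refer account 123 for re-rating")
d.approve("senior_underwriter")
print(act_on(d))
```

The useful property is that the unsupervised path fails loudly: hallucinated or biased output can still be produced, but it cannot reach a business decision without a named human in the audit trail.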
Indeed, Rajeev Gupta of Cowbell noted that bias is common to both AI and humans – but it’s easier to fix in AI.
Operationalizing AI will require careful management, with data maturity and rigorous governance as the foundations underpinning this potent technology, so that its full potential can be realized.