Many sectors are buzzing with the potential of AI but not everyone in health and protection insurance is swayed by its power yet, hears Richard Browne.
Artificial intelligence (AI) and machine learning have demonstrated their prowess in specific areas, such as medical imaging, where they can spot cancers that human doctors might miss, potentially leading to earlier diagnoses and improved patient outcomes.
This success in specialised settings has fuelled excitement about its broader potential. However, its widespread application in the complex and highly regulated health and protection sector remains a subject of intense debate.
As the Health & Protection roundtable in association with Comarch heard, there are some technology leaders who fear it may even be entering “dangerous territory”.
The cautious perspective
Tim Gough, chief information officer of Simplyhealth, struck a cautious note despite his organisation's progress: “We’re quite a bit further down the road. We do a fair amount, and we’re pushing hard to go even more in that space.”
Gough also emphasised that AI is not entirely new to the industry.
“But the expert systems, they’ve been around for years. The real step up is in PR and marketing.”
“I don’t think it’s unreliable – it seems like there are more guard rails in place to kind of give us that confidence to use it.”
But for him the big question remained: “How accurate is it?”
Gough delved into the issue of “hallucination,” a phenomenon where AI generates false or misleading information.
But he said that problem seems to be improving: “They seem to be managed to a point, but just garbage is different from a wrong answer or skipping something slightly wrong. It’s harder to pick up on.”
But he cautioned: “And I think that is the bigger risk for me at the moment.”
He added: “It just seems a bit wrong if you trust it, because it always sounds so confident when it gives you the answer. This is dangerous territory.”
The optimistic view
But others at the roundtable offered a more optimistic perspective on AI’s potential, among them Algirdas Dineika, head of technology consulting for the UK and Ireland at Willis Towers Watson.
Dineika said: “The models are improving – especially with DeepSeek.
“A few more steps and we will be much closer.”
He noted that could take three to five years – “but at the same time infrastructure and the whole ecosystem will improve.
“The whole market doesn’t know where AI is going to go, but the whole market is learning. It’s a massive experiment.
“Everyone is trying to do it, so the amount of data and feedback you get from the technologies is amazing.
“So maybe this is my wishful thinking, but my personal opinion is that within three to five years we’ll get some kind of a real application.”
Juan Redondo Fajardo, managing principal of Capaco from Colombia, also took a positive view.
He noted that AI had already proved a great success at a fintech call centre he knew personally in Colombia, where the company had replaced 18 call centre staff with the latest version of ChatGPT.
He said the quality of the voice of the AI call centre ‘person’ was impressive.
“The AI had a Colombian accent, like it was a 24-year-old female – confident, and able to pay attention and speak slowly.
“I was impressed with the chat. My friend replaced 18 people with that just six months ago.”
“It does the qualification triage.”
At the end of the initial stage, the AI assistant hands over the conversation to a real person to close the business.
And even though customers were talking to an AI, “80% of them thought they were having a natural conversation.”
He concluded that even in the developing world, where labour is cheaper than in developed countries, “if you can switch that directly to call centres where AI can have an interaction with you, I see the business case there.”
“A fintech will do it and then the rest will follow.”
John Underwood, director of technology at Cirencester Friendly, noted: “AI agents are coming to the market which are pre-trained by job role and job function. Some of them are looking to hit the market in the next quarter.”
The importance of the human touch
While acknowledging AI’s potential, several participants at the roundtable also emphasised the importance of the human connection, particularly in sensitive situations.
Frances Hoyle, chief information officer of Vitality Life, pointed out: “I think there are certain things that technology will never replace.
“Focussing particularly on the health business line, we would never allow a member who calls us with a cancer diagnosis to not speak to a human.
“We would never allow them to go through a generative process, because you’re back to that experience and, you know, the appropriate side of how we help somebody through those types of conditions.
“But holistically, the other half of the process can completely fit into those categories.”
On using AI to predict the health of a customer, Hoyle said: “You could use predictive analytics to tell you whether you’re likely to get cancer in the next five years.
“But ethically, we wouldn’t do that. I don’t think we could. And even though there are elements of predicting that – take the Vitality programme, where you can look at somebody’s activity.
“You can look at somebody’s activity to determine, again, propensity for diabetes and so on, and you can do that in a positive way – that’s obviously the Vitality shared value model.
“So for me, we shouldn’t assume it’s global, that this is going to solve for everything, because it won’t, and I don’t think it should. I think it brings us back to the point about the ethics of technology and how we’re using it in the real human world, day to day.”
Download the roundtable supplement for the discussion by following this link.
