Innovation expert warns of dangers of ‘Gen AI’ for protection industry

As artificial intelligence becomes one of the most talked about new technologies since the advent of the mobile phone, not everyone is convinced that it will prove to be a good thing for either humanity or the protection industry.

One of those people is Shân Millie, founder of Bright Blue Hare, who was upfront about her opinion on both AI and generative AI (Gen AI), such as ChatGPT.

“I’m not a fan girl of AI generally and certainly not of Gen AI specifically,” Millie said.

Millie was speaking on Thursday (29 June) at Protection Review’s ProtectX7 about the humanity of protection and what that means for individual leaders and their relationship with technology and data.

“To be honest I wouldn’t trust generative AI to tell me the time – let alone embed it in a claims process or – God forbid – make it my go-to for customer interaction,” Millie said.

“My message to you as protection practitioners, pathfinders and leaders is, it’s actually your responsibility to put data and tech in its rightful place and to transparently, purposely and verifiably, put it to work for your customers,” she said.

Speaking on the “inconvenient truths” around AI, Millie noted, “ChatGPT and generative AI actually are inherently, extremely, incredibly risky for the business. These automated systems tap into vast, voraciously hungry, carbon-intensive data sets. And those data sets, by the way, are sucked in without concern, without attribution and 99% of the time with no auditability.

“You can’t see where those sources are coming from. You can’t interrogate those models. And then what happens is those chunks of data are literally slapped together in a plausibly coherent manner.

“So, the way I think about it is, it’s like typing in a search query and getting back a return that could consist of stuff that actually isn’t true or doesn’t exist at all,” she said.

But the problem goes even deeper than that, building on negative stereotypes and promoting prejudice.

“What we know for sure already though, is that text image AI, generates gender and racial stereotypes, exacerbates and amplifies – virtually exponentially – the inequalities, biases and all kinds of embedded exclusion that are already there,” she said.

But what should brokers and advisers do if their investors, peers or even their boss say that despite the ‘teething problems’, they should just experiment with it and do something anyway?

Previously there may not have been any official regulations on automated decision making – but that is no longer the case, Millie said, noting both the EU AI Act, which classifies AI systems intended to be used for insurance purposes as high-risk, and the upcoming Consumer Duty.

“One of the many impacts of Consumer Duty has been an enforced priority to fix well-known and long-lived deficiencies in people, process, technology and culture.

“Well, no more talk. We must now measure, report and verify our ability as firms to consistently deliver on things like fair value, creating products that customers actually understand, systematically identifying and supporting vulnerabilities and acting on foreseeable harms.

“And the clue is in the word, foreseeable, isn’t it?

“In my work, I’m seeing how Consumer Duty is fundamentally changing the way that transformation and digitalisation projects are conceived and executed, changing corporate innovation and partnering focus.”

But all of this is nothing new for the insurance industry.

“We need to recognise that we are the original automators at scale. Automating, decisioning and profiling based on personal data is the bedrock of protection. We’ve been doing it for a very long time,” she said.

“The humanity of protection means making critical thinking about technology and data your guiding light as practitioners and leaders in protection and requiring outcomes that meet the standards set out by Consumer Duty, first, and above all else,” Millie concluded.