Customers are increasingly generating protection insurance enquiries using artificial intelligence (AI) services such as ChatGPT, but in doing so they are creating serious issues for themselves and their advisers.
Cura Financial Services co-managing director Alan Knowles told Health & Protection the concerning new trend raised data protection issues, with customers inadvertently disclosing sensitive information through the technology when contacting advice firms.
Furthermore, he highlighted that the practice made it harder to identify where customers may be vulnerable.
“One notable trend that emerged for us this year, and is likely to grow, is the rise of AI-generated enquiries,” Knowles told Health & Protection.
“We are now receiving daily enquiries originating from ChatGPT and similar tools. These aren’t just website visits prompted by AI recommendations; we are receiving detailed emails from new customers outlining their medical histories and insurance needs.
“We learned that AI tools are researching suitable brokers and then drafting emails for customers to send in a similar format.”
While this has advantages, as the information received upfront is detailed and well structured, Knowles added there were also risks.
“We lose the early opportunity to spot vulnerabilities, as these AI-drafted emails are very polished and can mask communication-related vulnerabilities,” Knowles continued.
“People are entering sensitive personal information, including medical details, into online tools without understanding how that information may be stored or used.
“While it is positive that tools like ChatGPT are signposting firms like ours, they may also give customers incorrect assumptions about whether cover is available, since they do not understand the nuances of protection advice.”