No individual data should go into open source artificial intelligence (AI) systems used for private medical insurance (PMI), former regulator Isabella Macfarlane has warned the industry.
Macfarlane, who was lead associate of insurance supervision at the Financial Conduct Authority (FCA) until this summer, also emphasised that the sector needs to embed TRUST when using AI.
Now head of London markets at Insurance Compliance Services (ICS), Macfarlane told the Association of Medical Insurers and Intermediaries (AMII) Health & Wellbeing Summit that a colleague had created the acronym TRUST for using AI.
It stands for Transparency, Responsibility, Understanding, Security and Testing.
“Transparency, so if you’re using AI, make sure people know that you’re using it and how you’re using it,” Macfarlane told delegates.
“Responsibility – make sure there is a human in there and, in terms of the FCA, a senior manager function who is accountable for it. In the same way, make sure that you are employing people who understand AI. Don’t just use it for the sake of it. It needs to be understood by the people bringing it to the market.
“Understanding – you also need to make sure that it’s understandable.”
Protecting personal data
And Macfarlane pointed out that security is a critical issue – particularly for PMI, which handles highly personal data.
“Security – obviously, there is a lot of data that goes into AI. You need to make sure that you have the safeguards and governance to protect this data,” Macfarlane continued.
“One thing that I would emphasise more than anything else – especially in the PMI market – is making sure no individual data goes into AI. That’s a pretty obvious one.
“If you are using AI in a wider context and looking to develop your own, there are ways of making sure it’s enclosed and personal data can’t get out, so you could develop that.”
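In practice, that kind of safeguard often starts with stripping personal identifiers before any text reaches a model. The sketch below is purely illustrative – the patterns and the example record are assumptions made for this article, not anything Macfarlane specified, and a real deployment would rely on a vetted PII-detection tool rather than hand-rolled patterns:

```python
import re

# Illustrative sketch of "no individual data goes into AI": strip obvious
# personal identifiers before any text leaves the firm for a model.
# These patterns are assumptions for the sketch, not a vetted PII list.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44|0)(?:\s?\d){9,10}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern with a tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    # Hypothetical claim note, invented for this example.
    note = "Member jane.doe@example.com (NHS 943 476 5919) called on 020 7946 0958."
    print(redact(note))
```

Run on the sample note, each identifier is replaced by a tag before the text could be passed to any model – the "enclosed" part of Macfarlane's point would then be keeping the model endpoint itself in-house.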
Macfarlane told delegates that they also need to address testing.
“You need a proof of concept – making sure that, from the off, you know what’s going to happen and that it’s going to work. And when it’s a soft launch or first launch, make sure it’s in a very safe environment,” she continued.
“And never let it just run; there always needs to be human intervention, ongoing monitoring, ongoing testing.”
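As a rough illustration of that principle – with the confidence threshold and data shapes assumed for the sketch, not drawn from Macfarlane's remarks – a decision gate might log every output for ongoing monitoring and route anything low-confidence to a human reviewer:

```python
from dataclasses import dataclass, field

# Illustrative sketch of "never let it just run": every AI decision passes
# through a gate; everything is logged, and low-confidence outputs are
# queued for a human. The 0.9 threshold is an assumption for this sketch,
# the kind of value a firm would settle on during its proof of concept.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class ModelOutput:
    decision: str
    confidence: float

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def send(self, output: ModelOutput) -> str:
        self.items.append(output)  # a human works this queue
        return "PENDING_HUMAN_REVIEW"

def gate(output: ModelOutput, queue: ReviewQueue, audit_log: list) -> str:
    audit_log.append(output)  # ongoing monitoring: every decision is recorded
    if output.confidence < CONFIDENCE_THRESHOLD:
        return queue.send(output)  # human intervention on uncertain cases
    return output.decision

if __name__ == "__main__":
    log, queue = [], ReviewQueue()
    print(gate(ModelOutput("approve", 0.97), queue, log))  # -> approve
    print(gate(ModelOutput("decline", 0.62), queue, log))  # -> PENDING_HUMAN_REVIEW
```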
Unconscious bias
But Macfarlane urged delegates to have an AI policy and to be aware of unconscious bias.
“Those are the keys – making sure it’s in your governance framework, that it’s embedded in there. Having an AI policy is a very good idea, but there is a lot of unconscious bias within the AI world,” Macfarlane continued.
“So if you’re feeding AI into your underwriting model, ensure the unconscious bias is not there and, if it is, that it is not going to make life exponentially worse for vulnerable customers of all kinds.”
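One simple form such a check can take – sketched here with invented records and the common "four-fifths" heuristic, neither of which comes from Macfarlane's remarks – is comparing a model's approval rates across groups and flagging any group that falls well below the best-served one:

```python
from collections import defaultdict

# Illustrative fairness check on an underwriting model's outputs: compare
# approval rates across a protected attribute and flag groups whose rate
# falls below a ratio of the best-served group. The 0.8 ratio is the
# widely used four-fifths heuristic; the sample data is invented.

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, min_ratio=0.8):
    """Return groups whose approval rate is below min_ratio of the best."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < min_ratio}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(sample)        # A: 0.67, B: 0.33
    print(disparate_impact_flags(rates))  # {'B': 0.5} -> investigate group B
```

A flagged group would not prove discrimination on its own, but it is the kind of ongoing test that makes the bias visible before it reaches vulnerable customers.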