ProtectX8: Explainable AI is key to building digital trust – Lubczanski

Explainable artificial intelligence (AI) is the key to bridging the gap between the benefits of the technology and maintaining trust among policyholders.

This is according to Dr Lizzy Lubczanski, client manager at Swiss Re (pictured), who addressed an online ProtectX8 audience this morning.

She said that while AI had huge potential to increase efficiency, develop new solutions and enable better customer outcomes, the protection and health insurance sectors still need customers to be willing to provide their data for AI.

She added that employee commitment is also crucial if insurers are to achieve these outcomes.

Touching on trust in AI, Lubczanski pointed to a July 2023 Health & Protection article about concerns from underwriters that participating in training AI models could result in their eventual redundancy.

She noted that ProtectZ’s group of under-30s from across the industry concluded that AI was unable to tell customers and compliance departments why it had made certain recommendations.

Compounding the problem was a report from Which? in December concluding that trust in the insurance industry as a whole had reached a new all-time low.

Lubczanski maintained that while customers will happily give away their data to tech giants, insurers have to work much harder to gain the same level of digital trust.

Drivers of digital trust

Lubczanski explained that the Swiss Re Institute has identified the drivers of digital trust as a combination of social, cultural and economic status, and that trust will depend on an individual’s own behavioural characteristics and frames of reference.

These drivers can be split into nine dimensions that influence digital trust, falling into three zones: reassurance, security and reliability, she added.

Reassurance, which keeps a human element in AI, helps create trust in the ability of machines to make decisions autonomously.

Security provides for regulations and cyber security, which help users feel safe in exchanging data with other parties.

And reliability comes from ease of use and accessibility: digital platforms that offer easy functionality and navigation, with fine print that is easy to read.

“Trust in AI lies in understanding how it uses data to reach fair and ethical decisions without compromising human dignity,” Lubczanski said.

“Explainable AI is created from being able to see the benefits of an intelligent decision-making tool while understanding the mechanisms themselves and believing that AI is free of bias,” she added.

“Explainable AI is what will bridge the gap between the benefits of AI and maintaining trust among policyholders.”

Role of cultural and psychological factors

But Lubczanski also pointed out that insurers can build trust in AI, and digital trust more generally, by providing transparency, understanding and a clear explanation of data requirements, while also considering the role of cultural and psychological factors.

These factors, she added, include:

1) Giving back control – letting customers preselect what data they share with you
2) Being transparent – explaining when and how data will be used and anonymised
3) Allowing customers customisation – enabling customers to steer their interactions to reinforce the perception of control
4) Making it convenient – but not so convenient that customers drift into something they don’t want, allowing time for pause and reflection
5) Embedding into an ecosystem – to develop trust in a wider network and reduce impulsivity in data sharing

She also noted that there were three Es which insurers can use to shape digital trust: empowerment, engagement and emotional connection.

Lubczanski added that as an absolute minimum, insurers must always follow data protection standards and AI governance that is consistent with laws and regulations. That includes partnerships with the likes of the Veritas consortium to help financial institutions evaluate their AI solutions against the principles of fairness, ethics, accountability and transparency.

Focus on explainable AI

“Don’t lose faith,” she continued. “The system that customers use to answer questions about their digital trust is not necessarily the same system they use when making digital decisions.

“In a 2021 survey, 72% of users said they trusted Facebook ‘not much’ or ‘not at all’ – yet around two billion users log onto the platform every day and half a million posts are made every minute.

“Remember to focus on explainable AI and use whatever tools you have to make sure you really understand your customers and employees to help bring them along your AI journey,” Lubczanski concluded.
