ABI Conference 2024: AI’s potential in healthcare is huge but risks are greater – Mehra

The potential for the use of artificial intelligence (AI) in healthcare is huge, but the risks around its use are even greater than in other industry sectors.

This is according to Avi Mehra, associate partner and clinical safety officer at IBM, who was participating in a panel session at the Association of British Insurers’ Annual Conference earlier this week.

Mehra maintained that over the last 18 months the pace of change in technology has been "incredible," describing this as "one of the most transformational moments for industry and society".

“Every industry is affected. Society as a whole is affected and we’re still grappling to work out what the impact is going to be,” Mehra said.

“Now, in healthcare, as we all know, the risks are even greater. The stakes are greater. Patients’ lives are at risk, whether you’re a provider or an insurer.

“And so this is obviously a big concern.”

Nutrition labels for AI

While Mehra told delegates the potential for AI is “huge,” he added there are concerns around it being used in negative ways, such as discriminating against individuals on claims or deepening digital exclusion.

But Mehra added that there are practical ways organisations can deal with these issues.

“First, we’ve heard the word being mentioned a number of times, and if we’re going to have trustworthy AI systems, it starts with trustworthy data,” Mehra continued.

“So you need to know, you need to understand how that AI model has been trained, what data, by who, what governance is in place.

“We have this concept that’s growing called nutrition labels for AI.

“When you go shopping and you’re choosing your cereal for the morning, you have a clear understanding of what’s in it and what’s good for you. The same should be true of your AI model.

“Who’s trained it? What data? What governance is in place? What are the risks? What are the limitations?”
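Mehra did not describe a specific format, but as a rough sketch, such a label could be captured as a simple structured record answering the questions he lists. The field names and example values below are illustrative assumptions, not a published standard:

```python
# A minimal sketch of an AI "nutrition label" as a structured record.
# All field names and values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field


@dataclass
class ModelNutritionLabel:
    """Answers the questions Mehra raises: who trained the model, on what
    data, under what governance, and with what risks and limitations."""
    model_name: str
    trained_by: str                       # who trained it
    training_data: list[str]              # what data it was trained on
    governance: str                       # what governance is in place
    risks: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)


# Example label for a hypothetical claims-triage model.
label = ModelNutritionLabel(
    model_name="claims-triage-v1",
    trained_by="Example Insurer Data Science Team",
    training_data=["2019-2023 anonymised claims records"],
    governance="Reviewed quarterly by a clinical safety board",
    risks=["May under-triage rare conditions"],
    limitations=["Not validated for paediatric claims"],
)
print(label)
```

The idea echoes the “model card” documentation already circulating in the AI community, which records similar provenance and limitation details alongside a model.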

Use case by use case approach

In addition to trustworthy data, Mehra said organisations also need to be careful about each use case of AI and take a use-case-by-use-case approach.

“For some of those clinical use cases where AI is being used for decision making, we need to be much stricter,” he warned.

“We need to have greater scrutiny because the impact is far greater than, for example, using an AI-powered chatbot for customer experience, where the risk is slightly lower or more on the non-clinical end.

“So really take a use-case approach, and the regulation should follow the use case, not the technology.

“You can’t regulate the technology as a whole. The regulations need to follow the use case.”

Keeping humans in the loop

Mehra also warned that humans need to be kept in the loop.

“I’m scared to think about how many decisions are being made now by AI-powered systems where we as individuals do not know that an AI system is being used to make that decision for us,” he added.

“So from my perspective on those big issues around using AI to inform decision making, whether you are an insurer or a provider, humans need to be in the loop for that level of oversight.

“The reality is with what’s going on in AI today, and the progress being made, the potential is big but there’s a lot we still don’t understand.”
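By way of illustration only, and not something Mehra set out, a human-in-the-loop policy of the kind he describes can be expressed as a simple routing rule that sends clinical or low-confidence decisions to a human reviewer. The use cases, threshold, and names below are hypothetical:

```python
# A minimal sketch of a human-in-the-loop gate for AI-assisted decisions.
# The use cases, threshold, and function names are illustrative assumptions.

CLINICAL_USE_CASES = {"diagnosis", "treatment_recommendation", "claims_denial"}
CONFIDENCE_THRESHOLD = 0.9


def route_decision(use_case: str, ai_confidence: float) -> str:
    """Route every clinical or low-confidence decision to a human reviewer,
    so an AI system never makes those calls unsupervised."""
    if use_case in CLINICAL_USE_CASES:
        return "human_review"         # stricter scrutiny for clinical use cases
    if ai_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"         # low confidence: keep a human in the loop
    return "auto_with_audit_log"      # non-clinical and high confidence: automate, but log


print(route_decision("diagnosis", 0.99))         # -> human_review
print(route_decision("customer_chatbot", 0.95))  # -> auto_with_audit_log
```

The sketch also reflects the use-case-by-use-case tiering Mehra advocates: the routing rule keys off the use case first, and the technology’s confidence second.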
