As artificial intelligence becomes more deeply embedded in clinical care, healthcare organisations are being urged to prioritise risk management alongside innovation. Lockton warned that AI can create significant legal, operational and governance exposure if deployed without safeguards.
Precision medicine is not new, but AI has increased its speed and scale. Healthcare organisations are now using it to build genomic data repositories, support early cancer detection and treatment, and address health inequalities among minority populations. However, research has identified racial, ethnic, sex-specific and ancestral disparities in precision medicine systems.
“These biases can negatively impact prediction accuracy, therapeutic responses, and the generalisability of treatments, especially for populations under-represented in clinical data sets,” Lockton said.
According to Lockton, uneven data and biased treatment assignments can distort machine learning models, affecting how biomarkers are identified and how treatments are recommended. Gaps in genomic data may limit understanding of disease in minority groups, and biased algorithms can influence clinical risk assessments.
Regulation is another concern. Lockton said that AI development has moved faster than the laws designed to govern it, creating uncertainty around intellectual property, liability and product approvals. As rules change, they may introduce “new requirements that an existing AI-driven development process fails to meet—resulting in approval being withdrawn for the resultant drugs.”
Liability remains unclear because of the “black box” nature of some AI systems. If harm occurs, responsibility may fall on healthcare providers, developers or system owners, and could extend to directors and officers.
“In this context, having robust contractual relationships between the various parties engaged in the development, supply and use of AI is essential to ensure the liability of each party is clearly defined and understood,” Lockton said.
Even if liability is not proven, AI failures can lead to financial losses, especially given the long development timelines in healthcare. Lockton also warned of systemic risk, where a small error could affect many patients. To reduce risk, Lockton recommended strong validation and monitoring of AI systems, ongoing clinician oversight, the use of diverse datasets and strict data protection controls.
The firm concluded: “By embedding risk management into each stage of AI adoption, healthcare providers can reduce liability exposure, safeguard sensitive patient data, and maintain trust. Ultimately, this de-risking will help to realise the potential of precision medicine and deliver lasting value for patients and organisations alike.”