Technologies like Artificial Intelligence (AI) hold the potential to modernise health and care provision in the UK and provide innovative solutions to some of the sector’s most pressing challenges. With ongoing debate around the sector’s use of AI, and the government committed to building an NHS fit for the future, the time to discuss the complexities of putting advanced technologies into healthcare practice is now. In this article, Dr Claudia Lindner discusses BoneFinder®, an AI software tool developed by academics at The University of Manchester that allows doctors to automatically assess the position of the hip bones in X-ray images.
- Delays in assessing X-ray images for children with cerebral palsy (CP) can lead to missed signs of hip displacement and delayed intervention.
- Researchers from The University of Manchester developed BoneFinder®, an AI tool that automatically calculates key hip displacement measures.
- AI should augment, rather than replace, clinicians’ capabilities, supported by better access to anonymised clinical data – for example through Secure Data Environments (SDEs) – to develop, validate, and embed AI in medical services.
Machine-learning tech to speed up assessment: a Manchester case study
Cerebral palsy (CP) is the most common motor disability among children in the UK. For children living with CP, the risk of hip dislocation is extremely high; dislocation causes severe pain and difficulties with sitting, washing and dressing. It can be prevented by regularly assessing the hip and performing minor procedures promptly when signs of hip problems arise, but if left untreated it can require extremely invasive surgery and reduce patients’ quality of life.
The Cerebral Palsy Integrated Pathway (CPIP) is a national surveillance programme for the assessment of the musculoskeletal system in children with CP, under which children receive regular X-ray imaging so that specialists can identify any issues. While regular imaging for monitoring purposes is beneficial, assessment of the images for hip displacement is often delayed and early signs can be missed, with staff training, workload and technology infrastructure all slowing the production of manual measurements. Compounding this are measurement inconsistencies between users and errors when entering data into different systems.
Machine-learning technologies, such as the fully automatic BoneFinder® software tool developed by academics at The University of Manchester, make it possible to automatically assess the position of the hip bones in X-ray images. Such technologies locate the outline of the bones in the image and use this information to automatically calculate clinically relevant measurements, including the key measurement used to assess the extent of hip displacement in children with CP.
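The key measurement in CP hip surveillance is typically Reimers’ migration percentage: the proportion of the femoral head lying outside the hip socket, measured relative to Perkins’ line. As a minimal sketch of the principle only – the function, landmark names and coordinates below are hypothetical and are not part of the BoneFinder® interface – such a measurement can be derived by simple geometry once the relevant bone landmarks have been located:

```python
# Illustrative sketch: deriving a hip displacement measure from
# automatically located landmarks. Names and values are hypothetical.

def compute_migration_percentage(medial_head_x: float,
                                 lateral_head_x: float,
                                 perkins_line_x: float) -> float:
    """Reimers' migration percentage for one hip (illustrative only).

    Assumes the radiograph has been aligned so that Hilgenreiner's line is
    horizontal, making Perkins' line a vertical line at x = perkins_line_x,
    and that the lateral direction is towards increasing x.
    Inputs are x-coordinates in pixels.
    """
    head_width = lateral_head_x - medial_head_x   # total femoral head width
    uncovered = lateral_head_x - perkins_line_x   # part lateral to Perkins' line
    if head_width <= 0:
        raise ValueError("lateral landmark must lie lateral to the medial landmark")
    # Clamp to the 0-100% range used clinically.
    return max(0.0, min(100.0, 100.0 * uncovered / head_width))


# Hypothetical example: roughly a third of the femoral head is uncovered.
print(compute_migration_percentage(medial_head_x=210.0,
                                   lateral_head_x=270.0,
                                   perkins_line_x=250.0))  # -> ~33.3
```

In practice such a measurement would be derived from the full, automatically located bone contours rather than a handful of isolated points, but the principle is the same: once the anatomy has been located, the clinical measure follows from simple geometry.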
An automated measurement system has the potential to save clinicians time and to improve patient care by enabling more comprehensive, consistent, and reliable monitoring of hip displacement in children with CP.
Deploying software in clinical settings
The BoneFinder® tool has been validated with data from Alder Hey Children’s Hospital. Tests on clinical images of over 3,000 hips of children with CP show that the tool produces measurements with accuracy comparable to that of doctors, including for children whose hips are beginning to dislocate or are already dislocated. This allows for reliable and consistent monitoring, aiming to improve patient outcomes through hip surveillance programmes.
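One straightforward way to express “accuracy similar to doctors” is to compare automated and clinician measurements on the same images, for example via the mean absolute difference and the proportion of hips agreeing within a clinically acceptable margin. The snippet below is a generic sketch of such a comparison; the paired values and the margin threshold are hypothetical and are not taken from the Alder Hey evaluation.

```python
import numpy as np

# Hypothetical paired migration percentages (%) for the same hips:
# one set from the automated tool, one from a clinician. Not real study data.
auto_mp = np.array([12.0, 35.5, 48.0, 22.3, 60.1])
clinician_mp = np.array([13.5, 33.0, 50.2, 21.0, 58.8])

diff = auto_mp - clinician_mp
mean_abs_error = np.mean(np.abs(diff))

# Proportion of hips where the two measurements agree within an assumed
# clinically acceptable margin (threshold chosen for illustration only).
ACCEPTABLE_MARGIN = 5.0  # percentage points
within_margin = np.mean(np.abs(diff) <= ACCEPTABLE_MARGIN)

print(f"Mean absolute difference: {mean_abs_error:.1f} percentage points")
print(f"Within ±{ACCEPTABLE_MARGIN:.0f} points of the clinician: {within_margin:.0%}")
```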
The work aligns with Health Data Research UK’s mission to unite the UK’s health and care data by contributing to a national database of hip measurements in children with CP, enabling important future research. The research outcomes are openly accessible, including the technology for automatically outlining bones in radiographs through the BoneFinder® website.
Feedback from some trusts indicated that, ordinarily, it could take months from taking the X-ray to reviewing the data, delaying intervention and worsening outcomes for the child. The ultimate goal for this tool is to ensure that every child with CP is appropriately monitored for hip migration, by speeding up the assessment process and automatically populating the CPIP database, improving the quality of care for children with CP.
Balancing AI and human expertise
A key concept for developing AI-based medical imaging solutions is ‘hybrid intelligence’ – the combination of human and artificial intelligence. One essential mechanism for this collaborative approach is the human-in-the-loop design, where clinicians remain actively engaged in the decision-making process. While developing highly intelligent AI systems is crucial, it is equally important to preserve and enhance human intelligence. Maintaining this balance ensures that clinicians can effectively interact with AI systems without losing their critical judgment and expertise.
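In concrete terms, a human-in-the-loop design means the system proposes and the clinician disposes: automated results are presented for confirmation rather than acted on directly, and low-confidence or borderline cases are explicitly routed for closer review. The sketch below illustrates one possible gating pattern; the thresholds, data structure and review policy are hypothetical and not drawn from any deployed system.

```python
from dataclasses import dataclass

@dataclass
class AutomatedReading:
    patient_id: str
    migration_percentage: float   # value proposed by the AI tool
    model_confidence: float       # 0.0-1.0, hypothetical calibrated confidence

# Hypothetical thresholds for illustration only.
CONFIDENCE_FLOOR = 0.8       # below this, treat the automated reading as unreliable
CLINICAL_ALERT_LEVEL = 30.0  # migration percentage warranting prompt review

def triage(reading: AutomatedReading) -> str:
    """Decide how a clinician should handle an automated reading.

    Every path ends with a human decision: the tool never files a result
    into the record on its own.
    """
    if reading.model_confidence < CONFIDENCE_FLOOR:
        return "remeasure manually: automated reading flagged as low confidence"
    if reading.migration_percentage >= CLINICAL_ALERT_LEVEL:
        return "priority clinician review: possible hip displacement"
    return "routine clinician sign-off before entry into the surveillance record"

print(triage(AutomatedReading("anon-001", 42.0, 0.93)))
print(triage(AutomatedReading("anon-002", 18.0, 0.55)))
```

The design choice is that the automated measurement never bypasses the clinician; it only changes how quickly and with what priority the case reaches them.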
The importance of trust in AI systems cannot be overstated. One significant issue with their widespread deployment is the tendency of clinicians to trust AI systems even when they are incorrect. This phenomenon, known as automation bias, occurs when users rely too heavily on automated systems, potentially leading to errors in judgement. For example, clinicians are more likely to trust systems with a higher perceived benefit, which may lead them to unknowingly accept an incorrect decision as correct. While a degree of trust in AI systems is necessary for their successful adoption, this risk underscores the need for comprehensive education programmes that train clinicians on the potential biases and limitations of AI systems, with a focus on critical thinking skills and awareness of AI-induced biases such as automation bias.
It is also important that these technologies are transparent in their development and performance. One aspect of transparency is providing information about the data used to develop and validate the technology. According to a recent review, many medical imaging solutions – including regulatory-approved AI-based systems – lack external validation of their results, likely because of difficulties and delays in accessing relevant clinical care data. Without clear evidence of performance, it becomes challenging to build the trust necessary for the successful implementation and adoption of AI-based healthcare technologies.
Access to data and the future for AI-enabled healthcare
One of the fundamental challenges in developing medical imaging solutions for clinical practice stems from the nature of the data involved. Imaging data may have been collected as part of a study such as UK Biobank, or as part of clinical care. Images collected for studies are easier to access and very useful for research purposes. In contrast, clinical care data can be difficult to access but are an absolute requirement for developing and validating medical imaging tools that are to be used in clinical practice.
Access to anonymised clinical care imaging data for research purposes needs to be improved to enable faster and more effective research in translational medical imaging. The establishment of secure data environments (SDEs) – data storage and access platforms that uphold the highest standards of privacy and security when NHS health and social care data are used for research and analysis – provides a potential solution, as they allow approved users to access and analyse data without the data leaving the environment. SDEs aim to enhance collaboration, drive innovation and improve efficiency within the research space.
In terms of developing and validating real-world medical imaging tools, SDEs based on clinical care data are required. While these exist, medical images such as MRI scans or X-ray images are often not available in these environments. Consideration should be given to extending the remit of clinical data-based SDEs to include medical images, or to streamlining the processes for accessing and sharing anonymised medical images from clinical care. As highlighted by the Sudlow Review, this would facilitate more effective and joined-up research collaboration, and ultimately lead to quicker implementation and integration of technologies into clinical settings. The Department of Health and Social Care (DHSC), which holds responsibility for policy on SDEs, should create guidelines to allow for faster sharing of anonymised clinical images.
AI should serve to augment clinicians’ capabilities rather than replace their judgement. It is crucial to provide clinicians with continued opportunities to practise their critical decision-making in the presence of AI systems. Embedding education on the risks, as well as the opportunities, into clinical settings is a multi-faceted effort, starting as early as medical training and continuing throughout a clinician’s career. There is a need for a cross-departmental strategy, involving DHSC, the Department for Education, and the Department for Science, Innovation and Technology, to ensure not only the safe rollout of AI within clinical settings, but also the rigorous training that is essential for its safe use.
Emphasising education, trust, and hybrid intelligence, and removing barriers to accessing clinical care data for developing and validating technologies like BoneFinder®, offers a roadmap for the effective and ethical use of AI in healthcare.
By addressing these key areas, the potential of AI can be harnessed to improve patient outcomes while safeguarding the critical role of human expertise in clinical decision-making.