By: Virginia B Hill, MD
We are practicing neuroradiology in an exciting age: machine learning (ML) has great potential to improve the technical quality and interpretation of medical images, to make imaging examinations faster and more tolerable, and to make previously unattainable predictions about disease. With these potential advances comes a great responsibility to continue protecting our patients' privacy, to use standards and evidence to improve imaging healthcare, and to formulate a code of ethics, regulations and laws that safeguard patient safety.
Developing truly robust, accurate ML algorithms requires large amounts of imaging data. Non-medical imaging ML has achieved success by analyzing millions of images. Datasets of this scale are difficult to achieve in radiology, and deidentifying and preprocessing huge numbers of images for ML analysis is labor-intensive and expensive, underscoring the need for methods to strip personal information from images rapidly and accurately. In addition, developing algorithms on a single institution's images risks introducing bias and overfitting, which can be mitigated by sharing images or algorithms with other institutions; using separate image sets for training, validation, and testing; and participating in publicly available archives.
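The deidentification and dataset-partitioning steps described above can be sketched in a few lines of Python. This is a minimal illustration, not a clinical-grade pipeline: the metadata field names, split ratios, and record structure below are assumptions for demonstration, and real DICOM deidentification must follow established confidentiality profiles and validated tools.

```python
import random

# Hypothetical protected-health-information (PHI) fields to strip.
# Real DICOM deidentification covers many more attributes.
PHI_KEYS = {"PatientName", "PatientID", "PatientBirthDate"}

def deidentify(record):
    """Return a copy of an image-metadata dict with PHI fields removed."""
    return {k: v for k, v in record.items() if k not in PHI_KEYS}

def split_dataset(records, train=0.7, val=0.15, seed=0):
    """Shuffle and partition records into disjoint train/val/test sets."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Illustrative usage with fabricated metadata
records = [{"PatientName": f"P{i}", "Modality": "MR", "StudyID": i}
           for i in range(100)]
clean = [deidentify(r) for r in records]
train, val, test = split_dataset(clean)
```

Keeping the three sets disjoint, as above, is what allows an honest estimate of how an algorithm will generalize beyond the images it was trained on.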
The ready availability of software frameworks and public image datasets has allowed physicians and non-physician scientists to develop ML algorithms without expertise in both ML and radiology. It is important to ensure that radiologists and scientists acquire the computer science and radiology training necessary to evaluate algorithms for bias and error, and that they collaborate to provide the expertise from both disciplines needed for accuracy and generalizability. Incorporating computer science courses into medical education will help physicians provide better image interpretation and critically evaluate algorithms. Standardizing imaging and ML protocols will also promote generalizable, reproducible results. Many of these issues were elegantly addressed in educational sessions at the ASNR 2019 meeting, including those of the ASNR Artificial Intelligence Task Force.
In our mission to improve medical imaging quality and value, we must also address questions of patient image ownership and access. While we should not deter scientists from algorithm development by impeding fair compensation, it is also important to explore the ethical issues of exclusive dataset access and to ask which payment structures further our mission of providing for the common good of the patients who supplied the data needed to develop new tools.
It is ethically and scientifically important to address resource inequality and population bias as well. Lack of access to ML algorithms in underserved areas will widen health disparities. In addition, including images from underserved populations in algorithm development will yield more robust algorithms. This is an opportunity for grants in underserved areas to improve ML services for well-served and underserved communities alike.
We will also need to develop a framework for addressing liability in ML, including our human tendency to accept computer-generated results uncritically.
In short, we will best serve our patients and the common good by collaborating, rigorously evaluating algorithms for quality, and developing an ethical, regulatory and legal framework for clinical ML.