
The Future of Artificial Intelligence in Medicine: An Interview with Isaac Kohane

Isaac Kohane, M.D., Ph.D., is the inaugural Chair of the Department of Biomedical Informatics, the Marion V. Nelson Professor of Biomedical Informatics at Harvard Medical School, and the Program Director of the AI in Medicine Ph.D. track developed by the Department of Biomedical Informatics (DBMI) at Harvard Medical School. He also serves as the Editor-in-Chief of “NEJM AI,” a monthly journal on the applications of AI and machine learning in clinical medicine. Dr. Kohane joined The HPR to discuss the future applications of AI in medicine and where regulation might be beneficial in promoting safe and equitable use.

This interview has been edited for length and clarity.

Harvard Political Review: The new AI in Medicine Ph.D. track, of which you are a Program Director, recently closed its first round of applications. How did you and your colleagues decide to form this track? And aside from the information available on the website, how do you envision this track, and what sets it apart from the other Ph.D. track offered?

Isaac Kohane: This track came about because of consumer demand. Applicants to the other Ph.D. track, Bioinformatics and Integrative Genomics, expressed strong interest in clinical AI, which was not possible because that track was funded by the National Human Genome Research Institute. My own Ph.D. back in the 1980s was in artificial intelligence; with the emergence of large language models, it became even clearer that there was an immense opportunity for medicine to take advantage of this new intelligence, with all its warts, to help improve the state of the art.

Unlike a computer science program, this track focuses squarely on medical applications. We want our students to apply quantitative skills to medicine because there is immense need. We want our students to understand: What has been done so far, and what needs to be done? What are the technical, economic, and ethical issues? What are the business opportunities?

We want our graduates not only to be state-of-the-art researchers in artificial intelligence, but to really understand the challenges that medicine faces today. We give them the opportunity to take medical electives so they really understand what medical students are learning about physiology and pathology. We provide exposure to hospital workflow by bringing them into the hospital. Medicine is in very dire straits currently. And if we are to meet those growing gaps with artificial intelligence, then we are going to need scientists and engineers who understand both the computational aspects and the human medical, workflow, regulatory, and ethical challenges.

HPR: What specific types of work do you envision graduates of this program doing?

IK: This speaks to where we can, in the near future, have the most leverage on changing medicine. The answers are within medicine and outside medicine. 

From the outside, there are a number of large data companies, such as Microsoft, Google, and Apple, that we don’t think of as healthcare companies but that are becoming much more involved in healthcare. Due to the aging of the population, we have a growing group of individuals who need medical care, and the demand for medical services is rapidly increasing. There will be numerous jobs at these existing large consumer-facing and business-facing data companies, as well as lots of jobs in health-related companies that need the expertise of graduates from this track, such as medical imaging companies and electronic health record companies. There are also many startups that will be looking for these individuals because there exist so many opportunities to revolutionize the way medicine is practiced, given that our workforce is falling behind the need. In the governmental and regulatory space, too, there is a need for individuals who understand the problem.

Within the healthcare system, it’s a little trickier: Healthcare systems are constrained in a variety of ways by restrictions around how they are supposed to do business today, and it’s hard for them to disrupt their own businesses. Nonetheless, there’s a huge need for individuals within the healthcare system who understand the technology and can help hospital systems assess, evaluate, and acquire AI technologies. I think there will either be a Chief AI Officer in most of these hospitals, or a head of AI who reports either to the Chief Information Officer or the Chief Medical Officer.

Throughout the whole vast ecosystem of healthcare, which is one-sixth of the gross domestic product of the United States, there will be a demand for many individuals who understand this technology, who can help push it forward, and who can give genuine expert insight, both within the healthcare system and from outside it.

HPR: What are some specific clinical applications of AI in the near future? 

IK: From a financial perspective, a lucrative application of AI is the business side of healthcare, by which I mean making billing and reimbursement more efficient.

On the clinical side, there are companies that enable doctors to say their diagnostic or therapeutic plan out loud: AI will generate a clinic note customized for the patient, which can get doctors away from some of the bureaucratic burden that has really disenfranchised and disappointed many who went into clinical medicine. There are many other administrative activities like this, such as requesting authorization for a patient’s visit.

For me, the most interesting part is clinical applications. However, incorporating automation into parts of healthcare where it does not currently exist is going to be very slow. Now, this may be very different in low-income countries, which have fewer resources and therefore also less standardized existing infrastructure. In low-resource areas with very few doctors, having AI support on, let’s say, a wireless smartphone will help significantly with decision-making. This can be implemented right away without having to completely redo existing healthcare infrastructure.

Last of all, the biggest potential use of AI in medicine is directly by patients. This is already happening with ChatGPT; we have some interesting cases where patients have used these generative AI models to help themselves. However, I don’t know about the thousands of cases where it may have been helpful or may have, in fact, been harmful.

Given a healthcare system that is struggling to keep up, figuring out ways for AI to make patients smarter and more effective as patients is going to be an enormous part of the future.

HPR: How do you believe governmental or regional policy should regulate how AI is used and implemented in medicine?

IK: Given that we don’t actually know exactly how AI is going to be used, it seems a bit premature to regulate how it should be used. Nonetheless, I think there are some basic properties of AI that it would be in the public interest to agree upon.

First, it’s better for the public to know what data these large language models were trained on. This transparency will tell us whether that data is representative of the patients we want to treat. Second, in addition to training on data, many of these generative AI models and large language models go through a process called alignment, in which the programs are essentially trained on what kinds of answers are better than other kinds of answers. For example, it’s hard, though not impossible, to get GPT to tell you how to kill yourself or how to make a toxin. This alignment process essentially represents some values, at least implicitly, around the behavior of these large language models. In that context, it would be very useful to know which utilities, preferences, or values are being maximized in these large language models. Are they those of the patient, those of the healthcare system, or those of the government?

HPR: What do you anticipate to be the primary dangers or challenges of implementing AI in medicine?

IK: I think the greatest danger would be AI that’s implemented by players that don’t necessarily have the patient’s best interests at heart. The stakes are so large in medicine, in terms of happiness, money, and quality of life, that you can imagine there’s a lot of reward for actors who deploy these models but may or may not have our best interests at heart. The values and goals of these AI systems will require regulation, as will who is supplying and funding the development of these programs. These models are highly capable, and if they’re aligned with the kinds of preferences and values that we teach in medicine, that’s great. But there are many, many other possible alignments, and the danger is players aligning the systems in ways that save money, perhaps to the detriment of a subgroup of patients.

We have a lot of opportunities to use AI to redress wrongs, including grievous wrongs due to bias, but only so long as the people who are training and aligning these models actually have our best interests at heart. A combination of addressing who’s doing it and with which goals, making the properties of these models transparent, and having a monitoring process to ensure that these systems are in fact acting in our best interest would be important, effortful first steps.
