The Beat Webinar Series - Episode 14 Live from HRX ...
Post Panel
Video Transcription
Welcome to the BEAT, episode 14, On Demand from HRX. Today we're in Atlanta, and we're going to be talking about digital innovation, AI, and the future of cardiac electrophysiology. I'm Mike Lloyd, and since our host city is Atlanta, I thought it would be nice to have an Atlanta commentator crew here. With me today is Dr. Neil Bhatia of Emory University, and Dr. Faisal Merchant, also at Emory. Thank you both for coming in and helping us with this topic.

Thanks for having us.

Our job for this next 20 minutes is to take the session that was recorded yesterday, Digital Innovation, AI, and the Future of Cardiac Electrophysiology, which is kind of a mouthful, and distill it. What I wanted to do is talk about a few points that came up, distill that hour down to 20 minutes, and be very specific about what these AI topics involve. Let's start with the most important one, and the one that is so frustrating to me: the term AI. It's like saying "arrhythmia," right? Definitions and vocabulary are critical before we even get into the discussion. So, being our techie in Atlanta, I need you to tell me what AI means, what machine learning means, neural networks, and so on.

Absolutely. I think this is a great question and one that causes a lot of confusion. When we say AI, we just mean the general field of training machines to think like a human: decision making, reasoning, prediction. Under artificial intelligence is machine learning, and this is where the algorithms come in: is it supervised machine learning, unsupervised, reinforcement learning? That's the nitty-gritty of the algorithms used to predict whatever kind of event you're trying to study. Beyond that, we have artificial neural networks. Before we started using artificial neural networks, we would use simple linear and nonlinear models, but with artificial neural networks, we're actually mimicking the way the human brain thinks, using layers of nodes, which represent neurons, to improve the way the model predicts compared with a simple linear or nonlinear model. And there you have different architectures: convolutional neural networks, recurrent neural networks. That's what an artificial neural network is; it's just a better way of learning than we had before, but it is still a type of machine learning. So it goes AI, then machine learning, and then artificial neural networks. And now the big hot topic is large language models, which are really generative AI. Instead of predicting or diagnosing, these models, which still fall under machine learning, learn from data and then try to generate new data: new text, new images. That's also a machine learning subtype. So the broad term AI includes everything, but the real nitty-gritty of the algorithms is machine learning.

Got it. I think I got it. So yesterday, a panelist was asked, how do you incorporate AI in your EP practice? And it was kind of funny to me. He said, oh, I spend a lot of time trying to understand it, and I think a lot of us are struggling with that. But what I wanted to ask you, Faisal, is: right now, in your next clinic on Monday, how exactly is AI working in the EP clinic?

Yeah, I mean, I think currently in a very limited fashion, right? Neil alluded to this a little bit already, but you can really think about using AI and using these models in two big buckets. There's the sort of predictive analytics.
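As an aside, to make the taxonomy above concrete, here is a minimal sketch, not from the panel, contrasting a classic linear model with a small neural network built from layers of nodes. The data are synthetic and the "ECG features" purely hypothetical.

# A minimal sketch (synthetic data, hypothetical "ECG features"): the same
# binary prediction task solved by a linear model and by a small neural
# network whose hidden layers of nodes can capture nonlinear structure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                        # 12 made-up features
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)   # nonlinear outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
neural = MLPClassifier(hidden_layer_sizes=(32, 16),    # two layers of nodes
                       max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("linear model accuracy:", round(linear.score(X_te, y_te), 3))
print("neural network accuracy:", round(neural.score(X_te, y_te), 3))

On a task with genuinely nonlinear structure like this one, the network typically edges out the linear model, which is the argument for neural networks that Neil makes above.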
Can I take a 12-lead ECG, apply an AI model, and predict who's going to develop AFib? Can I take an echocardiogram and predict who's going to develop AS in the next 10 years? Those are the predictive components, and there are a number of predictive models out there that are used to a certain extent clinically right now, but I think they are largely in the realm of prediction. Then it's up to the clinician to figure out what to do with that data, and we fall back to the way we normally do things.

Then there's generative AI, which is different from prediction. That's this idea of maybe having an AI assistant write clinical notes for you. Or if you see a patient come into clinic with a stack of outside records that's this thick, or an electronic record that's that thick, can you have a model generate a summary for you, or do a discharge summary for you, that kind of thing? That, I think, we are just starting to dabble in now. There is software you can use that will generate the note for you in clinic. We're starting to use those tools in a very limited fashion. I think the uptake of generative AI is going to be much, much slower than the predictive stuff. With the predictive stuff, you can choose to use the models however you want, but if you're really going to lean on generative AI for clinical care, you've got to have a high degree of confidence that the information is accurate and put together well, because it's going to impact patient care and be part of the medical record. I think we're using that in a much more limited fashion, largely in experimental and research settings right now.

Generally speaking, at least on the clinic side, a lot of us are using those generative note-making tools, and it helps. A lot of our faculty, especially our younger faculty, are doing it already and getting the notes done. Are you using it in clinic now?

I've used it a few times. I haven't incorporated it in a big way. Part of that is that if I'm still going to have to go back at the end of clinic and review that note, make sure it's accurate, make sure it reflects what I want it to reflect, that takes some time also. At least for my workflow, I can sit down and use voice recognition software like Dragon and generate a note pretty quickly, which I find more time-efficient than having to go back afterwards.

But ultimately, can you envision a time when you wouldn't have to spend all that time at the end reviewing the AI-generated note?

Perhaps, but I personally don't have that kind of confidence yet, because it's my name that goes at the end, and if there's something inaccurate or a mistake made, ultimately you're the one responsible, right? But I think there may be greater comfort with that over time. I don't think we're there. I'm not personally.

Well, that's good to note. We'll talk about responsibility a little bit later. How do we use it in the lab?

That's a great question. I think using AI in the EP lab is difficult. AI is really good for diagnosis and for helping with repetitive tasks. So if you want it to, for example, segment an MRI or segment a CT prior to an ablation, it will do that well. Now if you want to take an MRI and say, hey, can you predict if this patient's going to have sudden cardiac death in five years, that's a lot more complex, and it's not going to be very accurate.
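For readers curious what the note-drafting tools Dr. Merchant describes look like programmatically, here is a hedged sketch using the OpenAI Python client. The model name, prompt, and dictation are placeholders, and nothing here should be read as a compliant clinical deployment; real patient data would require appropriate privacy and validation safeguards.

# Illustrative only: a generative "draft the clinic note" call via the
# OpenAI Python client (v1 API). Model name and dictation are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

dictation = "58-year-old with palpitations, prior PVI, on apixaban ..."  # hypothetical
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Draft a concise cardiology clinic note from a dictation."},
        {"role": "user", "content": dictation},
    ],
)
print(resp.choices[0].message.content)  # the clinician still reviews and signs

The last comment is the operative one: as the panel stresses, the physician whose name goes at the end remains responsible for the note.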
So using it as a diagnostic tool, such as echocardiogram diagnosis, EKG diagnosis of cardiomyopathy, or EKG localization of PVCs, that is where I think we can really use it in the EP lab to help with workflow or help with diagnosis. But beyond that, prediction involving the more complex pathophysiology that we as EPs do not fully understand is going to take some time.

And you do a lot of VT. Have you been enthusiastic about these AI-based predictive models just taking all this input and saying, okay, ablate here, or this is likely the arrhythmogenic substrate?

I don't think we're there yet for that. Ventricular tachycardia cases, especially ablations, are complex. There are different phenotypes of patients. Electrogram characteristics might not indicate a successful site. There's more to it than just that. And I think we're just not there yet in terms of being able to predict where to ablate, or beyond that. In terms of workflow, though, segmentation is very helpful. We started doing automated segmentation for our VT cases, which has improved our workflow; not depending on the imagers has been very helpful. Some of those practices we can use right now. Beyond that, I think we still need to validate, given the large heterogeneity of VT in our population.

Okay. One thing that came up yesterday: I think Tom Deering mentioned how we have to seize the day, how we have to identify right now what we can use. And it got me thinking about how you would talk to, say, a senior Luddite EP. If that person were to come in one day and say, hey, show me what to do with AI, Faisal, what would you tell them? Where would you tell them to start? There are a lot of folks out there who are just learning the definitions.

Yeah. I mean, I think really concrete examples of some of the predictive tools make a lot of sense, right? Intuitively, it makes sense that you can take an MRI or an ECG and train a model to predict AFib or to predict heart block, or whatever the case may be. That already exists. There are models, and some of them have been validated. I think it's a little easier to demonstrate value there. Now, the extent to which these models improve risk stratification beyond what we've traditionally used, biomarkers or just reading the EKG yourself, remains to be seen, but you can at least say, look, this is another approach we can use to risk-stratify patients or to make patient-specific decisions. The other stuff, the generative AI, the clinical notes, dealing with the deluge of, say, billing data or remote transmissions from a device or wearable data: there are massive amounts of data out there. Do AI approaches have the potential to help us deal with that? Yes. In my mind, it's very hard right now to point to specific successes and tell an administrator, look, there's a value add here; if we invest in this software or this model, it's going to improve our workflow by X, or make us this much more efficient, or increase our billing revenue by Y. I think we'll ultimately get there, but that's going to take longer. But ultimately, as Neil said, these models really work well with repetitive functions. I mean, think about it: we have a lot of staff right now that we hire to do repetitive functions, right? And they do them well. There's a job to be done.
But if you can train a model to do some of your billing or submit claims for you, whatever the case may be, or if I could train ChatGPT to do all my peer-to-peer reviews, I would pay a lot of money for that. I think there are some potentially really useful applications. We're a little ways away from saying they're ready for prime time, in my mind.

So let's talk about ChatGPT for a minute. One thing that's in our worlds, aside from seeing patients, is the academic side: writing, reviewing papers. How have you seen the language models impact manuscript preparation or peer review? Neil.

You know, it's pretty impressive what ChatGPT can do in terms of writing these papers. It's scary. Yeah. It's pretty good. It writes better than me. But I think it's really important to remember that while it can be very helpful for some things, especially for physicians who are starting out, fellows, residents, writing is a very integral part of becoming a physician, so I think we need to be a little bit wary. Remember, ChatGPT is just learning from whatever is out there. It's not coming up with original thoughts, and I think that's important for any physician to know.

Faisal, have you used it in your academic endeavors?

I haven't, and the only way it's tangibly affected anything I do is that now, when I submit a paper, I have to check a box that says an AI algorithm was not used to generate this manuscript. Look, people far smarter than me are gonna have to figure out ways to use technology to differentiate what was generated by a bot from what was actually written by a human, and ultimately there may not be ways of differentiating. But I think about it a little bit differently. Right now we think of it as an either-or, right? This paper or this analysis was done by a bot, or it was done by a human. I think ultimately we will view these things, again, as tools. It's a little bit like, I'm sure 50 years ago, there was a similar discussion about whether kids in school should be using calculators, right? Like, no, you should learn to do math by hand. And then ultimately the calculator became a tool, not a replacement for learning arithmetic. I think a generation down the road, they will be using these tools to augment and supplement what they do, not in lieu of putting in effort, but it's gonna take some time to figure that out.

There was a lot of talk yesterday about AI's potential to reduce physician burnout, and one of the biggest impacts of AI on my practice has come not from our side but from the patient's side: this deluge of data from wearable technology and so on. And there was a comment made by the engineer that I thought was interesting; he said that the burnout came from the abundance of data, not the AI. Neil, do you think this is gonna make our lives easier or more complicated? I want your honest opinion, not "AI's the greatest thing," you know?

Ultimately, I think, look, we have a ways to go, but in some ways it's gonna help burnout. But it needs to be for specific tasks. One example is patient messaging. We get a deluge of patient messages.
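Neil elaborates on this just below: routing rules of the form "if the message says this, send it to the nurse." A toy, rules-only sketch of that idea (categories, keywords, and routing targets entirely made up, with a human-read default) might look like:

# Toy triage sketch for patient portal messages. Keywords, categories, and
# routing targets are illustrative; a real system would need clinical
# validation, and everything still lands in front of a human.
URGENT_TERMS = ("chest pain", "passed out", "syncope", "shock from my icd")
DEVICE_TERMS = ("pacemaker", "transmission", "monitor", "battery")

def route_message(text: str) -> str:
    t = text.lower()
    if any(term in t for term in URGENT_TERMS):
        return "physician"            # anything possibly urgent goes up
    if any(term in t for term in DEVICE_TERMS):
        return "device clinic nurse"  # routine device questions
    return "clinic nurse"             # safe default: a nurse reads it

print(route_message("My home monitor missed last night's transmission"))
print(route_message("I passed out this morning while walking"))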
I think ways to say, if this message says this, send it to the nurse, or send this one to the physician, because right now the messages are just becoming too much. That is something easily trainable, and it's not like a big treatment decision we're giving up, but it could help with physician burnout. In these use-case scenarios, with these repetitive tasks, I think AI can help. I do worry, though, that with all this deluge of data, AI is gonna pick up something and tell you, oh, this needs to be acted upon, and that can also cause physician burnout. Kind of like with the Apple Watch, these false AFib readings when it truly is not AFib; I worry that this is where this might go. So in certain use cases, if we're careful with the task and understand the weaknesses of these algorithms, I think it can ultimately help, but it's gonna take some time. Faisal?

Yeah, I mean, you know me; the cynic in me tends to think it'll make things worse, not better. But again, it's not the algorithm, it's not the machine learning model, it's how you choose to deploy it. We have a remarkable ability in healthcare to do things in a very inefficient way, and if you apply a very powerful tool in an inefficient way, it's probably gonna make things even more inefficient. You can think of an example. There's nothing inherently bad about an EMR, but we all, I think, hate writing notes in the EMR compared to the way we used to handwrite a note 20 years ago, right? If you develop some AI model that helps you generate a note, great. But you can also envision a model that quickly goes through your note and makes sure you've documented everything you need to code for a level 3 visit, or something like that, and then, at the end of your note, won't let you sign until you've gone back and corrected all of those fields to allow you to generate that bill, right? That kind of stuff I see coming very quickly, and it has the potential to just make things worse. So I think we have to be thoughtful about how we deploy these things. We have smart people who understand clinical care. Doing it in a thoughtful way could be incredibly powerful. That hasn't been our track record in healthcare; maybe this will be different.

I wanted the last part of the session to focus on the future, and a lot of this is in the future because this is a young field. But I wanted to talk about this hypothetical concept of the singularity, which is kind of corny and Armageddon-like, but I think is important: the point where artificial intelligence becomes advanced enough to be autonomous and independent of human thought and/or input. And it got me thinking about a sort of pseudo-singularity for EP. Is there going to be a time, Neil, when this is good enough that we're not going to be terribly needed?

I think that's a great question, and with these kinds of forecasts, look, I think ultimately these tools are really here to aid physicians, right?
These things are trained on us, on our decision making, and as a physician, there's more to it than just regurgitating facts or papers. So while AI can be helpful, and I think it's gonna be very helpful in reducing burnout, handling repetitive tasks, improving diagnosis, maybe giving us insights into disease processes that we might not have figured out, ultimately, if we use it in a smart and thoughtful way, these tools can aid us. But I do not foresee them replacing us. I think we're gonna be okay.

Yeah, we've talked about this some. I think the one thing in healthcare that is a little bit different from other industries, and one of the speakers yesterday made this point, is that you can deploy AI models in finance, and if they don't go well, okay, you lose money, but that's different from life, right? The stakes are a little bit different in healthcare. In fact, they're a lot different, and I think that element will put the brakes on this sort of "just get rid of all the human doctors and turn everything over to an algorithm." Part of the reason for that, and again, this may be not a cynical but, I think, a realistic perspective, is that there's a level of accountability, both from a medical-legal point of view but even more generally. I think patients want to know who, what, and how decisions are made, right? And there's an entire black-box element to a lot of these models that makes them hard to understand, almost by design, and that's okay; that's also the other side of the coin of why they're so powerful. But that lack of accountability, that lack of understanding, I think will really slow down the extent to which we turn the wheel over completely to an autonomous algorithm to make clinical decisions for us. At the end of the day, if I have a medical problem and I need somebody to help me figure out which treatment course is right, there's something comforting about sitting opposite somebody in clinic and having them walk you through it, rather than putting a bunch of parameters onto a computer screen and having it say, this is what you should do. And I think that'll keep the human element at the center of healthcare for the foreseeable future.

You mentioned responsibility, and ultimately it's your responsibility for what goes into your note or whatever medical deliverable you have. Talk to me a little bit about what you think could be some medical-legal problems with this.

Yeah, I mean, I think there will undoubtedly be medical-legal situations that come up. It will ultimately get sorted out in case law and in court, but it could take any number of different forms, right? Let's say you use some predictive analytic to predict that somebody's going to develop AFib, and you put them on an anticoagulant and they have a bleed. Well, who's responsible? The model that said they were going to have AFib? The model that said, yes, you should take an anticoagulant? The opposite is also true, right? We've alluded to this deluge of wearable data. Any model is never going to be a hundred percent right. What if a model says, no, I don't think this tracing is AFib, and it actually turns out to be, and that person goes on to have a stroke? Who's responsible? The model? The company that developed that algorithm?
The institution that bought the model and software from that company? The physician who deployed it? The patient who consented to the use of AI in their clinical care? I mean, there are a lot of things to think about here. I don't know how that legal situation will get sorted out, but it undoubtedly will have to be.

Interesting. Neil, do you think it's fair to say, then, that AI will not replace EP physicians, but EP physicians who use AI will replace EP physicians who do not?

I think there's still a long, long, long way for that to happen.

So there's still hope for the older, non-technologically savvy?

I'm not so sure. I really feel that if you don't at least learn about this and engage, it may be career-shortening. I think it's important to understand, like you said: people just throw around these terms. They don't understand the semantics of machine learning, AI, deep learning, how these things work, what their strengths are, what their weaknesses are. And what is the data trained on? Is it trained on one center? Is it multi-center? These things are very important, because they affect the prediction models and how you use them. So I think it's important to understand what's going on in research and how we're using it clinically. We've found it kind of hard in some ways: we can predict something, but how do we make it clinically useful? What is actionable about predicting this? If you can make a model that predicts sudden cardiac death in 20 years, well, what do I do with that? I think we still have a long way to go to really use it where it shines and really helps. And educating ourselves is the first step.

Absolutely. Discussions like this and meetings like this are probably good for EP as a whole. We've been talking today in a post-session review of yesterday's HRX session titled Digital Innovation, AI, and the Future of Cardiac Electrophysiology. Neil Bhatia and Dr. Faisal Merchant, thank you so much for reviewing this with us.

Thanks for having us.
Video Summary
In BEAT episode 14, host Mike Lloyd and Atlanta-based experts Neil Bhatia and Dr. Faisal Merchant discuss the impact of digital innovation and artificial intelligence (AI) on cardiac electrophysiology. They clarify AI terminologies such as machine learning, neural networks, and generative AI, emphasizing their distinctions and applications. The conversation explores current AI uses in clinical settings, highlighting its strengths in predictive analytics and generative functions like clinical note writing. However, the integration of AI faces challenges, including the need for high accuracy and reliability due to patient care implications. The experts express optimism about AI aiding in repetitive and diagnostic tasks, cautioning that its full integration will require careful validation and thoughtful deployment to avoid exacerbating inefficiencies. They underline the importance of understanding AI's limitations and potential legal responsibilities, concluding that while AI will enhance EP practices, it won't replace human clinicians but rather supplement their roles.
Keywords
digital innovation
artificial intelligence
cardiac electrophysiology
machine learning
predictive analytics
clinical note writing