The Beat Webinar Series - Episode 14 Live from HRX ...
The Beat Live from HRX in Atlanta
Video Transcription
Welcome to this episode of The Beat, brought to you by the Digital Education Committee of the Heart Rhythm Society. We're live today from HRX, and we're bringing you a live stage so that you can experience what occurs at the HRX meeting. My name is Prash Sanders from the University of Adelaide in Australia, and my co-chair today for the online component is Dr. Melissa Middeldorp, who's currently at the University of Groningen in the Netherlands. So we have an esteemed panel presenting for us today. The topic is Digital Innovation, AI, and the Future of Cardiac Electrophysiology. The moderator live on stage is Dr. Hamid Ghanbari from the University of Michigan Cardiovascular Centre, and I'm going to hand over to my co-chair to talk about the panellists who will be speaking to us. Thank you, Prash. So we've got a really diverse team of people today. We've got Blern Baraliu, who is the CEO of 91Life. Now, he's a data scientist and modern technology expert who's going to provide some insights from the world outside of cardiology. We've got Thomas Dearing, who's from Piedmont Heart Institute. Now, Dr. Dearing is an electrophysiologist who serves as the chief of the arrhythmia centre at Piedmont. We've also got Changho Lim from Tulane University. Now, Changho is a machine learning engineer, and he's the assistant director of digital health and AI development there. We've got Jagmeet Singh, who's from Harvard Medical School and the Massachusetts General Hospital Heart Centre. Dr. Singh is a professor of medicine at Harvard Medical School, and he's also written some books on this topic, so it'll be good to get his opinion in this area. We've got Kevin Thomas, who is from Duke University Health System. Dr. Thomas is a professor of medicine in the Department of Medicine, Division of Cardiovascular Diseases, and he's got expertise in diversity, equity, and inclusion, so he'll bring some perspective from that field.
We've got Elaine Wan from Columbia University Medical Centre. Dr. Wan is an associate professor of medicine in cardiology and cardiac EP at Columbia University. So you can see we have a nice, diverse group of people who are going to provide some unique interactions and ideas in the field of digital innovation and AI. So we'll go directly live to the stage. I know we've been really thinking about this for a long time, and I am more than delighted to have this outstanding panel and excited to share some of their insights. I think that in recent years and months we've had lots of progress, and the AI systems have really been catching up to human levels of performance on many kinds of tasks. It's very likely that AI is going to be one of the most revolutionary innovations for increasing and enhancing productivity in our day-to-day life, and it's going to have a huge impact on our healthcare system specifically. I had the pleasure of working with my colleague, Dr. David McManus, and we chaired and edited a series of articles that are published now in the Heart Rhythm journal, which you can access if you just click on the link that's provided in the description of this session. And I'm delighted to have the authors of those articles here with me to discuss some of the key concepts from those articles. So without further delay, I want to go ahead and have them introduce themselves. Elaine? Hi, good afternoon. I'm so excited to be here. My name is Dr. Elaine Wan, and I'm an associate professor at Columbia University. I'm a physician scientist, just like Hamid, and I'm an electrophysiologist, so I do pretty much all of the procedures. I also run clinical trials at Columbia University and have been part of some national and international trials. So I'm so happy to share the stage with my colleagues. Kevin? So, good afternoon, everyone, and it's really exciting to be back here in Atlanta. I went to undergrad here at Emory, and so it's always good to come back to the city.
And excited to be here with you all for now the third annual HRX conference. And so, I'm going to do a shameless plug here: it's literally one of the most amazing conferences that I've been to. I just think the breadth of people who come together is incredible, and you learn so much and have conversations that we don't have at our hardcore scientific sessions. And so it's really a joy to be here. So I'm a professor of medicine at Duke University, and I'm an electrophysiologist. I'm also the vice dean for equity, diversity, and inclusion at Duke, and a health equity researcher. So I kind of have a full portfolio of things. And so I'm excited to talk about AI, and I was sharing with someone earlier that I'm incredibly excited about AI and terrified all at the same time. And so hopefully we can get into some of that conversation today. Hello, everyone, my name is Changho Lim, and I'm the assistant director of digital health at Tulane University. And I'm a machine learning engineer. And I'm really excited to be here and discuss the digital health evolution that's been happening around us today, and I'm looking forward to this talk. Good afternoon, everybody. I'm honored and privileged to be here at this meeting and to be sitting next to an engineer. My father was an engineer, and I always like the way that engineers can take technology and ideas and translate them into reality. So I like this diverse group. This is a great meeting, and it's different than our usual ones. And I think what I'd really like to ask all of you to do is make sure you interact with us, either on the stage here during this presentation or by catching us in the hallways. Because by sharing thoughts, by sharing ideas, and by asking difficult questions, we get better. My name is Thomas Dearing, and I'm a Piedmont Healthcare electrophysiologist. So I had an incredibly long drive. As Jack and I were talking about, I'm totally worn out by the three-mile distance that I had to travel to get here.
I've done a lot of work. I run our arrhythmia section. I also lead what we now call our cardiac governance group, which covers the entire system of 25 hospitals. And I really lead a lot of value-based care within our health care system, and I think AI has an important role there. I also like a different term for AI: augmented intelligence, not artificial intelligence. So now I'll hand it over to my colleague to the left, Jack. Hi, I'm Jack Singh. I just want to say that AI actually stands for "actually Indian." I'm a cardiac electrophysiologist at Mass General Hospital, a physician scientist, and a professor of medicine at Harvard Medical School. Just delighted to be here. I think we have a phenomenal panel out here. Just delighted to be sitting next to Tom and Blern on this side, and really excited about the conversation we're going to have. And hopefully we'll be able to tease apart many aspects of electrophysiology and AI, not just on stage, but even after we're off stage. Hi, everyone. My name is Blern Baraliu. I'm CEO and founder at 91Life. No jokes from me, unfortunately. I'm a mathematician by background. I started out studying pure math, then sort of heeded the call of Wall Street. So I did data science and AI and trading and derivatives and so on and so forth. But I had always envisioned a more meaningful purpose in my life. My wife was better than me: she had studied medicine and is now an interventional cardiologist. So 13 years ago, I started thinking about how we can apply math to medicine. It was very difficult. Nobody believed in it at the beginning. They all wanted those servers in the basement where they kept the patient data, which actually was an upgrade from the files in the cabinets. But then ultimately we found a way to get into the data. And I quickly realized that electrophysiology is at the forefront of what's going to drive innovation.
So our dedication is to advance and contribute what we can to digital health with, and I agree with Tom here, augmented intelligence, where we empower physicians with intelligence from big data and other technological innovation. I'm honored and privileged to be in this tough group. So I'm a bit outside of my depth, but I'll try to hang on. Thank you. Thank you so much. What a wonderful group. And I'm excited to start talking. If you have any questions, please put them into the chat, and I'll be sure to pose them to our panelists. So I want to start with the first question. How can we use AI in clinical practice to improve our clinical decision making? And if you could, maybe mention how you're actually using it in clinical practice right now and how it's helping you, and if you could also touch a little bit on how you think that is changing how you're interacting with your patients and technology. Well, I think for all of us electrophysiologists, digital health and AI probably came first for ECG analysis, because early on, when we had so many ECGs to read, machine learning for improving the diagnosis of these electrocardiograms seemed the most obvious leap. And then for us as electrophysiologists, a lot of the mapping seemed to be another thing where it was easy to implement. On the clinical side, we see a lot of the AI now helping us with diagnosis and in the EMR, to shorten the time it takes to figure out what patients need, and also helping make sure that they get to the right specialist, looking at the right charts, et cetera. But I think the question has been brought up about how to implement all of this powerful technology. And I think one of the things that might be limiting is that we have all this data, but then how are we going to bridge the gap from the physician using it to the patient? And then also, this AI is designed for different operators.
So, for example, a lot of the different algorithms are designed around how a doctor is going to use them. But then, as we saw earlier, some of these companies design around how a patient is going to use them. So I think the user interface needs to be specific to who's going to use it, and we need to think about how we can bring those two end operators together using these new technologies. Great. Yeah, I've been spending a lot of time really trying to understand it and figure out how I'm going to incorporate it into my practice, into my research, but really specifically how it's going to improve patient care. Because I think we can very easily become intoxicated by the innovative things that it does. And it's amazing. There's no doubt about it. I use generative AI all the time for writing emails and some practical things that I feel have made me more efficient. I'm asked to write a lot of letters for promotions, and I've learned how to incorporate it into that workflow. And that has, I think, really accomplished some important things. But I really want to challenge us to keep patients at the center of everything we do, because that's what's going to be most important. As for the early opportunities and how I've used it in my practice, I've used the virtual AI platform through Nuance and DAX to engage with patients and to take notes for new encounters, follow-up notes, and things like that. And it really has been pretty incredible. It's made me faster. It's made me able to focus more on the patient and be more present in the exam rooms. And so I think that's really important. The other part of this that I think we have to consider is that burnout is a real problem for clinicians. It's a real problem. And so, again, as we think about AI, there are lots of questions we can ask and lots of priorities we can have. One of those has got to be: how do we make wellness better for clinicians? Because the burnout rates are currently at an all-time high.
And so thinking about how to integrate this into your practice in a very pragmatic, practical way is really important. And so as I've thought about how I'm going to use it, particularly in the early stages, as we're still learning and iterating, incorporating it into how I interact with patients and how I get my notes done, and making that workflow more seamless, is how I've interacted with it. It's a little difficult for me to speak on how it changed my practice, as an engineer. But in terms of what we're focusing on at Tulane today, it's a lot about digital health on commercial devices and readily available devices for the patients. And I think of it kind of like how you could go to the hospital when you have a fever and pay $500, but instead you could also just go to Walgreens and get some NyQuil and then feel better the next day. I'm not sure if this is the best analogy, but the point is, I think there are a lot of tools available nowadays where AI can help patients see some of these risk factors right away and communicate with their physicians better, instead of having to schedule an appointment and go to the physician right away. So, yeah. I agree with all of the comments that have been previously made. I think we have to look at AI from this particular perspective: I think we're early on in the journey. You know, we're kind of driving a Model T right now, and 100-plus years later, electric cars are becoming the manner by which we drive. Being in that early phase, I think we have an awful lot of opportunity to see where the weaknesses are, where the gaps are, and develop AI tools and AI programs to effectively fill those gaps. So, as others have said, patients get effective care, and they get it in a cost-effective manner, because I think that's going to be a very important component going forward. We already know, and you mentioned it nicely, Kevin, about physician burnout and overwork.
And I think it's not just physicians. It's the entire clinical team that is, you know, short-staffed and oftentimes frustrated and burned out. And we know right now that there are, like you mentioned, Elaine, things like the EKGs, which are set up initially with a reading that can then be modified. And we know that the diagnostic capability of these devices, using AI to read reports from implantable devices and read wearable reports and read imaging reports, is equal to or better than many physicians. So we have to figure out how to integrate that so we can free up the physicians and free up the other clinicians, so that they have time to actually communicate with the patients. Because one of my biggest concerns, with all the work that is out there and the demographic changes which are occurring and the large number of patients suffering from heart disease, is that folks don't have the time to truly talk to the patient, find out what their goals in life are, what is important to them, and put it into that perspective. So I look at AI as a tool, a very, very important tool, maybe the best tool that we've ever had in our field. But we have to look at it as a partner, as a tool, and together, by putting those things in place, I think we can get to the next best place. So in our organization, we're using it in a somewhat minimalistic way at this particular point: helping with scheduling, helping with some diagnostic considerations, and trying to limit the amount of burden that the docs are truly dealing with. But I see immense potential here. And I think if I had to make one statement, I would say that what we need to do is to use that Latin phrase, carpe diem. We need to seize the day and figure out where we want to go. Define what is important, prioritize, and move effectively in that direction. So a lot's already been said.
I'm going to break it up into four parts: when you look at AI, you can kind of look at it as predictive or analytic AI, generative AI, robotics, and virtual reality. And we've been dabbling in each one of these aspects of care across multiple disease states, and I think the commonest ones that many of us encounter are atrial fibrillation and heart failure. So just to give you an example: for atrial fibrillation, we've done a fair amount of work where we used ECGs to predict AFib in the future, and work at our place has suggested you can predict it with a certain accuracy five years from now. At the same time, there's work from our hospital showing that from a patch monitor you can predict which patients are going to develop atrial fibrillation in the next 13 days, and not only that, even predict which patients are going to develop ventricular tachycardia in the next 13 days, with an AUC of 0.92. So pretty darn good. Those are still investigational, but they're around the corner, I think. They need to be validated pragmatically in some clinical trials, but they're around the corner. And not only that, we're using cloud-based algorithms off the Apple Watch to monitor the QT interval. Again, investigational, but it's actively happening in all of our centers, and it's only a question of time before many of these inpatient situations become outpatient monitoring and home-based care using conventional variables. So we're getting there. We're moving in that direction, and I think some of that practice is getting into our daily lives. On the robotics side of things, I would also mention augmented reality. Jen Silva here founded a company called SentiAR, and we are using augmented reality while doing AF ablations, where we can pull in the electroanatomical map holographically and actually move your catheters in a personalized way inside the heart.
So there are forms of AI that are already finding their way into clinical practice investigationally, but soon they will become a regular part of our day-to-day practice. And the same thing with heart failure. I won't take too much more time, but I think in heart failure, right from the diagnostic component to the predictive component and the treatment strategies, you can only imagine self-management approaches using generative AI, where patients can actually talk to the data sets like a real person. That, again, is happening around the corner. We're using generative AI in our center, just like Kevin mentioned, to help us with DAX or Abridge, to write our notes, or through chatbot AIs to actually help create and construct a response to our patients' emails. This is investigational, and that response can then be overread by the nurse and sent forward to the patient. But it's relieving us of a lot of burden right now. So certainly a lot is happening in this space. Go ahead. So, from the other perspective, as partners to clinics, when you're talking about how the clinic is changing and where it's going: we're helping with remote monitoring of devices and sort of optimizing the device clinic. But as Dr. Monzi said earlier, we're still in the realm of 90% technology and operational efficiencies and 10% AI. It's still 90% of the time about curating data and fixing the system and making it easier, reducing burnout. But I think what's more important in this respect, where the clinic is going, is to think about the philosophy and the vision, where we can go. And when I co-founded 91Life, the ideology, or the idealism, was pretty simple. We need to take the doctor back to the center of patient care. Physicians have been disintermediated from patient care over the last 30, 40 years. It's become a lot about administration and insurance. It's become very difficult. It's about RVUs and Q2 and Q3 and all these hours. So how do we do that?
The way we conceptualized this, we wanted to create these tools and applications and augmented intelligence and mathematical modeling. But the idea was to go back to how patient care is envisioned from a medical and scientific point of view. So one, you have integration of data. Then two, you have delivering that knowledge to the patient and discussing it with the patient. And ultimately three, negotiating that patient care with all the participants. So our idea is that the first step in truly internalizing the value of AI is to do a much better job at integrating data, so that you can create a concise representation of the information that is found out there in vast amounts of research and data and history and different knowledge across hospitals and health systems, and make that palatable, make that digestible for the physician. So that's the first part, integration. The second, which I think is going to be much harder, is to also help the physician deliver this knowledge to the patient. In other words, for example, explain hazard rates and probability and a Bayesian framework to a patient, which is not going to be easy, because math is just not that easy. And then ultimately you empower the physician and the patient together to negotiate patient care with CMS, with other payers, with hospital systems, and so on and so forth. So, as the esteemed physicians here talked about, those are some of the low-hanging fruit in terms of how AI is being used. Ultimately, the goal here is to take all this knowledge, all this information, all this power, and put the physician at the center of patient care, so that we can go back to the way physicians had the power to decide how to treat patients. That's terrific. I guess what I'm hearing is that technology adoption is really not just about the technology, but also about the people. If a technology is like 10x better, it's a no-brainer.
If I could just walk in and talk and it creates a note: adoption. The problem is the 10%-better technology. It's a little bit better, and then it becomes: is it seamlessly integrated into my clinical practice? Do I need to get an extra notification? Am I going to get an extra alert to my email? It can lead to all sorts of other things. So integration, I think, also relies a lot on the things that you described. It has to be seamless, especially if it's not a 10x-better technology. I want to double-click a little bit on what you mentioned about burnout. Whenever I talk about burnout, the number one cause of burnout, surprisingly, is technology. We're talking here about technology causing less burnout, which, as a clinician doing Epic every day, I am very suspicious of. Can you convince me that this is going to be better for me, Elaine? I think it's a catch-22, with all of what my co-panelists are saying. And I think what's so interesting, from the last session to this panel, is what they're saying here: AI can do a lot of complex things. Diagnose ECGs, look at the medical records. But what can it not do? You can hear our panelists talk about what is taking our time: it's talking to our patients. What can it not replicate? Trust and the relationship with the physicians providing care. And it's a catch-22 because the more sensors and the more data we're taking in, the more we have to explain to the patients what we're finding: you have hypertension, you have AFib, you have heart failure. You need to do this, this, this, and this. And to educate them about all these things, for them to trust us that this is what they need to do to improve their healthcare, takes a whole entire long list of things.
So I think that it's good at diagnosing all these things, and it will allow us to provide better care, but then it helps us realize that we need to find a better way to have our patients work with us to improve their healthcare. So until AI can help with that also, this is why I say the user interface is really important, for them to understand. And especially when we're dealing with patients who are in their eighties, or over sixty, you know, if it's AFib, there's a lot of distrust in technology, and limited literacy in digital health. I think that's the bigger problem, not necessarily distrust, but not knowing how to use it. That might lead to some distrust, and it ends up being a complicated sort of catch-22. I guess what you're saying is that being a physician is a set of tasks, right? And if you unbundle it, there are a lot of tasks, you know, chasing charts or doing revenue cycle management, that you can outsource, so we'll have more time for the things that only we can do. Is that kind of what you're alluding to? Yeah, absolutely. I mean, just to highlight what Doug was saying: for the HRS, we had the panel of articles that was published in Heart Rhythm just before this conference, and I encourage everyone to take a look at it. And we talked about the digital dashboard, what would be best and very simple. For the digital dashboard, one of the things we said is, well, the most important thing for me is to contact the patient. So just having their contact information, like their phone number, pulled up immediately on the digital dashboard if there's a red flag would make my life easier, rather than going to the EMR and searching, you know, what is the best way to contact the patient? So I think those are ways to help efficiency, or to implement all of these algorithms, for us to deliver better care. Yeah, I agree. I think, you know, everyone's going to have a healthy amount of skepticism early on.
And, you know, look, it's going to require some investment, because we have to build platforms that will allow AI to do what it does best, right? And that's going to take time, to build those things. And this is real; I'm sure many of us feel it now. The sensors and wearables are incredible. We get great data. It's a great way for patients to be affirmed or, you know, brought in if there are concerns that you see. But it's a lot of time and investment. And so if we can create platforms, and AI is able to process that data in a reliable way, it can make things a lot easier for us. And I think the way you said it was really good: we do a lot of things in our capacity in caring for patients, and a lot of it takes a lot of time. I mean, how much time do you spend with a new patient trying to track down medical records, trying to get a nice summary of, you know, why is this person here to see me? What has happened? How many ablations have they had? Where are they in the treatment process? How many antiarrhythmic drugs have they been on, right? And so just think about the ability to have that synthesized and done for you, so that you read a nice summary of what's happened, and then you're ready to go see the patient, right? So AI as, like, a data information specialist at your side, so you can, you know, potentially spend more time ablating. Absolutely. Well, I don't really burn out from patient care; my computer burns out. But I do think that the burnout really comes from the abundance of data, not the AI itself. And so much of the AI models now are still insistently decision-demanding rather than decision-supporting for the physicians. And I think that's really the core of the AI development that needs to happen to optimize the physician workflow.
I think dealing with inefficiencies and redundancies makes it hard for all clinicians and leads to burnout. So I would say, from my perspective, there are a couple of things that I think are necessary as we continue to develop AI. One is reliability and excellence. It needs to be really accurate so that we have confidence. As I mentioned earlier, I look at it as a tool, and I look at it as a partner of mine, so that when we get results, we can rely on the fact that they're accurate. That leads to the second component of my idea, and that is expediency. We need to be able to communicate with patients, get that data to patients effectively, and we need to be able to get it to them in terms that they understand. A PhD engineer is going to be a lot different than, you know, someone who has English as a second language and minimal education. We've got to be able to look at it as a partner who can help with education as well. And I think that is really key, because then, when the patient comes in to see us, we know where they are. They have learned more about what is going on and what can be beneficial. And last but not least, we need to make it so that everything can, you know, connect and work together. Many times what happens at institutions, and it's through no negative fault on the part of the doctors or the administrators, is that people function in silos, and everybody's so busy that communicating is very difficult. So if your job is to do X and your job is to do Y and my job is to do Z, we can compartmentalize it, and we can have ways where AI can help communicate effectively. So we're not creating redundancies, but we're making sure that things do get reacted to and get done appropriately. And I think being able to communicate to patients about results, and being able to move it in that direction, would be helpful.
So I think it's important, when we're talking about AI out here, I'm guessing we're talking about generative AI and the role of generative AI, and not, you know, the analytic and machine learning end of things, which has different connotations associated with it. With generative AI, I think it's important also to know that utilizing it in clinical decision-making is a no-entry zone at this point in time, because there are enough issues with confabulation, hallucinations, and inappropriate and incorrect advice that can send you down the wrong track. So I don't think it's going to help out there immediately. Obviously, when we start training large language models on curated data sets, specifically, that's a whole different scenario where we can start using generative AI to help us with managing our patients. But in terms of, you know, enhancing efficiency, I think what you said, Hamid, is really spot on. It has to be seamless; it has to be a part of your workflow. If it's not a part of your workflow, then it's not going to work. Point number two is, I think the problem is that as we use generative AI to, you know, create our notes and save time, to summarize our charts and save even more time, our clinic visits can go from 20 minutes to seven minutes. Is that efficiency going to lead us to see more patients? What we thought we were using generative AI for, to enhance the humanism in cardiovascular care, or medicine as a whole, where you can spend more time face to face with the patient, is now being replaced by additional visits. And I think that's something we as a community really need to be, you know, sensitive to, and we need to prevent that slippery slope of turning efficiency into just seeing more patients, rather than keeping it about the patient. With that, I'll pass it on. Yeah. So in the beginning, technology was built by experts to be used by experts.
And then Steve Jobs came along and said, listen, we've got to make it simple. So I think the solution is that you need a true partnership between technologists and mathematicians and physicians. And if that were to be, let's say, a 400-meter Olympic run, I think we just got off the blocks and we had a false start. And I say this because health care is the second-slowest industry to adopt new technology, after the government. So I think what we need is for this partnership between technologists and mathematicians and physicians to happen in a way that information is delivered in a simple form. However, the drill-down has to be available there for the physician, in an easy enough format, so that trust is built between what the mathematicians and technologists are providing and what the physicians are consuming. So Dr. Wan said it right. The hardest part, I think the challenge we're far from even conceiving right now, is how do you replace, if you will, or replicate the relationship between physician and patient. Until we have true AI, at least, that's going to be very difficult. But we don't need to shoot for that. I think we need to just allow more time. So, as Jack was saying, instead of having more patients every seven minutes, you spend more time with the patient to deliver the knowledge and to negotiate the care, while the delivery of information from the system comes from an explainable AI and math technology model that allows the physician to, from time to time, poke at it and say, OK, let me see, this sounds suspicious. So you make it simple enough, but you don't make it a black box. I think the biggest problem we've had is that when AI started in medicine, it started as a black box. I was at the European Cardiac Congress in Warsaw in June, and there was someone presenting a model that worked really well for identifying all these arrhythmias. And when asked, how does it work, I really didn't like the response.
He said, well, listen, we've done it so many times. It works. It's tried. It's got FDA approval, and it works. I don't think that's going to be enough, because I know the physician is not just inquisitive, but passionate and caring about what decision they're making about the patient. So can it help in decision-making? Absolutely. But it needs to develop this trust between, I call it, machine and physician, but really it's between the people behind the machine and the physician. Sounds amazing. You guys have persuaded me that it is making me more productive. It's faster. I'm going to have less burnout. I'm going to have lots of time to go home on time. So why aren't we using it all the time? What's stopping us? What are some barriers that you can think of? Why aren't you using it all the time in your practice? We need the audience to help us. We need innovators to help us. I think that's what the panel is saying. We need shortcuts to fit it into our clinical workflow. I think all of us could just look at the number of apps we have on our phones for health care delivery. We're asking for consolidation, for easier access to bridge the gap. So I think all of our panelists are saying we're just at the beginning, and we're thankful to HRS for bringing innovators here, because obviously there are a lot of gaps and a lot of needs. I think those are the limitations right now. And I just want to echo what the previous panel said about the black box. Hamid, you and I and Jag and other people in the audience, if you're an engineer or scientist, you want to understand why it works. And understanding the mechanism inside the black box will help us further refine and tune the algorithm and further improve care. So I think that could also be helpful, instead of grasping at an unknown, wondering why it produces the results it does. But I think we're making good headway at the beginning, and the pace is accelerating very, very quickly.
And I think working with the audience here, with the innovators and inventors, and trying to improve the implementation of this will definitely lead us to the next future and bring it closer to a reachable goal for us. So a lot of it is making the model more explainable and making it seamless in my clinical practice. So, Kevin, one thing I wanted to ask you specifically about: the scenario being, I develop this AI algorithm, I give it a goal, I deploy it, and somewhere in the middle I fail to follow up to see if the goals are being met, and then the AI kind of drifts and starts doing things that I didn't want it to do. Am I too pessimistic about it? Have I watched too many sci-fi movies? And the implications for that in healthcare, right? Like, if you're optimizing for 30-day readmissions, am I going to, like, poison everybody so no one gets readmitted? Right. No, no. I think you raise a really important point, because sometimes in the process of development you can lose track of the outcome of significance, right? And so I think it's really important that we have core principles as we're developing these algorithms, using machine learning or more complex neural networks like deep learning, that will allow us to understand things better. Every step of the way, we've got to ask ourselves, are we holding true to our core tenets of what we're doing, right? Because that's where we go wayward, if we're not doing that and holding ourselves accountable. First, is what we're doing transparent and accountable, right? That's how we're going to build trust with our patients, and that's how we as clinicians are going to embrace this more, if we can answer that question. We also have to ask, is it trustworthy? Is it fair? Is it equitable?
Is it treating all patients the same, or is it understanding that the root of equity is that we all come to this at different points, with different challenges in terms of thinking about our health care and our lived experiences? So we've got to ask that. Is it cost-effective? And I think the point that Jag raised about, you know, if we become more efficient, what does that mean for us, right? We're going through that with PFA right now, and people are running around saying, yeah, I'm doing AFib in seven minutes. Why are we doing that? What are we trying to accomplish with that, right? To get our RVUs cut more, to get fewer things paid for? We're already dealing with that. And so as we tout efficiency as a goal here, we've also got to think about what that means for reimbursement. We have to have our payers at the table. We have to have our patients at the table to say, hey, you know, I don't want my visits to be seven minutes now. I want my doctor to talk to me in ways that I can understand, using this tool to support it where it can, in ways that make sense. And so we have to do all those things. So cost has got to be at the forefront of this conversation as well. And I'm going to shift gears a little bit, because I think this is important to talk about. As we think about machine learning and as we're putting data in, I'm just going to state a very practical want that I have. I want to know how to ablate persistent atrial fibrillation. I don't want a different technique that works sometimes and sometimes doesn't. I'm putting alcohol in the vein of Marshall one day. I'm isolating the posterior wall one day. I'm targeting complex fractionated electrograms one day. I just want to know how to ablate AFib so that my patient will have a great outcome, right? And so asking those pragmatic questions will help us take care of patients and, I know, preserve my own personal sanity about what to do.
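The earlier worry about a deployed model drifting away from its goal can be made operational: keep comparing the live distribution of model outputs against a validation-era baseline. The sketch below uses the Population Stability Index (PSI), an industry rule of thumb rather than any clinical standard; the score distributions here are synthetic stand-ins, not real risk scores.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference score distribution
    (e.g. validation-era model outputs) and live production scores.
    Common rule of thumb (a convention, not a clinical threshold):
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate."""
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Clip both samples into the reference range so every score lands in a bin.
    e_counts, _ = np.histogram(np.clip(expected, edges[0], edges[-1]), edges)
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)
    e_frac = np.clip(e_counts / len(expected), 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=5000)      # synthetic validation-era risk scores
live_stable = rng.beta(2, 5, size=5000)   # live scores, same population
live_drifted = rng.beta(5, 2, size=5000)  # live scores after population drift
print(round(psi(baseline, live_stable), 3), round(psi(baseline, live_drifted), 3))
```

A scheduled check like this is one cheap way to "hold ourselves accountable" after deployment: a rising PSI flags that the population the model sees no longer matches the one it was validated on, prompting review before the model quietly optimizes the wrong thing.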
I mean, these are the kinds of questions that we need to ask to really move the field forward. And it's done through partnerships, so that the engineers and the mathematicians we're working with are helping us answer the right questions, and they understand how this ties back to patient care. And perhaps efficiency should not be the human good that we're optimizing for all the time, right? There are other things more important than efficiency. Maybe, Chen Ho, I want to piggyback on a comment that Kevin made. One of the major issues with adoption of these models is always data, right? We have lots of data, but most of it is garbage, right? So can you maybe touch a little bit on the importance of data as one of the key limiting factors for getting these models to act properly? And what are the implications of that for compute? In two minutes. Yeah. That's a whole talk. So, you know, obviously the quality of the data is really important when you think about signals like ECGs and how clean they are, but also in terms of the population the data is representing. Poor sampling may also cause disparities. And a lot of these models performing poorly, not being generalizable, is because of this specific part. And one of the things that we talked about is explainable models and such. I'm actually a little bit against this idea of all clinical models being fully explainable. I've read a bunch of papers on ECG AI models and their saliency maps and such. And it goes into one beat and then shows this heat map of what it's looking at in a P wave or something like this. But there's also so much that is time-based: the frequency, the relationship between this peak and that peak. And these kinds of time-frequency events actually cannot be displayed in a saliency map that was designed for images. So we're actually explaining it wrong. And so much of it is happening because that's how someone else did it in a different field of AI.
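The critique of image-style saliency applied to 1-D signals can be made concrete with a minimal sketch. Everything below is illustrative: a made-up toy "model" (a single filter plus a ReLU, not any real ECG network) and a finite-difference per-sample saliency. By construction, the result assigns one number per sample, so it cannot express inter-beat timing or frequency relationships, which is exactly the limitation described above.

```python
import numpy as np

# Toy stand-in for an ECG classifier. The filter, window length, and scoring
# are all hypothetical; a real model would be a trained deep network.
rng = np.random.default_rng(0)
W = rng.normal(size=32)  # hypothetical "learned" 1-D filter

def model_score(x):
    """Scalar score: sum of rectified cross-correlation with the filter."""
    z = np.correlate(x, W, mode="valid")
    return np.maximum(z, 0.0).sum()

def sample_saliency(x, eps=1e-4):
    """Finite-difference gradient magnitude of the score per input sample.
    This is the 1-D analogue of an image saliency map: it shows which
    individual samples move the score, but a per-sample heat map has no
    way to encode time-frequency structure such as peak-to-peak intervals."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (model_score(xp) - model_score(xm)) / (2 * eps)
    return np.abs(g)

ecg_window = rng.normal(size=256)  # synthetic stand-in for one ECG window
sal = sample_saliency(ecg_window)
print(sal.shape)                   # one attribution value per sample
```

The output is strictly per-sample: even a perfect heat map of this kind stays silent about relationships between distant parts of the signal, which is why borrowing image-saliency conventions for time series can mislead.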
Actually, we ask ChatGPT something and then we don't really think about why it answered that way, as long as it's answering right all the time. And I don't know, maybe it's a different perspective as an engineer, but if it's right all the time, do we really need to know, when we don't really have a proper way to explain it? And these papers are being published everywhere, in the biggest journals, but I haven't really seen many papers addressing the problem of a time-series model being explained through saliency methods designed for images. It's kind of weird, but yeah, that's- I understand exactly what you're saying. The Greeks were always very wary of anyone who tried to explain everything, right? So maybe there is something there. Maybe, Tom, you can touch a little bit on the actual payment and reimbursement issues that are stopping you from adopting these in a healthcare system like yours? I think that's a very good point. Right now, there is no specific payment for AI, as we all know. However, we've got to be able to have administrators and leaders, who are facing very significant economic problems, understand that it can be beneficial. And I'll be frank: I don't think every institution can lead this. I think we need certain institutions that have the scientific expertise, have the engineering expertise, and have the operational connections to be able to drive something, prove that it can lead us to where we need to be clinically, and then show how it can create value, that is, by improving outcomes at a lower cost. I don't think we can do it across the board, and I think we need to be focused. Different institutions function differently, but if one or two can get together and show how that can be done and then demonstrate benefit, we can broaden it. So I'll just give an example, like guideline-directed medical therapy for patients who have left ventricular systolic dysfunction before they get a defibrillator.
We all know that the vast majority of patients getting an ICD are not on guideline-directed medical therapy at the maximally tolerated dose. So put in a program that can do that, that can work with the heart failure, general cardiology, or EP docs and get patients to that level, and show that maybe some patients don't need these devices, or, if they do, that they truly need them and have been maximized as much as possible. So I think we need to have institutions function as a COW, that is, a coalition of the willing: work together to build a program, then take it from your institution to broader institutions, because private practices, large institutions, and academic centers, small and large, are all going to have different approaches. So I would say start small, build something, have metrics to show it works, demonstrate that those metrics add value, and then broaden it and see where it goes. So maybe value-based models could be a forcing function for adoption on a large scale. Jag, so much of being a doctor requires tacit knowledge. You have to kind of be in the presence of the patient, interpreting not only words and images and data, but how you're interacting, their overall interactions with you. How do you see that hindering adoption of these models? Or is this something that's going to... Am I going to not want to use it because I'm afraid it's going to interrupt that sacred relationship, or is there something I'm missing there? Yeah, that's a good question. I wish I had 15 minutes to talk about that, because that's an amazing topic. I think it's really important to recognize that the massive processing power, the ubiquitous data, and the unlimited connectivity we have are going to change the way we receive and deliver care, for sure. We have to recognize that it's going to be a merger of both the digital touch and the human touch, and medicine is going to transform out there.
We have to, however, impose upon ourselves the discipline to keep technology tamed, so that it doesn't overwhelm us and overtake that human bond we have with our patients. So it's really... I know it's an important question, and you have just 25 seconds, and I want Blayron to have the last question, but I'm happy to talk about that more later on. We definitely have to talk about that some more. So what are some technological issues when it comes to adoption of these on a large scale? If you could, specifically touch on the compute limitations, both on the training and the inference side. I think the biggest limitation at a technical level, you could say, is access to data, which in electrophysiology is much more viable, but in other fields is a little more difficult. But I think the reason AI is slow to be adopted in healthcare is one factor that is much more significant here than in any other field, and that is ethics. You can use AI in finance and train a model; if it doesn't work, you throw it out and try another model. Worst case scenario, you lose money, usually other people's money, unfortunately. But in healthcare, you can't just try it, kill a few patients, and say, okay, I'm going to try with the other ones. So I think that is one big consideration. Just one final addition to what Chen Ho said about explainable models. It is absolutely true. You need multi-dimensional graphics to explain things like k-nearest neighbors and some of the machine learning techniques, and the time-series dependence when you generate the signal. So I don't think we need to get to that level of explainability where you can literally recreate the computer and the binary code and all that stuff. But I think there needs to be enough comfort, like there is for a mathematician working through a proof: I can use a theorem that has been proved, I know it's been proved, and if I really wanted to, I could go down and understand the underlying theory.
I don't need to necessarily know the guts of the system. But we need models that are comprehensible at a high level for physicians, and mathematicians need to understand what they're doing. It goes into a dimension beyond our sort of human intelligence, because we're talking about big data and all kinds of processing with dependencies that are difficult to follow. But ultimately, I think ethics, explainability, and a strong partnership between mathematicians, technologists, and physicians are what's going to catalyze this adoption of AI. And in my opinion, I think we're probably five years away from having a much smarter hospital with AI. Thank you so much. This was a lot of fun. I learned a ton. I wish we had some more time. Please go ahead and read the articles that are published in the Heart Rhythm Journal; there's a link in the description of the session. I want to thank our panelists. This was fantastic. I'd love to continue this conversation on the sidebar over coffee or drinks. Thank you again. Wow. Well, that was a discussion about what we know, where we're going, and what we want for the future of digital health in cardiac electrophysiology. I hope you have enjoyed this session live from HRX, giving you an impression of how HRX runs and the interesting format that is used here. We've learned a lot, and I look forward to inviting you to the next session of The Beat.
Video Summary
In this episode of "The Beat," Prash Sanders and Dr. Melissa Middeldorf introduce a discussion recorded live from HRX and hosted by the Heart Rhythm Society's Digital Education Committee. The key topic is Digital Innovation, AI, and the Future of Cardiac Electrophysiology. The expert panel includes data scientists, electrophysiologists, and AI engineers discussing the current and future roles of AI in clinical practice.

Key takeaways highlight AI's potential to enhance diagnostic accuracy, manage patient data, and reduce clinician burnout by streamlining tasks such as record-keeping and patient monitoring. However, the panel emphasizes the necessity of ensuring AI systems are transparent, reliable, and integrated seamlessly into the clinical workflow. They stress that AI should augment rather than replace the human touch in patient care, ensuring that efficiency gains do not compromise the quality of patient interactions.

Barriers to widespread AI adoption include data quality issues, lack of seamless integration, and ethical concerns. The panel calls for collaboration between technologists and healthcare providers to address these challenges and advance the responsible use of AI in cardiovascular care. The discussion concludes with an invitation for further engagement and exploration of the published articles in the Heart Rhythm Journal.
Keywords
Digital Innovation
AI
Cardiac Electrophysiology
Diagnostic Accuracy
Clinician Burnout
Patient Monitoring
Ethical Concerns
Heart Rhythm Journal
Heart Rhythm Society