The Use of Artificial Intelligence to Treat Cardiac Disease
Video Transcription
Good morning, everyone. Welcome to this joint session between the Heart Rhythm Society and the Great Wall International Cardiology Conference in China, titled AI and EP: From Hype to Real-World Practice. I'm Dr. Rong Bai from Banner University Medical Center in Phoenix. It is my great pleasure to co-chair this session with a true leader in this field, Dr. Singh. Welcome to this session. As AI evolves, we as clinicians need to understand how to bridge the gap between experimental potential and real-world daily practice, and I think this session does exactly that. Today we are hearing talks from outstanding speakers in this field. So without further delay, may I introduce the first speaker, Dr. Song Zuo from Beijing Anzhen Hospital, Capital Medical University. His topic is AI in the identification, screening, and management of atrial fibrillation. Welcome, Dr. Zuo.

Good morning, everyone. My name is Song Zuo, and I come from the National Clinical Research Center for Cardiovascular Disease, Beijing Anzhen Hospital, China. It is my great honor to give this presentation. Today my topic is Artificial Intelligence in the Identification, Screening, and Management of Atrial Fibrillation, and I will introduce my center's exploration in the field of AI. First, the evolution of artificial intelligence. As we know, the concept of AI was first introduced at the Dartmouth Conference in 1956, and over the decades it has gone through three important waves. In 2016, AlphaGo's victory impressed the world and reignited global interest, and from 2022, ChatGPT, followed by DeepSeek, pushed AI into a new era. Atrial fibrillation is one of the most important and hottest topics in our field. From this picture, we can see that over the past five decades, from 1971 to 2021, acute myocardial infarction was long the most published topic in our field.
But since 2011, the number of atrial fibrillation papers has exceeded the number of acute myocardial infarction papers for 12 consecutive years. And this is an important study published in The Lancet in 2019, the first study to use the sinus-rhythm ECG to identify atrial fibrillation. They concluded that an AI-enabled sinus-rhythm ECG can identify patients with AF, with an accuracy of nearly 80%. This study gave us the inspiration that, in the near future, AI could be promising in this field, and following them we began to perform some studies of our own. I will introduce our work from three aspects: AI in the identification of AF, AI in the screening of AF, and AI in the management of AF.

Our first study is about AF burden, one of the hottest topics in our field. We used a smartwatch with an algorithm-guided PPG to estimate AF burden. In this study we included 245 AF patients, obtained their PPG data, and cut the data into 30-second intervals. We found that at the interval level, the PPG sensitivity was about 96.3% and the specificity reached 99.5% for the estimation of AF burden. The second finding was that the AF burden estimated by PPG was highly correlated with the AF burden calculated from the ECG. We then performed an expanded analysis, enrolling another 728 patients who had undergone catheter ablation for atrial fibrillation. Their mean age was about 62 years, and two-thirds of them were male. Our first finding was that at the patient level, the PPG's accuracy reached 99%, sensitivity 98.6%, and specificity 99.5%, with consistent performance of about 98% across 24 hours. From the figure, we can see that the valid recording rate differs between ECG and PPG. For the ECG, the valid recording rate was similar between daytime and nighttime, about 95% versus 97%.
But for the PPG, the daytime valid recording rate was lower, about 44.8%, versus 86% at night. So based on these two studies, we found that AF burden can be estimated quite accurately by PPG, but false positives due to PACs and false negatives due to regularly paced AF are important causes of bias in the AF burden estimate. So we conducted another study using the smartwatch's built-in ECG to correct the PPG-based AF burden estimation. We set up a four-step correction process: step A is data acquisition and management; step B is model implementation; step C is feature matching and label adjustment; and the last step is label correction. From these five figures, we can see that the accuracy of AF detection improved substantially after correction.

Here is another study on single-lead recognition of atrial fibrillation. We collected 505 AF patients and another 1,032 normal subjects, cut the long-duration recordings into 30-second intervals, and created a dataset with millions of signal fragments. Using neural-network-based classification training, we achieved a segment-level AUC of 0.925. From the table on the right, we can see that if we prolong the monitoring period from one minute to 24 hours, the accuracy improves from 86% to 91%.

The second part is AI in the screening of AF. In our first study, we used a single-lead long-term ECG monitor to test the effectiveness of screening for AF. We enrolled 1,233 patients with a mean age of about 65 years, 58% of whom were male. The first finding was that long-term monitoring yields a higher AF detection rate, about 4.5%, versus 2.2% with conventional ECG monitoring. The second finding was that the detection of paroxysmal AF increased significantly, from 0.08% to 2.16%. The third finding was that paroxysmal AF detection was highest over the first two days of monitoring, when nearly half of the AF was detected.
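The segment-level workflow described above, in which a long recording is cut into 30-second windows, each window is classified AF or not, the AF burden is the AF fraction, and the device's calls are scored against an ECG reference, can be sketched in a few lines. This is an editor's toy illustration under those stated assumptions, not the study's actual code; all function names and numbers are made up.

```python
# Toy sketch (illustrative, not the study's code) of segment-level
# AF-burden estimation: each 30-second window carries a label (1 = AF,
# 0 = not AF); burden is the AF fraction, and PPG calls are scored
# against the ECG reference labels.

def af_burden(segment_labels):
    """Fraction of 30-second segments classified as AF (0.0-1.0)."""
    if not segment_labels:
        return 0.0
    return sum(segment_labels) / len(segment_labels)

def sensitivity_specificity(predicted, reference):
    """Score per-segment PPG calls against the ECG reference labels."""
    tp = sum(1 for p, r in zip(predicted, reference) if p and r)
    tn = sum(1 for p, r in zip(predicted, reference) if not p and not r)
    fn = sum(1 for p, r in zip(predicted, reference) if not p and r)
    fp = sum(1 for p, r in zip(predicted, reference) if p and not r)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Made-up example: 10 segments, PPG calls vs. ECG reference.
ppg = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
ecg = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
print(af_burden(ppg))                     # -> 0.5 (PPG-estimated burden)
print(sensitivity_specificity(ppg, ecg))  # sensitivity ~0.83, specificity 1.0
```

Longer monitoring helps for the same reason the accuracy-versus-duration table suggests: with more 30-second segments, individual per-segment errors have less influence on the aggregate burden estimate.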
We performed another study using a seven-day patch ECG monitor in a community-based cohort of elderly people. We enrolled a total of 10,000 older people in rural China and found that 444 were diagnosed with AF using patch ECG monitoring. The median AF burden was about 0.97%; 249 persons had persistent AF and another 195 had paroxysmal AF. Among those with paroxysmal AF, only 16 were identified by the 12-lead ECG. Based on these two studies, we are performing a cluster RCT to test the effectiveness of a single-lead, seven-day ECG patch in detecting newly diagnosed AF. We randomly assigned 128 villages to long-term screening and routine screening arms, and we set two endpoints: the first is the AF detection rate at the first year, and the second is the cardiovascular event rate at the third year. Three years from now, we will be able to publish our outcomes.

The last part is AI in the management of AF. Last year we launched Dr. Yingze, an AI cardiologist, at the Great Wall International Cardiology Conference. This app provides AI-personalized medical recommendations based on the case you give it, including a case summary, diagnosis and risk evaluation, treatment recommendations, and lifestyle recommendations. The last two studies are prospective cohort studies. The first is Polaris AF. In this study, we use a smartwatch with a PPG algorithm to evaluate AF burden. The study population is persons diagnosed with AF, and the intervention is wearing a smartwatch with the PPG algorithm to monitor AF burden. The primary outcomes we assess are all-cause mortality, cardiovascular disease, myocardial infarction, heart failure, stroke, peripheral vascular disease, recurrence of AF, and palpitations and symptoms. The last study is Polaris AF-CA. The difference between this study and the former one is the study population.
The population in this study is patients with AF who receive catheter ablation; the intervention and study outcomes are the same. In the future, our center's research has two directions: the first is AI-powered predictive models, and the second is AI-enabled wearable devices, focusing on AF assessment and personalized anticoagulation decisions. And lastly, please allow me to introduce the director of our center, Professor Ma Changsheng. He is the president of the Chinese Society of Cardiology and a former president of the Great Wall International Cardiology Conference. Professor Ma is also an Associate Editor of Circulation and Editor-in-Chief of PACE and the Chinese Journal of Cardiology. And this is me. We hope we can build a good relationship and cooperate with you. Thank you.

Thank you, Dr. Zuo. That was a lot of clinical protocols presented in one talk, and I think there are going to be a lot of questions. We'll keep the questions for the end of the session, especially since we've gone a little over time. But congratulations on the tremendous work, and we look forward to this collaboration with the Great Wall conference. So thank you, and we'll move on to the next talk. I'm going to introduce the next speaker, who actually needs no introduction: Dr. Nassir Marrouche, who is going to talk about a really broad topic, artificial intelligence in cardiology. Thank you, Nassir.

This is a big topic, artificial intelligence and cardiology. Let me ask you a question before I start: who has read the book Future Care? Raise your hand. Okay, you need to read that book. Jag, it's a great book, and not because he's a close friend; it's a necessity for everybody involved in cardiology and medicine to read it, to understand how we are, where we are, and where we're going. It summarizes everything. I wish I could just give you the book instead of the lecture, but the book makes it easy.
On that great book, Jag, you should really read it as a review. So, artificial intelligence and cardiology, that's a huge topic. When I saw the title, to be honest, I said, I don't know where to start. Where's Massad? He didn't know where to start either. In electrophysiology, we brought in pacemakers, we changed medicines and ablation, but I think AI will change medicine forever, and if you're not there yet, you'd better figure it out, jump on that train, and be serious about it. We still talk about AI in cardiology and AI in medicine as something cool, and Nick Peters will debate me about it, maybe yes, maybe no, and what we should do next. We need to get serious, Nick, you and me, and follow that Future Care guidance a little bit.

I want to talk about what I know and what my experience has been. You heard great work from Anzhen, amazing work, and everybody else with their own tools in their hospitals and clinics is doing a lot. This is us: we're trying to take advantage of AI in everything we touch and do. One of my closest friends and partners at our institution over the years has been Chen Ho Lim, an AI engineer. It started by accident, but it turned into a great friendship, and we have worked together over the last five years on how we move AI forward at our institution.

Let me start by telling you how we changed our practice based on AI. This is work from Dr. Feng, who's sitting here. He's not at Anzhen; he's with us. He's from Beijing, but he's with us now, and he's not going back, by the way; he's staying with us. He worked on something simple: using the data we have on our patients to make AFib not a binary, yes-or-no, paroxysmal-versus-persistent label, but rather a score. When I see my patient in the clinic today, he runs his algorithm and tells me, Dr. Marrouche, this patient's score as a paroxysmal patient would be, in this example, 0.4 or 0.98, a number we start with.
Why this is important: after an intervention, we can use this score system, based on Dr. Feng's criteria and algorithms, to tell whether that patient improved or not based on the data. That's AI, simple AI. He runs his algorithm; talk to him here if you want to use it in your clinic.

Here's an example of the ECG age that he gave back to us, with Yongxiu sitting here, also from China, and he's staying with us too. Yongxiu worked with Chen Ho to estimate age. This is from the sinus-rhythm ECG: AFib patients today, when we see them in our clinic, we give them an age. That's AI in the cardiology we know. AFib patient one, for example, has a chronological age of 55 and a biological age of 59; the other patient is 61, based on this algorithm. These are data we started putting together to understand AFib from a different perspective, with the goal we keep talking about: AI making it my AFib rather than just AFib.

AFib prediction in sinus rhythm: you've seen great work from China and others at Mayo Clinic, and Jag has been working on a lot of this as well, Nick, but we've gone far. This is work from Yongxiu, sitting here, who took 800,000 strips of ECG, and the beauty of this data is that it was part of a prospective trial, the DECAAF II strips, single strips of ECG. As you know, the ECG changes dynamically; it's a biomarker that changes with multiple factors. He took seven days: within seven days, if you record one shot of ECG a day, it would tell you, with a great AUC as you can see here, whether you had or would have AFib within plus or minus three days. That's great for the world we're living in, when we treat our patients and keep a watch on the wrist. I don't call it a pill in the pocket anymore, or a smartphone; it's really a watch on the wrist that can record your ECG and tell you a lot.
Whether you're going to have AFib or you had AFib matters, especially in the world we're living in: do we continue to monitor our patients at high risk of AFib, continue medication, blood thinners, and so on. Here's another piece of work using that strip, from Chen Ho, who gave us this risk score, with an AUC of, I think, 0.9, and we use it in the clinic; Yongxiu, sitting here, has been using it routinely. We look at patients post-treatment, we run this algorithm, and it tells us this patient is high risk: give them antiarrhythmics for the next three months until they come and see you after ablation, or not. That's a big deal for people dealing with ablations, and you'll agree with me, in the clinic; imagine this being scaled for something else, and there's a trial going on.

You may like Bill Gates or not, but this was recent, last week: in 10 years from now, maybe in seven years I think, based on the work you've seen from China and others, doctors and teachers will be replaced by AI. I kind of believe that. If we can trust a high-risk function performed by a car that could kill people, then we need to start trusting this AI. This is an autonomous car; you've seen them everywhere in the world. The car drives on its own on the street; if it veers right or left, it can kill people. It's a very high-risk task it's doing, so the need to trust is on us. We talk about this a lot, Nick and I: we need to start trusting AI and applying it to our patients, based on certain criteria, obviously.

The way we're taking this to the next level, at least at our institution, is by doing a couple of trials at the next level of complexity. This is TIP, the Tulane AI Prevent Study, where we're adding more layers of complexity: not only the ECG itself, but the patient's history, more data, imaging, everything we can grasp.
In this prospective study with Samsung and Boston Scientific, we're putting on patches to read the ECG alongside the watch, letting them teach each other, and we're also adding MRI scanning to that 300-patient trial, trying to understand what happens within a year of progression of myopathy: changes in ventricular function, flow, and so on, making it a more cardiac-specific trial. We're already at 70 patients; we started two months ago, and hopefully we can report what we find within a year.

The study I'm just as excited about is the Heartbeat study, which we started with Samsung as we speak. The reason I mention it is that I genuinely believe, and I'm more convinced today than ever, that we're going to have the clinic on the wrist. I'm going to repeat that one more time: in a couple of years, we're going to have a clinic on the wrist. With the work you're seeing, and after you read Future Care, by the way, maybe you'll be more convinced, but this is what we're doing in that direction. This is a Samsung watch, made for this study; we're working with Korea and Palo Alto on this. It's equipped to measure all this information, including blood pressure, and it has a sensor to measure blood sugar; we're testing this. We're collecting data on 10,000 participants within the state of Louisiana, and this is a slide from Chen Ho with the number of data points he's excited about: 1.3 trillion data points that are going to help us. And the key, everybody, by the way, is that none of this is uniquely clever; the ablation, even the ECG, everybody in the world can do this.
What's important is the data we have and the quality of that data, as you all know and as people talk about. That's why we are fortunate to have the Samsung relationship allowing us access to raw data; we have a deluge of raw data coming in at Tulane that's going to allow us to look under the hood and understand how we can implement this in predicting diseases and their interactions. The way this study was designed from day one, and I think Mayanna is publishing the study design soon, we're already at 1,300 patients in this trial; we're taking patients with the different diseases you've seen here, balanced. We want to understand disease interaction. Obviously we are electrophysiologists, but we have cardiologists involved, because the interaction of diseases is something we're missing: how disease one leads to disease two. Every three months we sit down with our whole AI team, under Chen Ho's leadership, and the team from Samsung, to define algorithms as we go, based on these diseases and criteria.

Study goals, as I mention one more time here: cardiovascular disease progression and changes over time, with up to three years of follow-up. Then we're doing something called a biometric fingerprint: defining the highest-risk patients for certain diseases as we go. If you have heart failure, what's your AFib fingerprint, and so on. We only recently started recruiting, in December, as you know, but we already have some data from the patients we have. This is an AFib fingerprint on a PPG you see on the watch, and hopefully, by the way, the things you've seen before will be on this watch for diagnosis. We're actually working with insurance companies on implementing a CPT code for this watch, but that's a different discussion. We can read this today with a good AUC, and it improves as we go forward and keep teaching that algorithm.
This is work from Heartbeat, obviously, and also the diabetes fingerprint, which we're very excited about: knowing that you have a risk of diabetes. This just came out, fresh; I'm showing it to you for the first time. So imagine how much data we can get out of this, defining our patients and starting to treat them as well. I think we're ready now for treatment, for implementing digital treatment with AI using these watches.

The last slide before I finish, going back to what I started with: today in my practice, in our practice, we've learned a lot about the AFib patient. We don't see the AFib patient, and I hope you don't either, as just paroxysmal or persistent. Across these two days of lectures, we're talking about AFib and ablation, ablation, ablation, which is great; we're excited to have this treatment implemented, but we need to think beyond that now. That AFib patient needs to be seen, when we ablate and treat them, in terms of everything we have listed here: the age and the severity of the AFib. It's really my AFib based on AI, not the AFib as we used to know it. And as we go, we're going to continue defining this. Thank you, guys.

Thank you so much for your passion and work. I think we have been learning from you over the last decades, from the Utah school to this my-AFib severity school. Thank you so much for your talk. Our next speaker is Dr. Narayan, who needs no introduction; I think everybody knows him. He will bring us the topic of the use of AI to improve outcomes in heart and vascular disease, a scientific statement from the AHA.

Thank you very much indeed, Dr. Bai, Dr. Singh, and thank you to HRS for the invitation. Just trying to get this to... there we go. Thank you very much. Awesome. So again, I echo what Nassir said. What we're going to do now is talk through a scientific statement I had the honor to co-lead from the American Heart Association, and try to build on the idea of how we translate AI tools to care.
You've already seen my disclosures, but here they are again: funding from the Laurie McGrath Foundation and the NIH, amongst others. So the key really is: AI is not AI is not AI. We have to build on the best that we have, the best data and the best algorithms, and then put them into the best workflows. In the words of this Dilbert cartoon: "I recommend adding Ricky to our AI project. He lowers the bar on what constitutes human intelligence, making it much easier for AI to succeed." "I will be honored to work on this project." See what I mean?

I'm going to start with some resources, just a few articles. This is the QR code for the article we're going to discuss, but here are some others as well. Heart Rhythm had a great series, coincident with HRX last year, with very practical tips. This is one that I did with Emma Svennberg on learning AI for the busy clinician. And this was a fabulous JACC scientific statement on the promises and perils of consumer mobile tech, led by Niraj Varma, Janet Han, Rod Passman, and others.

The position paper really says something along these lines: there are a lot of ways you can use AI, they're very exciting, and we know that many will work. But despite enormous academic interest to date, AI-based tools have not improved patient outcomes at scale. Small studies, yes, but not at scale. And this is to realize the vision that's in Jag's book, which also is fabulous; you should have a look at the interview he gave on a major news network discussing his book. So the goal of the scientific statement was to identify best practices, challenges, and gaps, so as to improve applicability. We did this in all these domains, and here they are. I'm going to go through each of these word by word... just joking.
What I thought I'd do is distill the essence of each of those seven or eight domains, from imaging, ECG, and wearables through to integrated care, into four key areas that form a thread through all of them. The first is how best to define the problem statement. The second is how best to curate the data, matched to the outcome and task you're looking at. The third is transparency: opening the window on the algorithm and the data. And then, as a thread that runs through all of this, future-proofing privacy, which we're all concerned about. I will not discuss ethics and regulation; it's an enormous topic and I'm not a lawyer, but it comes up throughout the discussion.

So, AI will always give you an answer; we'll see this. But what should we ask? A question we always ask is: is it relevant? Another way to phrase it: is it clinically or technically precise? Here's what I mean. In classical statistics, you propose the rule. I hypothesize that the number of PACs is related to future incidence of AF, as in Dewland and Marcus 2010; you know that's a theme. That would be a linear model, and the result of the statistics would be: yes, your p-value supports that hypothesis. You've given the rule, so it's easy to interpret; it did or didn't hold, and we could argue about how strong the data were. Machine learning is actually somewhat different. Here, you provide the data and the outcome. For instance, an ECG of AFib, which we know is AFib, and then you classify AF or no AF. We provide the data, the ECG, and the label, AF or no AF, and the AI gives you the rules. This is the problem. This middle bit, which I've put in the squiggly box (I don't know if you can see it), is the output from a supervised AI model: it's the rules that come out. This is why there's an interpretability problem. And of course, this gets more complicated with deep learning.
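The contrast just drawn, in which classical statistics has the analyst propose the rule while supervised learning derives the rule from (data, label) pairs, can be made concrete with a deliberately tiny sketch. The PAC-count feature, the threshold search, and all the numbers below are the editor's toy assumptions, not anything from the scientific statement; a real model would learn far less interpretable rules.

```python
# Toy contrast (editor's illustration): a hand-written rule vs. a rule
# LEARNED from (feature, label) pairs. The "model" is a one-feature
# decision stump, so the learned rule stays human-readable.

def hand_written_rule(pac_count):
    """Classical approach: the analyst proposes the rule up front."""
    return pac_count > 100      # hypothesis: frequent PACs -> AF risk

def learn_threshold_rule(features, labels):
    """Supervised learning in miniature: search for the threshold that
    best separates the labeled cases. The learned threshold IS the rule
    the model emits; deep models emit rules we cannot read this way."""
    best_thr, best_correct = None, -1
    for thr in sorted(set(features)):
        correct = sum((f > thr) == bool(y) for f, y in zip(features, labels))
        if correct > best_correct:
            best_thr, best_correct = thr, correct
    return best_thr

# Made-up training set: PAC counts and whether AF later developed.
pacs = [5, 12, 40, 90, 150, 210, 300, 20]
af   = [0,  0,  0,  0,   1,   1,   1,  0]
thr = learn_threshold_rule(pacs, af)
print(f"learned rule: PAC count > {thr} -> predict AF")  # thr = 90 here
```

The interpretability problem mentioned in the talk is exactly what disappears in this toy: with one feature and one threshold, the learned rule is inspectable, whereas a deep network's "middle bit" is not.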
You get hierarchical data and things like that. So this is all terrific if the data exactly matches your outcome. But if there are hidden biases, such as all the data being collected from one wearable while you apply the model to a different one, it may not apply. Or all the data was collected in men and you apply it to the real world; it may not apply. This is really important and affects everything we do.

I'm going to talk about problems in terms of key tables in the scientific statement. This was the electrocardiography table: develop a robust framework to apply AI to scenarios that appear superficially similar but differ in important respects. I'm going to show a few papers that got this wrong. Luckily we don't make these mistakes in cardiology, so I'm going to pull from the dermatology literature, because they make these mistakes. In this particular classic paper, Esteva et al., the original one in Nature, AI could pick out melanomas from nevi essentially perfectly. But when you applied it outside the original set, even the test set, it was not that good. Why? It turns out the pathologists were labeling the melanomas with dots. When you took the dots away, performance dropped off. The AI was learning dots really well; it was not so good at learning melanoma. The revised question could be: classify skin lesions without artifacts. It's a more refined question, which means you would automatically curate your data to look only at the pathology.

Another example: can AI diagnose from the chest X-ray, in this case COVID versus other forms of pneumonia? Again, terrific performance at first, not so good outside the original setting. Why? In this case, the AI was learning subtle features of high-risk hospitals, which could be New York or Milan, and those scanners have, for instance, the positional labeling in the middle rather than at the top left. These subtle things, you wouldn't think of; it's only because of these studies that we even think of them.
These have all been revelations, so the revised question has to take them into account. And so the guideline says: the clinical problem is difficult; define it not just with clinicians, but with data scientists, AI experts, imaging experts, and so on.

Next, data curation: is the data right for your AI? This is somewhat well known, I'm sure, to this audience: you need to make sure that you ideally curate the data, and if you use an existing registry, you have to be careful not to introduce biases. This was another example, the wearables table and section. The idea here is that if you train only on one group of people, only men, only people in England, only people on the East Coast, it may not generalize. This has become fairly clear, and there are subtle aspects of this that permeate all of what we do.

This is the table on AI and genomic cardiovascular medicine, Table 5. Here you can see the promise: personal AI genomics to predict cardiovascular disease, predict rare events, and enable targeted drug development. The gaps and challenges: are the data sourced broadly? Even when they are, most of these SNP (single-nucleotide polymorphism) targets came from GWAS studies in Europeans. That's a problem, and it's now being addressed, because otherwise this won't be transferable. So these are best practices; they may not be easy to achieve, but this would be the best way to move forward.

Transparency: make sure that versions are well documented. This is particularly true for regulatory purposes; I won't belabor it. I'll just go to Table 7, which is a framework for successful implementation across the board, and I've scribbled under a couple of these items. Different data sets: you've heard this a lot. Study benchmarking against current standards: we don't do enough of that, and I think it's key. What do you currently use? Compare against it.
Involvement of a multidisciplinary team, I think, is really key, and we've already heard that from this morning's speakers. And then explainability, if possible; a bit difficult, given what I said.

The final section I want to talk about is future-proofing privacy. HIPAA in the US doesn't always cover commercial companies and some wearables, so that has to be considered. GDPR in Europe and the UK GDPR are better, but they still don't cover the following: AI can be used to infer your identity, even without any personal data. This is an article I wrote with my colleagues while doing some graduate work in information science at the University of California. What do I mean by "used to infer your identity"? On the right is data leakage, and on the left is the following: companies who do not share your data often create their own metadata about you, and you'll find this universally in privacy statements. "We collect and use inferences from personal data, identifiers, demographic details, commercial data, internet activity, geolocation data," and they can sell those, because they're not your personal data. That's not actually covered by current legislation. There are ways around this that we have to think about when we design workflows, and basically they involve things like splitting up data across centers and splitting up algorithms across centers; I won't go into it because of time.

So in summary: AI-based algorithms can improve outcomes for patients with heart disease, for sure. We have to try to follow best practices. The problem statement needs a multidisciplinary team. Data should ideally be curated for your problem, or as close to it as possible, and then we need ethical guidance and regulatory input. Thank you very much indeed.

Thank you, Sanjiv, that was great. I now call upon Dr. Mahapatra. Srijoy wears many hats across leadership, science, industry, and academia, and it's terrific to have him here.
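The privacy remedy mentioned above, splitting data and algorithms across centers so that patient records never leave their site, is the intuition behind federated approaches. Here is a minimal editor's sketch under that assumption: a one-parameter "model" (a weighted mean) stands in for a real algorithm, and only fitted parameters and sample counts, never raw records, are pooled. All names and values are illustrative.

```python
# Minimal sketch (editor's illustration) of "split the algorithm across
# centers": each center fits locally on its own patients, and only the
# fitted parameter plus a sample count leave the site. Using a mean as
# the "model" keeps the privacy-preserving mechanics visible.

def local_fit(patient_values):
    """Each center fits locally; raw patient data never leaves the site."""
    return sum(patient_values) / len(patient_values), len(patient_values)

def federated_average(local_results):
    """Pool only (parameter, sample_count) pairs reported by each center."""
    total_n = sum(n for _, n in local_results)
    return sum(param * n for param, n in local_results) / total_n

# Two hypothetical centers with per-patient risk scores.
center_a = [0.60, 0.70, 0.80]
center_b = [0.40, 0.50]
pooled = federated_average([local_fit(center_a), local_fit(center_b)])
print(pooled)  # matches the mean over all patients, computed without sharing them
```

The design point is that the pooled estimate equals what a central analysis would have produced, while each site discloses only aggregate parameters; real federated learning applies the same pattern to model weights over many training rounds.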
Srijoy is going to talk about artificial intelligence and ECG modeling in digital health, and after his talk we'll all sit down and have a discussion for about 15 minutes or so. So hang out.

My name is Srijoy Mahapatra, and my main disclosure is that I am an employee of Abbott. I was also the president of an AI company. I want to give you a framework for how I think about AI in medicine; there are many, many ways to do it, and there's no pride of ownership here. From left to right: on the left is AI trying to do something a doctor can do, such as reading a chest X-ray or, as you saw, reading an EKG. To the right is trying to predict things a doctor couldn't do, like Dr. Marrouche showed with an EKG predicting who will get AFib, and others have done similar things. The top and bottom are different. The bottom is what you mostly see published: AI for things in the clinic, which can use big data sets of 10,000 or a million records. On the top are procedural things: for example, what should I do during AFib, where should I ablate? As was already alluded to, we say "AFib," but if you think about it, there are lots of subtypes of AFib, and this is where these procedural, so-called small-data techniques may help.

Now, I was a former hospital administrator, and I'll tell you, if you look merely at reimbursement codes, because there are reimbursement codes for these, as you move to the right and to the top, the reimbursement and the value to the system become greater. Think about it: reading an EKG reimburses one amount; being able to read an intracardiac electrogram and say "you can reduce redos by doing this" would be more valuable. That's why you're seeing people move in this direction.

So let's start with the simplest one: reading an EKG. I bring this up because some of the earliest cardiovascular work was just reading an EKG like a doctor, as Sanjiv already talked about.
You can feed in the expert read, and you feed in a bunch of EKGs, and you'll get a read. And that's something, I would say, many people have done. The next level, that you saw Dr. Maroosh talk about and the Mayo group has done, is using, for example, a sinus rhythm EKG to predict AFib. And I say this because at least I, and I don't think any of us, could look at an EKG, just with the EKG, and predict the likelihood of AFib in the future. There are some markers, but it turns out when AI looks at it, it might use different markers. It may not use the P wave; it may use something else, for example. And here's a dramatic example that was shared with me by Peter Noseworthy at Mayo. There was a 62-year-old woman who had hypertension and pre-diabetes and had an EKG during executive health. The system said she'll probably get AFib, and two years later she showed up with AFib. And I bring that up not because we know necessarily what to do with that person, but you might want to monitor that person more, for example. By the way, they're doing a study where they're looking at a consumer version of it and looking to see if they can change the time of diagnosis or even outcomes in the future. And they continued this. They worked with a company called Anumana, and they've now been able to predict 52 different diseases from EKGs, and they've even got reimbursement codes for it. Here's a list they provided me of some of the diseases they're trying to predict with the EKG. Remember, they're using the EKG only, not the EKG plus medical information. So just briefly, how do you train these tools? This will allude to some of my small-data techniques. You know, typically you get a formal read, as we talked about, and maybe you have, say, 100,000 or 200,000 patients with, in the case of Dr. Noseworthy, 650,000 EKGs. And in their case, they use a CNN to read them and produce these rules that Sanjeev alluded to.
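As a rough sketch of the supervised setup just described (raw signals plus expert reads in, a learned read out), here is a toy one-dimensional convolution of the kind a CNN stacks many of. The "spike detector" filter, the traces, the labels, and the threshold are all invented for illustration and are not the actual Mayo pipeline, which learns its filter weights from hundreds of thousands of labeled EKGs.

```python
# Minimal sketch of the supervised-training idea: signals plus expert
# labels in, a classification out. Everything here (signal shapes, the
# single hand-set filter, the threshold) is a toy assumption.

def conv1d(signal, kernel):
    """Slide a 1-D kernel over a signal (valid convolution, no padding)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def max_activation(signal, kernel):
    """A CNN-style feature: the strongest filter response anywhere."""
    return max(conv1d(signal, kernel))

# A toy "spike detector" filter and two toy traces standing in for EKGs.
spike_filter = [-1.0, 2.0, -1.0]
flat_trace   = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0]
spiky_trace  = [0.0, 0.1, 0.0, 1.0, 0.0, 0.1, 0.0]

# "Training" here is reduced to picking a threshold that separates the
# expert-labeled examples; a real CNN also learns the filter weights.
threshold = 1.0
for trace, label in [(flat_trace, 0), (spiky_trace, 1)]:
    pred = 1 if max_activation(trace, spike_filter) > threshold else 0
    print(pred == label)
```

Both toy traces are classified to match their labels; the point is only to show the shape of the pipeline, not its scale.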
But you could also, when you want to predict the future, use the formal read plus the medical record. That's something we in industry don't have. But then, by knowing what the EKG was, say, five years ago, and then seeing who got, I'll say, AFib five years after the EKG, you may be able to predict future events. You can do that time and time again. Now just keep in mind the number of patients they had to use to do this. I'm going to do a little aside, because I think you all should read Jack's book. In fact, I just bought it sitting here. But if you're ever looking to just kind of play around with the math of AI, a very simple thing from Google is the TensorFlow Playground, where you can just literally play around and see how it makes an AI model, a very simple model. In this case, for example, the model is trying to predict what the math formula would be for this circle. And you can kind of see it might be x squared plus y squared less than or equal to 9, for example. But it will try to sort of do it for you, and you can see how it thinks. It's just a nice introduction to the way one AI model works. All right, let's move forward to AI in intracardiacs. Here's a slide from Dr. Shivkumar from two years ago, or three years ago, I guess, at this meeting, where he tried to, or his group tried to, predict what a pulmonary vein potential is. Now we all, in fellowship, learned how to read pulmonary vein potentials, but the idea was, since we all agree where the pulmonary vein potential is, he had patients, in this case with a cryoballoon, a 72-year-old woman, that I can read here. And he just said, well, can I automatically mark every pulmonary vein potential with purple? And you can see there's a pulmonary vein potential, a pulmonary vein potential. As the cryo's going, you lose it, and then eventually it goes away. And his whole point was, I could just do this automatically. That was really the main point he was making.
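The Playground circle aside above can be mimicked in a few lines. This toy "model" only learns a radius threshold from labeled points, which is a far simpler learner than the Playground's small neural network, and all data here is randomly generated for illustration.

```python
# A toy version of the Playground-style circle problem: points inside
# x^2 + y^2 <= 9 are one class, outside the other. With squared
# features the boundary becomes a simple threshold, which is the kind
# of structure a small model can discover.

import random

random.seed(0)

def true_label(x, y):
    return 1 if x * x + y * y <= 9 else 0

# Generate labeled points, then "learn" a squared-radius threshold as
# the midpoint between the largest inside value and smallest outside value.
points = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(400)]
inside  = [x * x + y * y for x, y in points if true_label(x, y) == 1]
outside = [x * x + y * y for x, y in points if true_label(x, y) == 0]
learned_r2 = (max(inside) + min(outside)) / 2

def predict(x, y):
    return 1 if x * x + y * y <= learned_r2 else 0

accuracy = sum(predict(x, y) == true_label(x, y) for x, y in points) / len(points)
print(f"learned r^2 = {learned_r2:.2f}, training accuracy = {accuracy:.2f}")
```

Because the threshold is chosen strictly between the two classes, every training point is classified correctly, and the learned threshold lands close to the true value of 9.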
At Abbott, we also, and again, as a disclosure, I work for Abbott, have been trying to do that with lines of block. So we take experts, we have maps, in this case NavX maps, and we have three MDs who say, this is a line of block, and then the system tries to eventually learn that and says, well, these are the characteristics of the marker, probably using colors, electrograms, various things. Now, one problem with AI is that sometimes we don't know what it's actually looking at. And then it just draws a line of block. And that's the concept of people doing this. I will remind you, though, we don't get millions of intracardiac recordings in general. This is a mock-up, I was asked to show, I sort of made this up, but this is a tool we have called Volt, it's a PFA tool. And one of the challenges with PFA is we don't necessarily know if block is durable; you can't do a 30-minute hold time with PFA, for example. Steve Mickelsen likes to joke, you need a three-day hold time. Well, that's not going to work. So what you can imagine doing is PFA in patients, and then remapping them at 30 days, 90 days, et cetera, and trying to predict who gets block and where they get block. And so this is a complete mock-up, but using that, you can imagine it says, well, blue, you're probably done, this person's not going to get a gap there, you can come off there, but here, the white, you've got to keep going. So it's possible, instead of just ablating extra, we may be able to ablate the right amount in certain patients, and that could be with any tool. One challenge is you're not going to do thousands of remaps in people. Maybe you get 60 or 70. So people have tried to add mathematical rules into it. For example, you could say, hey, look at the amount of time you had contact force, the amount of time you had proximity, and the number of bursts delivered. And for example, if you have no contact, never say there's block. So you may not need as many patients. So my point was, it's actually challenging.
Well, EKG models can use thousands or hundreds of thousands of patients. It's hard to even get 10,000 patients with a new ablation tool. I mean, even getting 50 might be challenging. So I'm going to move a little bit to a concept called big data and small data. We all talk about big data. Three years ago, you almost always saw papers on big data. But more recently, we've talked about small data, and if you go back to about 1998, there actually were papers that were similar. And here are some challenges with big data. You all know this, but you have large chipsets that are very expensive. You have a lot of power consumption. They get hot. You may not be able to run these models physically on, say, a disposable tool in the EP lab. Some are relatively slow; you don't get an answer in 100 milliseconds. EKGs can take a second, which is fine in the clinic, but probably not in the EP lab. And they can hallucinate. So to step back to the conceptual side of this problem, imagine you're an oncology company using blood tests, a diagnostic test, to predict 32 cancers. I picked 32 just arbitrarily. If there are only two cancers, there are only two possibilities, but with 32 yes-or-no answers the number of combinations is about 4 billion. And you can kind of see this with linear growth versus exponential growth. If you can just cut one or two answers out, you might cut the model size in half, simplistically, but that half might be 2 billion. Now, in reality, some AI models are even worse. Not worse; more complex. With matrix growth, you get 2 times 2, then 3 times 3 times 3; you get the idea, up to 32 to the 32nd. And that number, if you look it up, is called a quindecillion. And you can kind of see that in this graph: here's linear growth, here's exponential growth, and there's combinatorial growth, or matrix growth. So there are people who've worked on this, and I'll just use some examples that are known.
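The growth figures quoted above check out with straight arithmetic; nothing in this sketch comes from the talk beyond the number 32.

```python
# Checking the talk's growth figures: linear vs exponential vs
# combinatorial ("matrix") growth as the number of possible answers rises.

n = 32

linear        = n           # one option per answer
exponential   = 2 ** n      # every subset of 32 yes/no answers
combinatorial = n ** n      # 32 choices at each of 32 positions

print(f"linear:        {linear}")
print(f"exponential:   {exponential:,}")        # about 4.3 billion
print(f"combinatorial: {combinatorial:.3e}")    # about 1.5e48, a quindecillion

# Cutting a single binary answer halves the exponential count:
print(f"one answer removed: {2 ** (n - 1):,}")  # about 2.1 billion
```

A quindecillion on the US short scale is 10 to the 48th, which is exactly the scale of 32 to the 32nd.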
One therapeutics company early on had a bunch of options for cancers, and one rule they finally put in their system says, if someone was born female, they can't get prostate cancer. That took one possibility out. At least you won't hallucinate that. You could imagine, and I'm not saying it's a real rule, never use RF on the posterior wall, if you're giving people advice. You can imagine looking at medical records and realizing that people with diabetes need less PFA because their tissue doesn't recover as quickly. These are all just possibilities; I'm not saying they're real. Similarly, if you're a car, you can have a rule like never cross a double yellow line. Well, actually, if you drive, you know you sometimes do, to avoid a bicyclist. So you could change the rules. But the point is, a combination of an expert system with traditional models may reduce the size of the model, make it more accurate, and make it more understandable. I'm making this one up, but imagine you have a model where you want to use short-term outcomes. So instead of one-year survival, if you use 90-day block outcomes as a surrogate, it may be more predictable. To get even shorter, one of the ideas Shivkumar's group had is an ask-the-expert model, which says, listen, forget outcomes; we're just going to ask, what would Nasir Maroosh or Sanjeev do? And it doesn't mean it's right, but we're trying to be like them, a little like a curbside consult. So that may be a possibility. And those can be done fairly easily with small data, because you actually interview them, and that's how they were doing these models. Shiv showed something similar two or three years ago. So I'll give a concept. I'm actually not a believer that AI will replace doctors. I'm a believer that doctors who don't use AI will be replaced by doctors who do use AI. That's my little bias. I do think if you're a diagnostic radiologist, you've got a challenge.
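The rule-plus-model combination described above can be sketched as follows: hard expert rules remove impossible answers up front, shrinking the space the statistical model has to search and ruling out whole classes of hallucination. The candidate diagnoses, the rules, and the stand-in model scores here are all invented for illustration.

```python
# Sketch of combining hard expert rules with a learned scorer: rules
# prune the candidate space first, then the model ranks what is left.
# Diagnoses, rules, and scores below are toy examples, not real medicine.

CANDIDATES = ["prostate cancer", "ovarian cancer", "lung cancer", "colon cancer"]

RULES = [
    # (predicate on the patient record, label that predicate excludes)
    (lambda p: p["sex"] == "female", "prostate cancer"),
    (lambda p: p["sex"] == "male",   "ovarian cancer"),
]

def prune(patient, candidates):
    """Drop every candidate excluded by a firing rule."""
    excluded = {label for pred, label in RULES if pred(patient)}
    return [c for c in candidates if c not in excluded]

def model_score(patient, label):
    # Stand-in for a trained model; fixed toy scores here.
    return {"prostate cancer": 0.9, "ovarian cancer": 0.2,
            "lung cancer": 0.6, "colon cancer": 0.3}[label]

patient = {"sex": "female"}
allowed = prune(patient, CANDIDATES)
best = max(allowed, key=lambda c: model_score(patient, c))
print(allowed)   # prostate cancer is gone before the model ever scores it
print(best)
```

Note that the toy model would have ranked the excluded label highest; the rule layer is what prevents that hallucination from ever being offered.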
But I think in many ways, AI will be a great assistant. It's already an assistant, but over the next 20 years, you're going to see more and more assistants, and our field is going to evolve, especially when you combine robotics. I think AI will make expert doctors even more valuable, because it requires judgment. It's not just, you're going to get AFib; the next question is, what do I do with that? And people with experience will be able to do a good job. I think AI is going to be first available in diagnostic areas. I think you're going to see it used more and more in the EP lab, and I think small-data techniques will make these much cheaper and faster, and just easier to use for everyday clinicians. Thank you. Thank you so much for your excellent talk. Now the session is open for questions. You can come up to the mic, or you can bring up a question from the panel. All right, while we're waiting for questions, why don't we start the dialogue? And why don't we start it where Srijoy left off? So, I do every day ask myself, what would Sanjeev or Nasir do, in an ablation case, before I make that decision. So there's a mutual admiration society out here. But my question to all four of you would be, let's take off from what I think Nasir started with, and Srijoy, you touched upon, and I think people are really interested in: how is AI going to change our care pathways, and as a result of which, actually impact our jobs to some extent? You know, there's a lot of stuff that's going upstream. What was done by internists is now done by advanced care practitioners. What is done by cardiologists is done by advanced care practitioners with algorithms. And so, I kind of believe to some extent what Nasir said, and to some extent what you say, Srijoy, that there may be a combination effect. But I think the job description for a lot of people is going to change. Nasir, you want to start off on that? That's a deep, deep question, by the way.
We deal with this every day. It's getting deeper every day when we walk through the hospitals, because, as we just said, we have AI tools to replace nurses. We have AI tools to replace techs. We have AI tools to replace primary care. And it's time for implementation. And if you ask me, I want to take the... I see Nick Peters already standing there... the other extreme, where I say, it's really time for us to start building. I'll give you an example of my daughter, answering your question. Every doctor wished their kids would be doctors, like when I went to med school 20 years ago. I'm not sure I want Samira to be a physician, because of this. I'm scared. How is this going to change the things we're doing? Scared in a good way, because I'm starting to trust the system, and I'm biased. Fewer emotions. I like the slide from Srijoy where he says, small things. But even the small things will be big things. There will be AI things to deal with. The accuracy level, the lack of errors, we're going to see more and more. We've seen this already in ECGs and so on. I believe it's on us now, and it's a discussion we have every day. In fact, at Tulane, we're officially starting the AI digital clinic, like you did, in a way. We need to start putting it there. Somebody has to take the first step, so people start getting on this. Do I need three nurses? Do I need five nurses? Is AI enough to do this or not? But, yes, this discussion, we have to start having it. And it's a tough discussion to have with the fellows and the residents and everybody working with us. We're changing all the jobs around us from A to Z, based on these models, 100%. Thank you so much. Now we have some questions from the audience. Please introduce yourself and then ask your question. I'm Nick Peters from London, England.
A number of you have alluded to the fact that there is a great deal of disappointment at how little of all this has transitioned into patient care impact, impacting our patients. So we're already behind the curve, and one thing that was disappointing about the AHA statement was the omission of the last mile: implementation, implementation to impact. There is a valley of death that faces all of this and everybody, and it is largely responsible for the failure of innovation to transition into impacting our patients. And we heard very little reference to it today. We have a blind spot to it, or a kind of willful ignoring of it, because, hey, if it's good enough, it'll just happen. But it really, really doesn't, and it requires management, and that's becoming increasingly apparent, and we're all frustrated, and there were references to it today, but not a sense of frustration at how we're going to address it. So I'm responsible, my group's responsible, for the biggest deployment of AI in the National Health Service in the UK, and this is superhuman-insight AI based on the ECG: to use Srijoy's framing, AI doing what a human can do versus AI providing superhuman insight. And we have 2,000 GPs who are using an ECG-based tool to make diagnoses, able to prescribe on the basis of the result, and turning to each other and saying, how is it we're able to do this? And it's being done without consenting. It's gone direct to care, and that is transition to impact, and that's what we need to focus on. That needs to be managed, and we have a framework for doing it, and we're on our third and fourth technologies achieving that. Can I interrupt you for a second, please? Yeah, of course. Because this is important, and this really goes back to: you go to Imperial College tomorrow, Monday morning, the nurse walks into your office, or you record an ECG on your patient, and you run it through an algorithm from Chan-Ho, and it tells you there's a 95% chance of this patient having AFib.
What do you do with it today? So I'm going to break that question into two parts and take your question also. You're talking about implementation, Nick. This has been going on for a year. You have the same, and I agree with you, but what are the steps you're taking yourself in the clinic today to use that ECG? But the real challenge of my question is all of this. So we're already behind the curve because we're not implementing, we're not impacting; we're already behind the curve. This session, if you don't mind my saying, was rather last year. There wasn't a single mention of large language models. Now, my question is, how will large language models, in a forward-looking sense, bearing in mind we're already behind the curve, how will LLMs really impact what we do? Arguably, this is yet another revolution. So I think the points you made were great. I don't think the whole session was last year. I think we had to look at the challenges we've had, but I think there are a couple of things. When you've got all the data in front of you, AI does incredibly well. That's why it does so well on images, because all the data for an image is in the image. When we've got stuff that we don't agree on, is it AF, is it AT, as Nasir said, I mean, there's occasionally a little bit of uncertainty, but on the whole that is harder. And I think the implementation that you've managed to achieve, Nick, is phenomenal. I think everybody gives you credit for that. Some of this is therefore a systems problem, but some of it is truly at the frontier of what we understand next. If we look at LLMs, what they're basically doing is associating across known data streams, and those inferences are only a hunch. We've seen the number of hallucinations. So I think all of this comes down to the same thing, which is, what's the foundation we can build on that we know? So imaging is a great example, where we know really well. Systems of care that are well worked out. ECG is a good next step.
Some of the things we discussed, like, you know, Srijoy showed AF ablation. There's so much uncertainty, and an LLM wouldn't do better than we do in a magic-wand sense. We really have to build it hand in hand with traditional translational science and clinical science and test it. So I think we might all agree on that, but that's why I believe there is this collision between, on one hand, there will always be a need for physicians and physiologists, but, of course, AI will automate some of it. I think what's different between last year and this year, Nick, is that I showed you today a lecture that I'm using at my clinic, and that's why I tried to ask you the question. You need to start using this algorithm in your clinic to treat your patients, and until you start doing it yourself, the NHS will not follow. Okay, let's go to the next question. Go ahead, please. Thank you. Good morning. I'm Parisa Asher, and I'm actually an undergrad at Duke majoring in biomedical engineering and have done some work in predictive modeling. My question pertains to the role of input data: whether we should be using raw images of, for example, 12-lead ECGs, or rather time-series recordings. The reason being is I saw a few examples of you using CNNs, or convolutional neural networks, on these raw images, but then if you use actual time-series recordings, we can't really use CNNs because we have something known as a vanishing gradient, where, since they're very long, we'll, in essence, forget the beginning parts of the input data, so we use other deep-learning models like bidirectional long short-term memory networks. So the essence of my question is, how would we delineate when to use raw images of 12-lead ECGs versus time-series recordings of ECG data in predictive modeling? Sanjeev, do you want to take that? Sure.
So there are many different approaches; you probably know more about this than me, by the sounds of it. But if you think about recurrent neural networks that encode time series and are time-dependent, there are many different ways it could be encoded. At the end of the day, if you only have a snapshot in time, you're always getting a probabilistic nature of an ECG recording. Even if you were to look at 10 minutes or 20 minutes, it's still a probabilistic recording. So I think the real question will be some sensitivity analysis of, as you give more data, do you get better or not against the output? Like, you only need one chest X-ray to know that there's a mass in the right middle lobe. It's a bit harder for, is it AF or AT. It gets even harder for, is this person going to get sudden death. And to me, that's the bigger question. And then I think there is going to be a little bit of technical discussion, which you would know far more about than many of us. But I think the big question would be, what is the exact endpoint of the model? Yeah, if I can just add to that. I think when you look at dynamic modeling, especially for predicting, let's say, AF readmissions or heart failure readmissions, those are dynamic events, and better predictive models come from more continuous streaming of data, because baseline variables or baseline digital images don't necessarily predict something that's going to happen six months later, as opposed to something that is closer to the event. And exactly defining what that close period is. And sometimes it's the dynamic data stream that gives you more information. So, for example, just from your patch monitoring, you can get more information looking at a few hours of patch monitoring, looking at the data distribution of PACs and heart rate variability as a digital image with a CNN, as opposed to looking at just a single ECG strip and trying to derive from that.
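As a brief aside on the vanishing-gradient point raised in the question: the effect can be seen with plain arithmetic, since backpropagating through a long recording multiplies one per-step scaling factor per sample. The 0.9 factor below is an arbitrary stand-in for such a factor, chosen only to illustrate the decay.

```python
# Illustrating why plain recurrent nets "forget" the start of long
# recordings: the gradient from an early sample is scaled by one
# factor per time step, and factors below 1 compound toward zero.
# The 0.9 per-step factor is an assumption for illustration only.

factor = 0.9
for steps in [10, 100, 1000]:
    grad_scale = factor ** steps
    print(f"{steps:>5} steps: gradient scale ~ {grad_scale:.3e}")

# At 1000 steps the scale is around 1e-46, effectively zero; this is
# why gated models like LSTMs, or 1-D convolutions, are preferred for
# long time-series inputs.
```

The same arithmetic explains why the problem worsens with recording length rather than with model size.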
So I think a combination is going to be important, but defining it more accurately is going to be a little challenging. Thank you. Let's take a last question from the audience. Sorry, the engineer in me actually has to answer the previous question really briefly. There are one-dimensional versions of most two-dimensional models now, so you can actually apply time-series data to architectures that were built for image classification. I'm Chan-Ho from Tulane, and my question is, having been developing AI in EP for a couple of years now, I am really confused about what direction we need to take these AI models in. So we think about these AI models that have been built already, and some of them, like we talked about, might replace nurses and replace physicians. And then I also read some interesting data from UPMC about how every five-minute increment in the time spent with the physician during the visit, per patient, directly leads to better readmission rates and outcomes. But there's also data from Mayo where the number of visits affects outcomes for the patients as well. So which direction do we need to take in developing AI models? Do we replace what physicians do to maximize the time that physicians can spend with the patient, leading to more frequent visits with the patient, or is it going the other way around, where everything is just automated? I'm really confused about which direction we should take. Anybody want to take it? Yeah, I think I'll just start, and we can all chime in. I think when something is well-established, it should be automated. Okay, so, like Nasir spoke about multimodal sensors: that should be integrated and give a profile. So that's that. Then, I don't think we set out to replace anybody; we set out to improve workflow. Where are the pressure points in the workflow?
If the next pressure point is who gets referred or not to an EP, let's say, then you could develop a set of models over what kinds of patients get referred appropriately or not, and that could be a focus of model development, et cetera. And then in the EP lab, who does well, who doesn't do well. So if you break it down that way, I don't think the goal should ever be who gets replaced, but rather what are the pressure points to improve outcomes. I'm going to make an editorial comment. There's a wonderful saying by Clayton Christensen, who was from the Harvard Business School, who said that disruption is a process and not an event. And unfortunately, even the adoption of AI is a process and not an event. It's not going to be an overnight thing where we find that we can suddenly change it. But I do agree that we need to be continually transforming. So I was asked to give a talk yesterday about AI being "actually irrelevant" or "always innovative", AI versus AI. And I think what we need to do is find a middle ground, continuously try to convert the irrelevant into something that's innovative with pragmatic clinical trials, and try to bridge the gaps, as was alluded to, by figuring out what the issues are, what the unmet needs are. But care will always be an integration of AI with sensors, with virtual care, that needs to be integrated into our care pathways. Thank you. So I love that question, because it frames part of the answer to the question that I was asked but sat down before I had to answer. It's the contextual factors, as was alluded to in the answer to that question, and the behavior change. I mean, we have an enormous amount of work to do to change behavior. You'll stand in an auditorium like this and everyone will go, well, my patients just want to see more of me. They want to see more of me, and therefore we should find a way of allowing them to see more of me in a higher-quality sense. But, you know, we do clinics suited and booted.
Forty years ago, if I needed an overdraft, I would go to a bank and I would sit across a desk from a guy who was suited and booted and ask for the overdraft. Now, we have patients traveling hundreds, thousands of miles to see us, suited and booted, sitting at a desk, while I can look at my phone and transfer half a million dollars just by doing that. I'm bragging now, but, you know, fintech, banking, finance has moved all that distance in 40 years. Healthcare has not changed one bit. We're suited and booted. We're all suited and booted here. We're having patients come to see us. So much of that is redundant. We've got to change how we do it. We could consume the entire economy of this planet on healthcare if we allow it. We've got to change our behavior, and it's us in this room. It's not our patients. On that note, thank you so much for your comments and a promising discussion. Thank you very much. Yeah, I think time is up. Thanks for being with us, our outstanding speakers. Let me close the session. Thank you so much.
Video Summary
The session "AI and EP from Hype to Real World Practice" at the Great Wall International Cardiology Conference addressed the integration of artificial intelligence (AI) in electrophysiology (EP) and cardiology. Dr. Rong Bai and Dr. Singh co-chaired the session, emphasizing the need to translate experimental AI potential into real-world medical practice to improve clinical outcomes.

Dr. Sun Zuo from Beijing Anzhen Hospital detailed AI's applications in atrial fibrillation (AF) screening and management. His research involved using smartwatches and single-lead ECG monitors for accurate AF detection and management, achieving high accuracy and advancing AF burden estimation.

Dr. Nasir Maroosh discussed the transformative potential of AI in cardiology, emphasizing integration with wearable devices to predict cardiac events. He highlighted ongoing trials involving advanced AI models and noted that the changing landscape might redefine the roles of medical professionals.

Dr. Sanjeev Narayan shared insights from an American Heart Association scientific statement, focusing on challenges and best practices in applying AI in cardiology. He highlighted the importance of multidisciplinary approaches, thorough data curation, and transparency in AI algorithms to ensure effective implementation.

Dr. Srijoy Mahapatra touched on AI's utility in ECG modeling and digital health, distinguishing AI models' capabilities beyond traditional tasks, potentially elevating clinical judgment and procedural decision-making.

The session, while reflecting on AI's substantial potential, also acknowledged obstacles in data transparency, ethical considerations, and the need for cooperation among healthcare professionals for successful integration. The discussion concluded with the recognition that AI could significantly reshape healthcare delivery and redefine medical practice roles.
Keywords
AI
electrophysiology
cardiology
atrial fibrillation
wearable devices
ECG modeling
clinical outcomes
data transparency
healthcare integration
ethical considerations