How AI Can Facilitate Diagnosis and Catheter Ablation of Arrhythmias (non-ACE)
Video Transcription
All right, well, it's my pleasure to welcome you to our session today. I'm Dr. Christine Albert from Cedars-Sinai Medical Center, and my co-chair is Dr. Sanjay Gupta from St. Luke's Mid America Heart Institute. Our session today is on how AI can facilitate diagnosis and catheter ablation of arrhythmias. If you have not already, please download the HRS 2025 mobile app from your app store; this is how you can participate in our session today. You'll see there's a QR code, which you can scan to send questions, and we will see them up here and be able to ask the speakers. We have four presentations, 12 minutes each, and then we're going to have a question and answer period, so most likely we'll get to the questions afterwards. Our first speaker is Dr. Clement Bars, from the Hospital St. Joseph in Marseilles, speaking to us about tailored ablation for atrial fibrillation guided by AI, the Tailored AF trial. Okay, dear colleagues, dear chairmen, thank you for the invitation. It's a real pleasure to be part of this session about AI and arrhythmia. So my presentation today is tailored ablation for atrial fibrillation guided by AI, the Tailored AF trial. The question is how AI can help with persistent AF ablation, with two sub-questions: where to ablate and how to ablate. One of the biggest challenges today is detecting the electrical patterns of AF drivers, because of their high complexity and the variability in interpretation. We have known since the JACC publication in 2017 that visually tailored ablation guided by spatio-temporal dispersion works and is associated with very good acute and long-term outcomes. But this complex and subjective analysis gives poor reproducibility. So the idea was to create this kind of circle: annotation, database creation, and training of an algorithm to provide live, reproducible, automated maps that finally guide the ablation, and so on. But before that circle, we need to focus on three major steps: the data, the building and training, and the clinical validation. The data is the first one, and it's a big challenge, because interventional EP is a complex field with heterogeneity and a lot of variability between different equipment, so collection and annotation of the data is a logistical and technological challenge. To address this, a platform was developed to pre-annotate live, anonymize, clean, and store the data, and finally to be able to work on it. This is how it looks. Once the data is inside, you can run campaigns, for example multi-annotator campaigns, and compare the results, or you can come back to electrograms leading to AF termination. The second step is the building and training of the algorithm. Here we have a database composed of over 500,000 electrogram samples, annotated for the presence or absence of dispersion, coming from several hospitals in Europe and the U.S. The regions leading to AF termination were given more weight in the training process, in accordance with the cumulative effect of ablation. The algorithm itself is a binary classifier consisting of a combination of two algorithms: a supervised machine-learning one, working on features extracted and analyzed by physicians, and a deep-learning one, directly trained on raw data and dispersion labels to predict new features.
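As a rough illustration of how two classifiers of this kind, one on physician-style features and one on the raw signal, could be combined into a single per-electrogram dispersion likelihood and then mapped to a display color, here is a minimal sketch; the features, weights, thresholds, and color cut-offs are all hypothetical placeholders, not Volta Medical's actual algorithm.

```python
# Minimal sketch of ensembling two dispersion classifiers per electrogram.
# All names and thresholds are hypothetical; this is not the commercial algorithm.
import numpy as np

def handcrafted_features(egm: np.ndarray) -> np.ndarray:
    """Physician-style features: amplitude, number of deflections, active fraction."""
    amplitude = egm.max() - egm.min()
    deflections = np.sum(np.diff(np.sign(np.diff(egm))) != 0)
    active_fraction = np.mean(np.abs(egm) > 0.1 * np.abs(egm).max())
    return np.array([amplitude, deflections, active_fraction])

def ml_model_predict(features: np.ndarray) -> float:
    """Stand-in for a supervised model trained on physician-extracted features."""
    w = np.array([0.2, 0.05, 1.5])                      # hypothetical weights
    return 1.0 / (1.0 + np.exp(-(features @ w - 3.0)))  # logistic score in [0, 1]

def dl_model_predict(egm: np.ndarray) -> float:
    """Stand-in for a deep network applied directly to the raw electrogram."""
    return float(np.clip(np.std(egm) / (np.abs(egm).max() + 1e-9), 0.0, 1.0))

def dispersion_likelihood(egm: np.ndarray) -> float:
    """Combine both scores into one likelihood that the bipole shows dispersion."""
    return 0.5 * ml_model_predict(handcrafted_features(egm)) + 0.5 * dl_model_predict(egm)

def color_code(likelihood: float) -> str:
    """Map the likelihood onto a simple color code for the mapping catheter display."""
    return "red" if likelihood > 0.7 else "orange" if likelihood > 0.4 else "grey"

rng = np.random.default_rng(0)
egm = rng.normal(size=1000)          # placeholder bipolar electrogram sample
p = dispersion_likelihood(egm)
print(f"dispersion likelihood={p:.2f} -> {color_code(p)}")
```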
These two kinds of features are used to produce a likelihood table for each electrogram, dispersed or not, and to simplify the workflow this is translated into a color code on the mapping catheter. So this is the interface, the first version and the first algorithm used in Tailored AF. At the center is the multipolar mapping catheter, here a PentaRay, and the color code for the presence of dispersion on each bipole involved. Safety and reproducibility were already demonstrated in the EVA-FIB study, but this workflow and this algorithm still needed strong clinical validation. That clinical validation was provided by the Tailored AF trial, published this year in Nature Medicine. Tailored AF is a large-scale, multinational RCT designed to evaluate whether targeting spatio-temporal dispersion detected by AI, in addition to PVI, is superior to PVI alone in persistent and long-standing persistent AF patients. Patients were assigned in a one-to-one ratio, tailored versus anatomical: 51 investigators, 26 sites in five countries, transatlantic (US and Europe), and a severe population, 55% with AF for more than six months and 18% for more than 12 months. It's important to note here that the follow-up was stringent, blinded, and independent, and that none of the historical authors on dispersion, including myself, were involved in the trial as operators. This is the workflow for the tailored arm: a smart PVI following dispersion, and in addition to dispersion-area ablation, contiguous lesions and connections were required. In the anatomical arm, classic PVI for the index procedure, and for the redo, re-PVI if needed, plus lines. Only RF, at no more than 50 watts, in the two groups. The primary endpoint was freedom from AF after one procedure in the whole population, and we observed superiority in the tailored arm. For the secondary endpoint, freedom from any arrhythmia, there was superiority, not significant in the whole population after one procedure, and significant after 1.2 procedures per patient. In the pre-specified subgroup with AF for more than six months, for the endpoint of freedom from any arrhythmia, there was significant superiority after one or more procedures. So this is a summary of the trial, and finally it's important to note that this is the first large-scale international RCT showing a benefit of ablating beyond the PVs guided by electrograms, and the use of AI was decisive in achieving these results. For sure there were some questions, the first one being AT recurrences. First of all, recurrence as AT is associated with a high success rate after repeat ablation, higher than for paroxysmal AF and even more so than for persistent AF. Secondly, it seems that sinus rhythm conversion during the index procedure is associated with a lower rate of AT recurrences. The point is that in these repeat AT procedures, 100% of the ATs were terminated by ablation, and the great majority were macro-re-entries, simple macro-re-entries, easy to understand and easy to ablate: peritricuspid, roof-dependent, perimitral, and 93% of them were due to incomplete connection during the index procedure. For localized ATs, 100% of them were due to incomplete ablation of an area that was dispersed on the index procedure. So the AT question exists, but it is easily addressable, essentially by following and respecting the ablation protocol. The other question is myocardial preservation. This is the data from the trial, the mean area ablated: on the left, 18%; on the right, 2%.
So this workflow, adapted to the AF substrate, allows for myocardial preservation. What about actual contractility? We don't have data on this in the trial, but we conducted a series of 24 consecutive patients with an extensive workflow and anterior wall ablation, and after 22 months all of these patients had recovered an A-wave. Another question is procedure time: for sure it's much longer than a classic PVI, and we have to find something to improve it, and it seems that new energy sources, in particular PFA, are appropriate to integrate into the workflow and to drastically decrease procedure time. Here are some series with different PFA tools. We conducted several series of patients with an Affera catheter, and the mean procedure time is about 100 minutes; this is the classic workflow with mapping and ablation, with PFA applications from the same catheter, and organization into sinus rhythm, here after a common flutter, for a very short procedure. So what more can we do to improve the workflow? Probably improve the mapping, because this was the first version of the algorithm, and the algorithm keeps improving. Based on known principles of AF source dynamics, the algorithm was retrained and fine-tuned to obtain a new version that prioritizes targets based on their consistency in time and their intensity. We compared this new algorithm with the classic one used in the tailored arm, and we observed a significant reduction in the dispersion extent of the target areas, a reduction in RF time and RF time to termination, and a reduction in procedure time, while there was no difference in terms of acute sinus rhythm conversion and AF termination. After 12 months, with a loop recorder for each patient, the results were comparable to the tailored arm. So with more and more data and the evolution of the VoltaPlex, the data platform, we observe a lot of evolution of the algorithm, and what can we expect in the near future? Probably an improvement in the bipolar signals analyzed live by the algorithm, probably new maps according to dispersion probabilities, and certainly maps in sinus rhythm; a very promising generation of AI-tailored ablation sets based on outcomes and AT prevention; some tools like an organization index, giving live adjustment correlated with organization probabilities; and some help with AT diagnosis prediction before mapping during an ablation, here an example for a common flutter, with the algorithm outperforming humans analyzing the EKG and the CS sequence. We can imagine something like an AT radar to address the AT question during an ablation. Just one word before finishing about RESTART. RESTART was presented this morning as a late-breaker by John Newell and is studying the tailored workflow for repeat procedures with the PVs already isolated. It's an open, single-arm study, essentially US, and for this very hard population the results are pretty good. So why AI for persistent AF ablation? Because it's effective, and to date Tailored AF provides the most robust results versus standard of care in persistent AF patients; because it's safe; it's reproducible; it's promising; and it's probably just the beginning, because we can imagine a global tailored approach with clinical data, biology, maybe imaging, maybe voltage maps, and past ablation sets, to provide the most appropriate ablation set with the most appropriate ablation catheter. Thank you for your attention. Thank you, Dr. Bars, excellent presentation; we'll save questions for the very end, I think.
So next, our next speaker was the only one who got to sleep in his own bed last night: welcome Dr. Gordon Ho from UC San Diego, who will be talking about using AI to guide successful VT ablations. Welcome to San Diego, everybody. Thank you for coming to my talk and not going to the beach. There will be one polling question; if you scan the QR code into the session, you can respond on the audience response system when we get to it. All right, so AI has touched every part of our lives. If AI can guide you to your best clean, then it can guide us to the critical isthmus of VT. This toothbrush algorithm is actually pretty cool: it uses motion sensors and applies deep learning to its motion for every tooth, and it tells you if it thinks you missed the broccoli on the back of your left wisdom tooth. So we should learn and apply these AI principles from other fields to help us with VT ablation. And because it's a risky procedure with a complex workflow, there are plenty of opportunities to improve on. In echo, deep learning segmentation algorithms have been used to automate the calculation of ejection fraction, and we all know that it is notoriously variable when drawn manually by different sonographers. This is a good example of how automation can improve accuracy. So even though VT ablation outcomes have improved significantly, success and complication rates are still suboptimal. But I feel that AI has the potential to improve all aspects of the VT ablation workflow, which I will cover a little bit today. Let's follow the journey of one of my patients. He's a 63-year-old male with a history of coronary artery disease that was fully revascularized: he had an LAD CTO, RCA disease, and left circumflex disease, all stented. He had ischemic cardiomyopathy with an EF of 43%, and diabetes. He was doing well, very active on GDMT, but then one day he presented to an outside hospital with syncope and showed up with this. Could this have been predicted? Could he have been protected? Should he have had a primary prevention ICD? The guidelines tell us no. So how do we better predict VT? AI may be used to predict which patients will develop VT better than an EF of 35% alone. Deep learning of MRI and clinical characteristics has been developed to predict VT and sudden cardiac death; the AUC was pretty good at 0.72. And once we decide to do VT ablation, how do we prevent procedural complications and death? Well, deep learning of preoperative ECGs was able to predict death within 30 days of a surgical procedure better than the RCRI score; the AUC ranged from 0.7 to 0.8 at different centers. And for VT ablation specifically, the IVTCC ablation group developed an IVT score. It's a machine learning algorithm using survival tree analysis to build a risk score to predict post-ablation death and VT recurrence, but I'll focus on the death part. They found that certain characteristics, such as EF less than 30% and VT storm, were more predictive of death. The AUC was remarkable at 0.8 and performed better than the PAINESD score. It's important to risk stratify our patients to properly identify suitable candidates for ablation, because if they're too sick, we shouldn't take them to the lab; some of them should just go straight to transplant. And many of these patients would be well served by prophylactic hemodynamic support, but we shouldn't be doing that for everybody. So back to our patient. At the outside hospital, he ended up getting an ICD.
But over the next year, he developed VT episodes and recurrent ICD shocks, and he subsequently underwent two endocardial VT ablations at different institutions. He had recurrent ICD shocks despite all of this, was on sotalol, amiodarone, and mexiletine, and was referred to UC San Diego, where I evaluated his candidacy for repeat ablation. Using the IVT score, he had favorable characteristics, with an EF greater than 30%, and he was considered lower risk for post-ablation death, so I decided to proceed to a repeat VT ablation without prophylactic hemodynamic support. They have a calculator online, it's vtscore.org. Now, how can we best plan our invasive approach? Is it left- or right-sided, endo or epi? In our lab, we developed an AI-based computer simulation algorithm to localize VT from just a 12-lead ECG, using a single beat. Without going into the details, this is a physics-based generative bootstrap model, and it consists of over a million computer simulations covering the entire heart to create a comprehensive pre-labeled data set. The system can display the VT exit on a 3D model, and it can distinguish important clinical features such as endo versus epi or LVOT versus RVOT. So here is an audience poll for fun. Where is the site of origin on this EKG? Is it endocardial inferoseptal LV; epicardial LV crux / MCV / posterior superior process; endocardial inferolateral LV; epicardial inferolateral LV; endocardial anterolateral LV; or epicardial anterolateral LV? So, our algorithm pointed to the inferolateral epicardial LV in an ischemic cardiomyopathy patient. There are features on this EKG that would suggest otherwise, right? Lead I is positive, and leads II and aVF are positive as well, so it's not an easy EKG. But because the algorithm predicted an epicardial origin, we planned for upfront epicardial access before we started heparin. So moving on, the next step is geometry creation. Although we didn't use this in this case, there is a case to be made that it could be used: a deep learning algorithm has been developed to segment the atrial chambers and construct geometries in CARTO, and I'm sure the ventricles will be next. The next step in the workflow is scar delineation, which is very important in characterizing the substrate. So how do we speed up voltage mapping, and how can we make it more accurate? Our lab developed a deep learning CT segmentation model to identify wall-thinning regions and, from them, scar channels. In our patient, it predicted a large inferolateral scar with a potential channel at the inferolateral basal LV, right here. Invasive voltage mapping correlated the predicted wall thinning with low voltage in all the scar regions and a deceleration zone at the basal LV. So how can we put it all together to guide invasive VT mapping? It's not easy to perform activation and entrainment mapping. With our AI-based ECG and CT models, we can integrate the VT exit with the potential scar channel to localize the critical isthmus even before inserting a single catheter. In our patient, even though he had a large scar with multiple potential channels, our AI ECG mapping localized the VT exit and highlighted the relevant channel to focus on. This can then be integrated into CARTO or ESI, ESI in this case. With this guidance, we were able to perform focused activation mapping of the epicardial inferolateral LV, which delineated the critical isthmus matching the AI-predicted channel.
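As a generic illustration of how a pre-labeled library of simulated beats might be queried with a measured 12-lead VT beat, here is a minimal sketch that matches by correlation and averages the best-matching labeled exit sites; the library size, the correlation rule, and every name below are assumptions for illustration, not the actual physics-based generative model.

```python
# Minimal sketch: match a measured 12-lead VT beat against a precomputed library of
# simulated beats, each labeled with an exit-site location. Purely illustrative; the
# real system is a physics-based generative model with over a million simulations.
import numpy as np

N_SIMULATIONS, N_LEADS, N_SAMPLES = 2_000, 12, 200   # small placeholder library

rng = np.random.default_rng(42)
library_ecgs = rng.normal(size=(N_SIMULATIONS, N_LEADS, N_SAMPLES))   # simulated QRS complexes
library_sites = rng.uniform(-50, 50, size=(N_SIMULATIONS, 3))         # labeled exit xyz (mm)
library_labels = rng.choice(["endo", "epi"], size=N_SIMULATIONS)      # endo vs epi tag

def localize(measured_beat: np.ndarray, top_k: int = 25):
    """Return the mean exit location and surface of the best-correlated simulated beats."""
    flat_lib = library_ecgs.reshape(N_SIMULATIONS, -1)
    flat_meas = measured_beat.reshape(-1)
    # Pearson correlation of each simulated beat with the measured beat
    lib_c = flat_lib - flat_lib.mean(axis=1, keepdims=True)
    meas_c = flat_meas - flat_meas.mean()
    corr = (lib_c @ meas_c) / (np.linalg.norm(lib_c, axis=1) * np.linalg.norm(meas_c) + 1e-12)
    best = np.argsort(corr)[-top_k:]
    site = library_sites[best].mean(axis=0)
    surface = "epi" if (library_labels[best] == "epi").mean() > 0.5 else "endo"
    return site, surface

beat = rng.normal(size=(N_LEADS, N_SAMPLES))      # placeholder measured VT beat
site, surface = localize(beat)
print(f"predicted exit near {np.round(site, 1)} mm, surface: {surface}")
```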
Back in the case, invasively you can see mid-diastolic signals spanning the whole cycle length. This was confirmed by concealed entrainment mapping, which you see here; we measured a PPI minus TCL of zero, and best of all, we terminated VT with ablation, as you can see here. He is now VT-free at 1.5 years of follow-up. So we performed a prospective study of AI ECG mapping at two centers in 30 consecutive patients, and we compared them to 30 historical controls, enrolled in reverse consecutive chronological order from before we had the technology. The results showed that AI ECG mapping was associated with significantly improved freedom from a combined primary endpoint of ATP, shocks, and death, compared to controls in Cox regression adjusted for PAINESD score, EF, age, and cardiomyopathy type. It was significant, with a hazard ratio of 0.25. This study is one of few to show that an AI-based technology has the potential to improve clinical outcomes for VT ablation. In conclusion, we EP clinicians have a critical role in guiding the development and training of AI-based tools. AI-based tools should be designed to make our jobs easier, to perform safe and effective VT ablation, and should be designed to best fit our clinical workflow and not further complicate it. There is so much potential to apply AI in more aspects of the VT ablation workflow that I couldn't cover here. Now I'll see you at the beach after. All right. Thank you. I'm sure we're going to have a very lively discussion. Our next presenter is Dr. Deepak Saluja from Columbia University, who is going to be speaking to us on automated prediction of isthmus areas in scar-related arrhythmias using AI. Okay. I'm trying to figure out how to make this work. Yep. Okay. Just hit start right there. Oh. There you go. Thank you. It's more complicated than it looks. Okay. I'm going to advance the slides. Thank you for having me, thank you for coming to the talk, and thank you for the invitation. I'm going to start here by showing this image. This is a scar-related atrial tachycardia that we mapped recently. We've mapped 28,000 points here; we've got LAT on one side, voltage on the other side, and electrograms maybe on a screen off to the side. And the job now is to decide where to ablate. I suspect that if I polled the room and asked all the ablators where you should ablate, everybody would have a slightly different answer. If I asked how much we should ablate, people would probably have slightly different answers too. We come to these cases with our own biases and our own experiences. Your answer might even change depending on how the information was presented to you: if you're looking at a sparkle map compared to a propagation map, that type of thing, your eyes might be focused on a different area. Now, while respecting that some of what we do in medicine is art and not quantifiable, our group has been interested in how we might quantify this process, which I would argue is inherently subjective. The way that we choose ablation targets for these cases is, for the most part, subjective or at least semi-quantitative. And why might we want to do that?
Well, I think there's data here, if we take the example of scar-related atrial tachycardia, that the recurrence rates for these arrhythmias are still pretty high, and they have remained high despite the passage of time and the development of new computational techniques. There are different reasons for this, and some of them are not going to be addressed by what I'm going to talk about, but I would hypothesize that at least one of the reasons we're not better at treating these arrhythmias is that we rely on a qualitative visualization of substrate, and perhaps if we could quantify the substrate, we would be better at treating them. Another reason may be that some of the mapping strategies we use for scar-related arrhythmias have not, in my opinion, been adequately quantitatively interrogated, and I'll get more into that in a minute. And finally, if we could put numbers to all of this, we may be able to use machine learning; we may be able to train a computer to help us identify substrate for ablation. Okay. To illustrate what I mean by quantitative mapping, this is a study that we published recently in ventricular tachycardias; these are ischemic VTs. On the top left here, I don't think I have a pointer, but on the top left is an isochronal map. It looks a little different from what you may be used to because it's homegrown software, but that's what it is: eight isochrons. Now, if you're going to ablate this arrhythmia in sinus rhythm, you would do the map, visually assess where the isochronal crowding is if you're using ILAM mapping, and ablate in those areas. Well, one of the things we showed was that it's fairly easy to get the computer to directly show you what the isochronal density is. The top image in the middle shows direct visualization of isochronal density; these are eight isochrons. You can see that, visually, you immediately have much better colocalization of the isthmus area, which is in black here, with isochronal density. We went a little further and asked, well, what if we change the number of isochrons? There's no reason we have to use eight. What if we use more isochrons? What if we use 1,000 isochrons? If you increase the number of isochrons to 1,000, you can see that there's much better localization of isochronal density with the isthmus area; that's top right. You can also see, near the apex of this left ventricle, a hint that there might be substrate in a different location. Because you have now quantified isochronal density, you can draw receiver operating characteristic curves and assess quantitatively how these strategies work. Our current strategy of ILAM mapping with eight isochrons had, in this study, a relatively modest area under the curve of 0.6. The area under the curve increases: with 1,000 isochrons it's 0.776, which is starting to get pretty robust. You can also use this type of analysis to quantitatively pick an isochronal density cutoff that gives you an optimal sensitivity and specificity. If you're going to go further and train a computer to propose isthmus areas, what you have to do is describe to the computer what it is about these points that we think makes them isthmus points. And of course, you have to do that with numbers, because that's how computers talk.
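Before moving on to the features, here is a worked sketch of the isochronal-density idea just described: compute a per-point density from local activation times and score it against annotated isthmus points with an ROC curve. The radius, the number of isochrons, and the random data are placeholder assumptions, not the published implementation.

```python
# Minimal sketch: per-point isochronal density from local activation times (LAT),
# then an ROC-style evaluation against known isthmus points. Illustrative only.
import numpy as np

def isochronal_density(points, lat, n_isochrons=1000, radius=5.0):
    """For each mapped point, count distinct isochron bins found within `radius` mm."""
    edges = np.linspace(lat.min(), lat.max(), n_isochrons + 1)
    bins = np.digitize(lat, edges)                        # isochron index of each point
    density = np.zeros(len(points))
    for i, p in enumerate(points):
        near = np.linalg.norm(points - p, axis=1) <= radius
        density[i] = len(np.unique(bins[near]))           # isochrons crossed nearby
    return density

def roc_auc(score, label):
    """Rank-based AUC: chance a random isthmus point outscores a random non-isthmus point."""
    order = np.argsort(score)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(score) + 1)
    pos, neg = label == 1, label == 0
    return (ranks[pos].sum() - pos.sum() * (pos.sum() + 1) / 2) / (pos.sum() * neg.sum())

rng = np.random.default_rng(1)
pts = rng.uniform(0, 60, size=(2000, 3))                  # mapped point locations (mm)
lat = rng.uniform(0, 300, size=2000)                      # local activation times (ms)
isthmus = (rng.uniform(size=2000) < 0.05).astype(int)     # placeholder isthmus annotation
dens = isochronal_density(pts, lat)
print(f"AUC with 1000 isochrons: {roc_auc(dens, isthmus):.3f}")   # ~0.5 on random data
```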
Now, from our mapping systems we're used to getting voltage, activation time, location, and some other things. But what we did was take the raw data from both CARTO and Abbott cases, take the raw electrograms, and use some homegrown software to calculate a set of features that describe important things about isthmus points, things you might want to know: the width of the signal, the number of deflections, how far the points are from scar, how far they are from lines of block, conduction uniformity, those types of things. Some of the techniques we use are borrowed from EEG analysis, which is what you see there; that's where the instantaneous energy operator comes in. And when you do that, what you have, for all of the points you've mapped, is what you might consider a feature vector: basically a column of numbers that quantitatively describes all of the features you calculated. There could be other features, by the way; there are features we might not have thought of that could make this process better. Now, once you have a feature vector for all of your points and you've annotated those points, in other words you've told the computer which of them lie on the isthmus because you've ablated them successfully, you're in a position to train the computer to identify isthmus areas. We did this with a neural network, as I'll show you in a second. I'll mention one more thing about this, without going into too much detail. It's important to note that the neural network configurations that are available, for the most part, and this is a simplification, take ordered data as input. Think of a JPEG, think of a picture: that's ordered pixels in a uniform orientation. The data we're using are point clouds, disordered point clouds. They're three-dimensional, first of all, and second of all, they are not ordered in any uniform way. So you have to have some way of dealing with that, and the way we dealt with it was by applying a graph representation to the point cloud. That is a mathematical representation; there are several ways of doing it, but it's a mathematical way of connecting individual points in space to their neighbors. In other words, the feature vector for a particular point is modified by the feature vectors of the points around it, which allows the computer to understand the spatial component of each individual point. Okay, so once you do that, you're in a position to train a network to identify isthmus points. We did this for 29 cases. We took 19 cases, calculated feature vectors for all of the points, about 140,000 points in total, and trained a network on those 19 cases. As for the network configuration, there are many different choices of networks, and we had processes for all of that which I won't get into. We then took the trained network and tested it on the remaining 10 cases. The input to this network is all the points with all of their features; the output is a probability map. So, as you can see on the top right, the raw output from this process is a map that gives you a probability for each point, and we're looking only at points that have a greater than 90% probability of isthmus identity. Now, because as electrophysiologists we're interested not so much in the per-point analysis but in the regional analysis, we want the computer to tell us what regions we should ablate.
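To make the graph representation concrete, here is a minimal sketch in which each mapped point is connected to its nearest neighbours and its feature vector is mixed with theirs before a stand-in classifier head scores isthmus probability; the neighbour count, the mixing rule, and the classifier weights are illustrative assumptions, not the actual network.

```python
# Minimal sketch: represent the mapped point cloud as a k-nearest-neighbour graph,
# aggregate each point's feature vector with its neighbours' (one "graph layer"),
# then score isthmus probability with a stand-in classifier head. Illustrative only.
import numpy as np

def knn_graph(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Indices of the k nearest neighbours of every point (excluding itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def graph_aggregate(features: np.ndarray, neighbours: np.ndarray) -> np.ndarray:
    """Each point's features become a mix of its own and its neighbours' mean."""
    return 0.5 * features + 0.5 * features[neighbours].mean(axis=1)

def classifier_head(features: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network's output layer: per-point isthmus probability."""
    w = np.full(features.shape[1], 0.5)                  # hypothetical weights
    return 1.0 / (1.0 + np.exp(-(features @ w)))

rng = np.random.default_rng(2)
points = rng.uniform(0, 60, size=(500, 3))       # 3D locations of mapped points (mm)
features = rng.normal(size=(500, 12))            # per-point feature vectors (width, deflections, ...)

nbrs = knn_graph(points)
smoothed = graph_aggregate(features, nbrs)       # spatial context folded into each point
prob = classifier_head(smoothed)
proposed = np.where(prob > 0.9)[0]               # keep only points >90% isthmus probability
print(f"{len(proposed)} points proposed above the 90% threshold")
```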
So we took the raw per-point output and applied density filters, and the final output of this process was a set of discrete region groupings, discrete area proposals; you can see that in blue on the bottom left, where two areas are proposed. You'll notice that we allowed the computer to propose multiple areas. That was by design, because we recognize that a single atrium can support more than one tachycardia. To assess the effectiveness of our isthmus proposals, we compared the distance between the centroid of the proposed isthmus and the centroid of the true isthmus; the centroid is the geometric center of the area. We also used the Dice coefficient, which is a measure of overlap. I'll just briefly show you the characteristics of the patients, maybe on the next slide: they were roughly 50 percent left atrial, 50 percent right atrial, and roughly 50 percent surgery-related, 50 percent post-ablation-related. Here's a figure of the results. If you focus on the open circle in the left panel A, the open circle tells us the performance of the network on unknown data, cases the network has not seen before. It identified the isthmus area within a median of 7.3 millimeters. There were, on average, 2.5 predicted groups per case, and the Dice coefficient was about 14 percent, so the overlap is about 14 percent. To show you what this looks like in some other cases, these are four other examples: the activation maps are on the left and the raw per-point output on the right, and we've grouped the discrete isthmus proposals in blue on each of the images; the true isthmuses are in red. So what do we take from all of this? The conclusion, I believe, is that it is possible to visualize features more quantitatively, and there are perhaps novel features that will help us ablate these arrhythmias better. It is possible to train a neural network to identify and propose isthmuses. This is preliminary work, and I think we'll do better with more data, which is the big issue in machine learning: the more data you have, the better it goes. So the next steps for us are to gather additional data; we've started a multicenter effort to gather data for atrial tachycardias as well as ventricular tachycardias, we're refining our network architecture, and we're identifying additional features. What we're hoping is that, with regulatory approval, we'll be able to start using this visualization and the machine-learning isthmus proposals in live cases. I'd just like to thank the very talented and smart people who are involved in this project, and thank you for your attention. Thank you very much, Dr. Saluja. Fascinating work. Our next presenter is Dr. Joseph Barker, from across the pond at Imperial College London, talking about whether AI can use an ECG to predict whether someone is at risk of a lethal cardiac arrhythmia. Brilliant. So thank you so much for that introduction. I have no disclosures. We've heard from our colleagues all the very clever things we can do once we're in the lab, but the key thing is that patients need to survive their ventricular arrhythmia to make it into the lab for us to be able to do these things. So I've been tasked to discuss whether AI-ECG can predict who is susceptible to lethal ventricular arrhythmia. Why does ventricular arrhythmia matter? Well, worldwide there are 5 million deaths per annum.
For the U.S. context, that's about 1,000 people per day; ChatGPT assures me that's twice the daily death toll of the American Civil War. And it's a tale of two halves, really: half of those who experience lethal arrhythmia present with lethal arrhythmia as the first feature of their cardiovascular disease, and half of all cardiovascular death ultimately terminates in lethal arrhythmia. In order to contextualize the literature, I think the most sensible way of doing that is this temporal clinical framework that I've come up with. Historically, over the last 50 years as an EP community, we've been trying to develop tools to predict who is susceptible to lethal arrhythmia, and that's been in a phenotype-driven heuristic with disease-specific markers: ejection fraction in ischemic cardiomyopathy, QTc in channelopathies, scar burden in cardiomyopathies, et cetera. But AI-ECG models are very data-hungry, these events are rare and for that reason require long follow-up, and therefore the current literature is positioned in all-comers, often retrospectively, on what is almost clinical exhaust data, billing data. Ultimately, that doesn't align with model deployment and the immediacy of risk that is relevant to the patient. So in terms of imminent inpatient scenarios, there's a landmark paper by Chin Lin et al. from the National Defense Medical Center, which shows that an AI-ECG rapid response system, developed so that clinicians go and see people at high risk of mortality, results in a 20% reduction in all-cause mortality during their admission. They've been in touch with our group to retrospectively analyze how these algorithms are working, and all I can say at this stage is that it may or may not be to do with ventricular arrhythmia. In terms of imminent outpatient risk, I know these are technically EGMs rather than AI-ECG, but averting unnecessary ICD therapy in self-terminating VT is, I think, a reasonable application, predicting things within minutes, and this is something our group has done with ICD EGMs. There's a lot of discussion in the literature around low positive predictive values, but this is one instance where a very high negative predictive value is of utility to the patient. Then you have the near-term outpatient: predicting ventricular arrhythmia within days to weeks. This is a paper I will go over during this talk, a Parisian paper by Fiorina and Carbonatti et al., and it provides the opportunity to escalate pharmacotherapy in high-risk individuals. And finally we have the more traditional long-term outpatient question, to ICD or not to ICD, that has plagued MDTs for a long time, which I'll also discuss today. So, starting with the near-term outpatient, it's always nice to contextualize with a clinical vignette. We have Mr. J, a 58-year-old trucker; as you can see, he's an American trucker. He's had palpitations, dizziness, and hypertension, but no syncope. He had a Holter monitor from his primary care practitioner: 5% PVC burden, first-degree AV block, a little bit of left bundle branch block, some couplets, but nothing of interest. And then, five days after the monitoring, he had an out-of-hospital cardiac arrest and died. So the question is, could AI-ECG have learned hidden patterns, not visible to clinicians, to change this outcome? This is the paper that I think is relevant here. It came out last month but was actually presented at HRS as a late-breaker two years ago.
It's a near-term prediction of sustained ventricular arrhythmia applying artificial intelligence to single-lead ambulatory electrocardiograms. It's a retrospective multicenter study across six countries with 250,000 patients; as I say, these things are very data-hungry. They have 14 days of ambulatory ECGs; they take day one of the recording and then predict ventricular arrhythmia within days two to 14. The primary endpoint was adjudicated sustained VT and VF. That's a good outcome compared to a lot of the literature: as it's clinically adjudicated, they have seen the rhythm, but it's still technically not lethal ventricular arrhythmia. The inputs they use for the model are age, derived ECG features, everything you would expect to find on your Holter report, plus HRV measures, a heart rate density plot, and raw ECG waveforms. It's a three-branch architecture that leverages CNNs and transformers to produce a near-term VT risk probability score. Overall, really exceptional performance, externally validating at 0.95, but, as I say, with positive predictive values in the territory of about 10%. You actually have to dig into the supplement to see the contribution of the AI-ECG alone, and, as you can see, I don't have a pointer, there is equivalent performance between the fully explainable Holter-derived measurements, the AI-ECG, and the heart rate density plot, but overall, when combined, there is incremental improvement. Within the paper they went on to perform some interpretability analysis to determine exactly how these things are working, using Grad-CAMs, showing that PVC burden within the heart rate density plot is significant, as well as some degree of QRS fractionation being relevant. These are post hoc measures of explainability, and the usual reservations around association rather than causation stand, even in AI-ECG research. They also looked at how the performance held up with less data going in, and the models deteriorated when less than six hours of data went in, which means there is some degree of autonomic signal being picked up by these models. But we really don't understand how these models are working, and certainly work we've done in our group suggests that there is some cost to explainability. So for Mr. J, our dizzy 58-year-old, five days after his Holter monitor we could have actually provided a rapid response and brought him into a safe area. He might have been advised to stop driving; we might have started a beta blocker, for example, or escalated any pharmacotherapy he was already on. Ultimately, this would have triggered investigation, and he might have had a temporary wearable defibrillator to avert his outcome. So, moving on to the more traditional to-ICD-or-not-to-ICD question, this is covered much more widely in the literature, but all these papers are still within the last year, and I'm going to discuss the AI risk estimation platform developed by my group. It's a 12-lead ECG-based technology that takes just the pure raw 12-lead ECG into a convolutional neural network with a discrete time-to-survival loss function, providing mortality risk over time. So ultimately we're not predicting whether someone is capable of having ventricular arrhythmia, but rather when they're likely to have it.
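To illustrate the kind of discrete time-to-event output described here, a minimal sketch of turning per-interval hazard predictions into a survival curve, together with the discrete-time negative log-likelihood such a head is typically trained with; the interval grid, the names, and the random placeholder hazards are assumptions, not the group's actual implementation.

```python
# Minimal sketch: a discrete-time survival head. A network outputs one hazard per
# time interval; the survival curve is the cumulative product of (1 - hazard).
# The hazards below are random placeholders standing in for the trained CNN's output.
import numpy as np

INTERVALS_MONTHS = np.arange(6, 61, 6)           # hypothetical 6-month grid out to 5 years

def survival_curve(hazards: np.ndarray) -> np.ndarray:
    """P(event-free up to each interval) from per-interval conditional hazards."""
    return np.cumprod(1.0 - hazards)

def negative_log_likelihood(hazards, event_interval, event_observed):
    """Discrete-time survival loss for one patient (the training objective)."""
    h = np.clip(hazards, 1e-6, 1 - 1e-6)
    ll = np.sum(np.log(1 - h[:event_interval]))            # survived earlier intervals
    if event_observed:
        ll += np.log(h[event_interval])                    # event in this interval
    else:
        ll += np.log(1 - h[event_interval])                # censored: survived it too
    return -ll

rng = np.random.default_rng(3)
hazards = rng.uniform(0.01, 0.05, size=len(INTERVALS_MONTHS))   # placeholder model output
surv = survival_curve(hazards)
print(f"predicted 5-year event-free probability: {surv[-1]:.2f}")
print(f"example loss (event at interval 4): {negative_log_likelihood(hazards, 4, True):.3f}")
```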
Coming back to the platform, these are mortality plots at the bottom, but what you can see is that over time there is dynamic risk change: this is the likelihood of survival over time, up until the red line where the patient does, in fact, die. The red dots are inpatient ECGs, the blue dots are outpatient ECGs, and it shows that these risks are dynamic. Specifically for ventricular arrhythmia, this is a retrospective multicenter study, again with 250,000 patients, but with 1.3 million 12-lead ECGs. It's a deep learning model trained on 12-lead ECGs alone; the primary endpoint is death, but it is then ultimately tuned to VT and VF ICD-10 codes. Our five-year predictions, which is where we have an external validation cohort, show good performance; just remember, this is not two-week prediction, this is five-year prediction. Our AUCs are around 0.8, externally validating to around 0.7, but you can see the vastly different positive predictive values depending on the underlying prevalence of ventricular arrhythmia within the two datasets: on the top line you have a secondary care, Austin-based cohort, and on the bottom line a UK voluntary cohort. One of the things we've been able to show is specifically the contribution of the AI-ECG. In blue there you have the AI risk estimation performance, which outperforms all the traditional measures, including ejection fraction, in red there. But when added all together, there is an incremental benefit in predictive performance. So what's next in the literature? Going back to the fact that this is a tale of two halves, that half of people present with lethal arrhythmia and half ultimately die of lethal arrhythmia, we have the option to widen the application of these tools, and that involves population screening, ultimately designed to trigger investigations; these would probably be best deployed on wearables and should ideally be robustly applicable to global contexts. As you can see from my immense PowerPoint skills, I'm leaning towards narrowing the applications of these technologies, actually reverting to the traditional clinical heuristic and deploying them within small, well-phenotyped datasets, ultimately to refine decision-making within the Bayesian sequence of investigation that we do: in ischemic cardiomyopathy, after the ejection fraction, what's their AI-ECG score, and in their MDT, should they have an ICD? Twelve leads are absolutely fine, and we need to make sure these are deployable and contextualized to the setting in which they're being deployed. But ultimately, it's now time for the studies to move from focusing on how well they predict to actually moving the needle and showing that there are mortality benefits. So, some dirty washing from the literature, not these papers in particular: the end-to-end AI-ECG contribution is often not clear, and it should be brought to the forefront. This is a difficult topic to study because of the scarcity of events, which results in unclear target populations, as in both of the studies I've described today; often retrospective cohorts, because of the time delay to accrue these outcomes; and surrogate endpoints, the lack of lethal ventricular arrhythmia itself, with clinically adjudicated sustained VT in the first study and, in our case, ICD-10 codes.
But ultimately there are problems that are not going to be solved within Python or within the technological realm, such as the acceptability of integrating these workflows. The positive predictive values are around 10%: are we comfortable alerting nine other people, telling them that they're going to die within two weeks and bringing them into hospital, to save Mr. J? And there is a real lack of explainability in these models, and we need to decide, as a clinical body and together with our patients, how to handle that; so if there are any survivors of ventricular arrhythmia or patient groups here, we have formed a public and patient advisory group at Imperial, and we'd be very happy to be in contact. So can AI-ECG predict lethal arrhythmia? The answer is yes. It's time to demonstrate moving the needle: the technology is here, and prospective evaluations are limited by the scarcity of outcomes, but the fidelity of the outcomes really matters. I just want to end with a thank you to all the clinicians in the audience, because ultimately the labels you're providing are being used to develop these technologies, so please keep doing what you're doing. Thank you. Excellent presentations. We now have time for some questions. For those of you who'd like to ask questions, there's a microphone at the back; otherwise we have some questions submitted via the app, and you can continue to submit those, and we will answer them. Our first two are very similar, and I think they're directed to Dr. Bars; they're looking at the use of pulsed field ablation and asking how well you can target tailored ablation with PFA catheters using your software. Yes, we are beginning to work on it, because at the beginning it was only modeled for small electrodes with a specific inter-electrode spacing, and catheters with wider spacing do not have the same characteristics, but we can find a way to apply it, because it could be a way to avoid some unnecessary applications and to adapt the ablation set. With PFA it's probably too easy, too safe, and too fast to ablate, and maybe AI mapping could be a way to avoid some applications and to adapt the set. So yes, we are working on it, and I think it's possible, but it would be a different way to use it. And the next question, Dr. Saluja, was for you: how many cases did you use to train your prediction model, and how many more do you think it will take to enhance the model? I don't have the exact number of cases, I just have the number of samples. We try to have a representative sample of patients, which is why patients are coming from the U.S. and Europe, but we don't have other countries for now. The fact is that you have to freeze your version to get it approved, so we have to perform iterations and improve the database successively. It takes time, but yes, I think improving the database never ends. And a follow-up on what you said: a Tesla is constantly improving, right, but we have to freeze what we do while you keep learning. We could do it, but because of regulatory concerns we can't; it will be possible at some point. Dr. Saluja, do you mind talking about that for your model? Sure, yeah, the training set was about 140,000 electrograms in 19 cases. How many do you need? This is really the sticking point with machine learning: the more data you have, the better things get.
I think you ideally need hundreds, if not thousands, of cases, and the reason we used a per-point strategy as opposed to a per-case strategy is that you can leverage the large number of points in a case. With a per-point input, as opposed to taking the image of the map and feeding that into the network, you need tens of thousands of points rather than tens of thousands of cases. So we are up to about 100 ATs that we have annotated and analyzed, and I'm hoping we'll get better results in the order of 200 to 250; I think that would be a good start. Very good, and then Dr. Barker, for the lethal arrhythmia ground truth, what was the definition of ventricular arrhythmia that you trained your model on? So in our work, in training, we initialized the weights based on mortality, because mortality is a reasonably robust outcome; it's very difficult for it to be incorrectly labeled within these retrospective datasets. But within the tuning aspect, we use ICD-10 and ICD-9 codes for VT and VF, which have their shortcomings; technically, non-sustained VT is in there. But subsequent work that we've done and are doing, not yet published, suggests that because we have this pipeline that involves mortality, these seem to be robust ventricular arrhythmia outcomes. Very good, and then for Dr. Bars again: for the Tailored AF study, what was your ground truth for the Volta classification, and could you also comment on explainability versus interpretability? Excuse me, explain what? So, what did you use as a source of truth, as the absolute truth, for the classification of the arrhythmias, and can you comment on whether each classification is explainable, or is it just taken at face value as dispersion? You are talking about how we explain the nature of the driver of the dispersion? I'm not sure; perhaps that person could come and clarify their question. Yes, because in the historical paper published in JACC about dispersion, we performed numerical simulations with rotors, with a PentaRay at the center of the rotor, and we observed some continuous activity between different bipoles, like continuous electrical activity in a defined area. So yes, that numerical simulation is an explanation. I don't know if that was the question. I wonder, for both Dr. Ho and Dr. Bars: machine learning is sort of a black box, but you're doing ablation based upon what you're finding, so are you learning some kind of physiology that maybe you didn't know before about persistent AF and ventricular tachycardia? Any insights from the work? Yes, that's a great question. I've learned so much from this work, because we have the ability to really characterize the critical isthmus with this data, and I've learned a lot about the EKG. We always hypothesized that the ECG reflects the exit site of the critical isthmus, and it's actually very interesting.
We've seen some cases where we have two different VTs, one with a superior axis and one with an inferior axis on the EKG, and they share a critical isthmus. So on our AI-ECG map we get these two different exits far apart from each other, and when we actually look at the wall thinning and see this channel in between them, you know that's really going to alter our ablation strategy. For scar-related re-entrant VT, it's pretty well established that you don't just go for the exit site; it's really important to understand and delineate the scar, because that's going to be where the money is. And on our side we learned a lot too, especially in terms of pathophysiology, because we observed that localized AT and AF can be two faces of the same coin: we cardioverted some patients and observed that a dispersion area in AFib could be, a few minutes later, a localized re-entry with dispersion too. So that's one thing we learned, the link between AF drivers and localized tachycardia. We also learned to make the difference between fractionation and dispersion: for example, areas of collision can show fractionation, but they are not sustained over time and not consistent, so this way of understanding what target to have in mind gives us some information about fractionation versus a real AF driver. Fascinating. Well, thanks again to all the presenters for a really fascinating session. We have to bring the session to a close, but thank you again.
Video Summary
The session focuses on the integration of AI in cardiac health, specifically in diagnosing and treating arrhythmias via catheter ablation. Dr. Christine Albert from Cedars-Sinai and Dr. Sanjay Gupta from St. Luke's introduce the session, which includes several presentations emphasizing AI's potential in electrophysiology. Dr. Clement Bars discusses the Tailored AF Trial, highlighting AI's role in guiding atrial fibrillation ablation and illustrating AI's capability in identifying high-complexity electrical patterns to improve treatment outcomes. The trial underlines AI's effectiveness in enhancing the accuracy of detecting arrhythmia sources, although challenges like data heterogeneity and procedural timing persist. Dr. Gordon Ho demonstrates AI's potential in improving the procedural outcomes of ventricular tachycardia (VT) ablations, showing that AI can effectively predict VT origins and critical isthmus points using ECG data, thus enhancing patient outcomes. Dr. Deepak Saluja focuses on quantifying and interpreting electrophysiological data using neural networks to train AI models, aiming to streamline and enhance isthmus identification for better mapping accuracy. Dr. Joseph Barker discusses using AI to predict susceptibility to lethal arrhythmias, emphasizing the need to validate AI's clinical utility in predicting imminent risks. The overall discussion emphasizes how AI can transform cardiac arrhythmia management, highlighting its capabilities in automating complex diagnostic procedures, though noting the ongoing challenges in data collection, training, and regulatory approval.
Keywords
AI integration
cardiac health
arrhythmias
catheter ablation
electrophysiology
atrial fibrillation
ventricular tachycardia
neural networks
electrophysiological data
cardiac arrhythmia management