#HRS2025 YIA Competition - Clinical EP Finalists
Video Transcription
Good afternoon, everybody; why don't we get started. My name is Ravi Ranjan, and I welcome you all to the Young Investigator Award Competition. This is the clinical EP part of the award competition, and I welcome you on behalf of the other committee members: Mikhail Kailu, Faisal Saeed, Stavros Stavrakis, and Anna Feniger. Let me first start by congratulating all the finalists. You have done great work; that's why you are here. A round of applause for all the finalists. You had some tough competition, but you're here, so your work really stood out, and you should feel good about it. It is a competition, so we have to follow some rules. You each have 10 minutes to present, followed by 10 minutes of questions and answers. The questions will be asked by the judges on the panel here; in years past we just haven't had time for audience questions, so we'll mostly keep it with the judges. Please do keep your presentation to 10 minutes. We don't want to go over. There will be a timer that starts when you start talking; it turns yellow when you have two minutes left and red when you have one minute left. With that, why don't we get started? The first presenter is Burman Zaidabadi from Imperial College London. The title of his presentation is "Artificial Intelligence Enhanced ECG Platform for Comprehensive Cardiac Screening and Risk Prediction Using ECG Images."

Thank you. Welcome, everybody. My name is Burman Zaidabadi, and I am excited to discuss my work on an AI ECG platform for comprehensive cardiac screening and risk prediction using ECG images. Why are we interested in risk prediction? In cardiology, risk prediction can profoundly affect lives: by identifying those at high risk earlier on, you can allow for timely and potentially life-saving treatments. Here are two examples of widely used risk stratification tools.
CHA2DS2-VASc, which assesses stroke risk in patients with atrial fibrillation, and left ventricular ejection fraction, which can be used for primary-prevention ICD decisions. These can significantly improve outcomes. However, if you look at the models' discriminatory power, as measured by the C-statistic, it's actually relatively poor, ranging from 0.58 to 0.69. And this is where AI ECG comes in. We've already seen AI ECG outperform expert clinicians in a variety of tasks: in blue, diagnostic tasks, such as detection of left ventricular dysfunction and even sepsis, and in red, prediction of future events, such as mortality and future cardiovascular diseases. Last year, we saw the first trial in any domain to show that artificial intelligence can reduce mortality. The team in Taiwan conducted a randomized controlled study in which the intervention arm received an AI ECG mortality score, with an alert created for clinicians for patients in the high-risk group, while the control arm received usual care. The results were quite staggering. For all patients, the cardiac death rate was significantly lower in the intervention arm compared to control, with a hazard ratio of 0.27, and this drops to a hazard ratio of 0.07 in the high-risk group. Again, this is the first trial in any domain to show that artificial intelligence can reduce mortality. And we as a team have gone beyond this by developing the AI risk estimation platform: now, when someone is identified as being at high risk of mortality, we have a suite of models that can predict future cardiovascular diseases with actionable outcomes, such as prediction of future heart failure and future atrial fibrillation. However, there is a fundamental problem. All of the models I've described up until now, and actually over 90% of models that currently exist, use natively digital signals as input; it's a numerical input that's fed into these models. Why is that an issue?
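As an aside, the C-statistic quoted throughout this talk can be made concrete with a short sketch. For a binary outcome it reduces to the AUC: the fraction of (event, non-event) pairs in which the event case received the higher risk score, with ties counting half. This is a minimal pure-Python illustration; the function name and toy data are illustrative, not from the talk.

```python
def c_statistic(scores, events):
    """Concordance: fraction of (event, non-event) pairs in which the event
    case received the higher risk score; tied scores count as half."""
    pos = [s for s, e in zip(scores, events) if e]       # event cases
    neg = [s for s, e in zip(scores, events) if not e]   # non-event cases
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Perfect discrimination gives 1.0; a random score hovers around 0.5.
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

A tool with a C-statistic of 0.58, as cited for some existing risk scores, ranks the event case higher in only 58% of such pairs, barely better than chance.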
Well, in order to use signals, you need to be able to store them, and for that you require digital infrastructure. However, globally, the majority of institutions store ECGs on paper or as images. Putting that all together: there's a lack of digital infrastructure globally, the majority of models require that infrastructure, and that simply creates a barrier where we can't use the models. And this is where image AI ECG comes in. What I'm proposing is to use images directly as the input to these models. So rather than the current gold standard, which uses a signal as a numerical input, what I'm proposing is to take an image of an ECG and input that into these models. This can potentially open the door for earlier adoption in institutions that lack digital infrastructure. First, I needed to establish whether this would work: would an image model be comparable to a signal model for the task of predicting mortality? In terms of datasets, for the derivation dataset I used the Beth Israel Deaconess Medical Center cohort, a US secondary care cohort of 190,000 patients. For external validation, I used two Brazilian cohorts, the SaMi-Trop Chagas cardiomyopathy cohort and the primary care cohort CODE, as well as the relatively healthy UK Biobank, which is a volunteer cohort, and finally a Shanghai secondary care cohort. You take an ECG image, feed it through a neural network, and the output is a discrete-time survival curve that can predict an event up to 10 years into the future; in my case, it's mortality. In internal validation, we see on the y-axis the two inputs, image versus signal, and on the x-axis the model's performance measured by the C-statistic. Here you can see, in internal validation, comparable performance between image in red and signal in blue. This is similar in the first external validation, SaMi-Trop, with comparable performance between the two inputs, and again in the CODE cohort.
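The discrete-time survival output described here can be sketched independently of the network: the model emits one hazard logit per time interval, and the survival curve is the running product of one minus each interval's hazard. A minimal sketch, with hazard logits assumed as constants rather than produced by a real ECG model:

```python
import math

def survival_curve(hazard_logits):
    """Turn per-interval hazard logits into a discrete-time survival curve:
    h_t = sigmoid(logit_t), and S(t) = product over k <= t of (1 - h_k)."""
    curve, s = [], 1.0
    for z in hazard_logits:
        h = 1.0 / (1.0 + math.exp(-z))  # hazard of the event in this interval
        s *= 1.0 - h                    # probability of surviving through it
        curve.append(s)
    return curve

# Ten yearly intervals with a constant low hazard: a smoothly declining curve.
ten_year = survival_curve([-3.0] * 10)
```

In the real platform the logits would come from the image network; here they are constants chosen only to illustrate the shape of the 10-year curve.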
So up until now, I've shown that image and signal can be comparable for the task of predicting mortality. The next consideration was the impact of image resolution on model performance, as this significantly impacts computational cost. To test this, I created 11 different image resolutions of the same ECGs, from 310 by 868 pixels all the way down to a single pixel. And the results were quite impressive. As the resolution decreases, model performance does indeed decrease; however, it's not a linear relationship. Even a 27 by 76 pixel image has a C-statistic greater than 0.7, showing incredible capability for such a blurry, pixelated image. More impressively, even a single pixel provided some predictive power, with a C-statistic greater than 0.5. But it's more important to see whether the model can perform well on poorer-quality photographed images, because in reality, images are not likely to look like this. In a clinical setting, they're likely to have various distortions applied to them through being photographed. To replicate this, I printed out 1,000 ECGs, applied various distortions to them, photographed them, and created a dataset of 1,000 images. And to improve my model's generalizability to these images, I applied various transformation techniques, such as adding Gaussian noise, overlaying on scrunched-up paper, rotating the images, and, in the final format, even shuffling the leads around. Altogether, there were 1,500 unique transformations in my training set. So now the model has been exposed to various transformations, but the task remains to predict mortality. And across these transformations, you see comparable performance between image and signal being maintained, this time on the poorer-quality photographed dataset. So you can take in ECG images of various quality, even at lower resolution, feed them through a neural network, and predict mortality with high accuracy.
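Two of the manipulations described here, block-average downscaling (the resolution experiment) and additive Gaussian noise (one of the training-time transformations), can be sketched in pure Python on a grayscale image stored as a list of rows. This is an illustrative sketch; the actual pipeline, factors, and noise parameters are not specified in the talk.

```python
import random

def downscale(img, fy, fx):
    """Average-pool a grayscale image (list of rows of floats) by
    integer factors fy (rows) and fx (columns)."""
    h, w = len(img) // fy, len(img[0]) // fx
    return [[sum(img[y * fy + dy][x * fx + dx]
                 for dy in range(fy) for dx in range(fx)) / (fy * fx)
             for x in range(w)]
            for y in range(h)]

def add_gaussian_noise(img, sigma=10.0, seed=0):
    """Add Gaussian noise to every pixel, clipped to the 0-255 range."""
    rng = random.Random(seed)
    return [[min(255.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in row]
            for row in img]

# Toy "ECG image": shrink a 108x304 grid to 27x76, as in the experiment,
# then corrupt it the way a training-time augmentation would.
ecg = [[float((x + y) % 256) for x in range(76 * 4)] for y in range(27 * 4)]
small = downscale(ecg, 4, 4)
noisy = add_gaussian_noise(small)
```

Rotation, paper-crumple overlays, and lead shuffling would be further transforms in the same spirit, each applied with some probability during training.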
Next, could I go beyond mortality prediction? Could I detect disease, as well as predict future cardiovascular diseases? First, for detection of structural heart disease: in this case, it was no longer a survival curve but detection of whether the disease is present or not, so binary classification. For detection of low left ventricular ejection fraction, internal validation shows a strong AUC of 0.896, with actually a slight improvement in the Shanghai cohort, at an AUC of 0.92. Similarly, for detection of moderate or severe valvular heart disease, which comprised detection of moderate or severe aortic stenosis, tricuspid regurgitation, mitral regurgitation, and aortic regurgitation, you see strong performance in internal validation, and once again a slight improvement in external validation in the Shanghai cohort, but this time with the strongest performance shifting from aortic stenosis detection to tricuspid regurgitation detection, potentially reflecting the higher prevalence of that condition in that cohort. Next, predicting future cardiovascular diseases: in my case, I predicted future complete heart block, heart failure, ventricular arrhythmia, and atrial fibrillation, all of which achieved a C-statistic greater than 0.75. If you go back to the start of the presentation, I showed that existing risk stratification tools have C-statistics of about 0.58 to 0.69. This generalizes well for prediction of future cardiovascular disease in the UK Biobank, a healthy volunteer cohort, this time with the strongest performance being prediction of future complete heart block; the wider confidence intervals reflect the lower incidence of these diseases in these datasets. So you can take in ECG images of various quality, even at lower resolution, to predict future cardiovascular disease as well as mortality, and to detect undiagnosed structural heart disease. Next, I applied saliency mapping.
What saliency mapping does is look at the areas of the image that the model focuses on the most when making its predictions. For the same ECGs, I looked at prediction of mortality, and you can see a more global feature focus when predicting mortality, predominantly on the whole QRS complexes and all waveforms. In contrast, for the same ECG, when the model is predicting future atrial fibrillation, it focuses predominantly on the P waves, as you'd expect a clinician to. So finally, could I put all of the work I've described up until now into a single platform that can potentially be deployed and implemented in clinical trials? Here's a demonstration. You select the patient type, inpatient or outpatient, and upload a 12-lead ECG image. It goes through preprocessing and quality checking, and as output you get structural heart disease screening: in this case, no signs of valvular heart disease and a normal left ventricular ejection fraction, and prediction of future cardiovascular diseases: low risk, meaning the first and second deciles. Then you can look at the preprocessed image to see whether it looks okay and, if interested, apply the saliency maps to look at what the model focused on the most when making its predictions. Again, prediction of future atrial fibrillation has its most precise focus on the P waves. So in summary, I've shown that image and signal can be comparable; that performance can be maintained on lower-resolution images, as low as 27 by 76 pixels for mortality; and that performance can be maintained on poorer-quality photographed images through the transformation techniques I applied. I've then gone beyond mortality to predict future cardiovascular diseases with actionable outcomes, as well as to detect undiagnosed structural heart disease.
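Saliency mapping as described here, asking which pixels drive the prediction, can be illustrated model-agnostically with finite differences: nudge one pixel, re-score, and record how much the output moves. In practice this is computed far more efficiently by backpropagating gradients through the network, but the idea is the same. The toy scoring function below is purely illustrative and stands in for the real ECG model.

```python
def saliency_map(score_fn, image, eps=1e-4):
    """Finite-difference saliency: |d score / d pixel| for each pixel of a
    grayscale image (list of rows). Real pipelines use gradient backprop."""
    base = score_fn(image)
    out = []
    for i, row in enumerate(image):
        out_row = []
        for j in range(len(row)):
            bumped = [r[:] for r in image]  # copy, then nudge one pixel
            bumped[i][j] += eps
            out_row.append(abs(score_fn(bumped) - base) / eps)
        out.append(out_row)
    return out

# A toy "model" that only reads the top-left pixel lights up only there,
# just as an AF model is expected to light up over the P waves.
score = lambda img: 3.0 * img[0][0]
heat = saliency_map(score, [[1.0, 1.0], [1.0, 1.0]])
```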
And then I've validated these on four transnational cohorts, which amount to over 800,000 patients combined. I've combined this all into a single platform, as I showed earlier, that can be used for future clinical trials. So potentially, I've allowed for earlier adoption of AI ECG by institutions that lack digital infrastructure, and democratized this technology for them. Finally, I'd like to thank my mentors and the team at Imperial College London, as well as the National Heart and Lung Institute and the British Heart Foundation for their support. And thank you all for listening.

Anna, you want to start off with questions?

Thank you so much for this presentation. Outstanding work. My question for you is: maybe take it the next step. What are clinicians going to do with these results, and how do you think this should be implemented? Should this be applied to all ECGs, or select ECGs? And what are clinicians going to do with it?

Thank you for the question. I think there are evidence-based therapies that currently exist for all the actionable outcomes. So what these models can do is bridge that gap of late detection, when people present acutely, and provide an added data point to complement and augment clinical decision-making. But the question of what to actually do when a patient is high risk requires prospective validation, as well as cost-effectiveness analyses, to identify when to monitor the patient and for whom it's most beneficial.

That was outstanding work. Just a quick question. You showed the single pixel had a C-statistic higher than what was expected. So are you picking up some noise there and presenting it as real? How do you comment on that?

Thank you. Very good question. It's quite alarming initially to see this. In this slide, you can see that I propose that a single pixel is potentially picking up just heart rate and QRS amplitude.
Because as you downscale the image, the faster the heart rate and the greater the QRS amplitudes, the darker that final pixel becomes. So I evaluated this and applied a Cox model, looking at heart rate and QRS amplitude as predictors of mortality. And actually, it does okay, with a C-statistic of 0.65. But even this incredibly blurry image has a stronger performance. I think what this suggests is that the model can pick up subtleties, changes in the waveform and spatial variation, that we as humans cannot comprehend. But potentially, there is some beneficial noise to that as well.

You presented training of the model on high-quality EKGs, right? And that had a certain accuracy. And then you presented a model trained on lower-quality EKGs, which had a certain, slightly lower, accuracy. So in practice, how would you train the model? Would you train it on a combination of good quality and low quality, or what model would you use as a baseline? And if you then input EKGs of different qualities, how would that affect the accuracy of predictions?

Thank you for the question. I think it all depends on the use case. If good-quality PDFs are available, then it's worth using the more accurate model. But in the validation sets I've been exploring, the model trained on all these various distorted images actually performs relatively well on both the high-quality PDFs and the poorer-quality ones. So I think the best approach is to combine them and have a bigger training set containing all the possible combinations, to allow for reliable predictions on any format that's fed in, rather than having separate models for different images.

I have a question. What more can you do to improve the predictions of these models? Do you think a larger dataset would help? And other modalities that are routinely available to a clinician: how do you feel about incorporating those to improve the future predictions of these models?

Thank you for the question.
What I've shown uses a single ECG, and in fact the first ECG per patient. From there, the next step is potentially to use sequential ECGs; we have a wealth of ECGs available per patient. Alongside that, you can combine them with other electronic healthcare record information and other imaging modalities. But already there's a substantial amount of information in the ECG alone, so it will be interesting to see, in the next steps, how performance varies with different input data.

Congratulations, great work. Thank you. I really enjoyed the website that you created; I think it's fantastic. Thank you. Are there any ethical implications or considerations of people uploading their own waveforms, which have predictive analytics attached and potentially identifiable data within the waveform? Have you considered that? What are your thoughts, and how would you mitigate the downstream implications?

Thank you. Very, very good question. The website was predominantly for demonstration purposes; it's not available for people to upload their images. I think it requires very precise quality checking up front, where you ensure that the image doesn't contain any patient-identifiable information; that requires a separate model. From there, we need to actually see the effect this can have on clinical decision-making. Perhaps there's too much information in what I've shown; clinicians would perhaps prefer selecting a model first and then making a prediction. But when we have robust data showing that it's HIPAA- and GDPR-compliant, that it's acceptable, and that it doesn't cause any adverse harm, then I think a platform like this can potentially be in the pockets of clinicians, where, if they have a clinical suspicion, they can take a photo of an ECG and, in a matter of seconds, it can guide their clinical decision-making. Thank you.

Thank you. I would take the ethical implications a step further.
What if you made it available directly to patients, and they uploaded their own EKGs? Is that a consideration?

Thank you. I think it's important that both parties be involved in the developmental phase, taking into account what a clinician would want to see from the models, but also the patient, because potentially the patient might not even be happy for their ECG to be uploaded to a system such as this. Understanding the outputs of these models requires education as well. So I think only select people should be allowed to use these models, and, if possible, we should conduct focus groups and patient-public involvement to see what everyone has to say about this.

Great. Thank you very much. A round of applause. Our next presenter is Dr. Zhang from Johns Hopkins University, and the title of her presentation is "Genotype-Specific Digital Twins for Accurate VT Ablation Targeting in ARVC."

Hello everyone, I'm Yingnan Zhang from Johns Hopkins University School of Medicine. It is a true honor to be selected as an HRS Young Investigator Award finalist, and I'm very thrilled to share my research with all of you today. My work focuses on developing genotype-specific digital twins to improve arrhythmia ablation targeting in ARVC. So first of all, what is a digital twin? According to the National Academies, a digital twin is a set of virtual information constructs that mimics the structure, context, and behavior of a natural, engineered, or social system. There is consistent data exchange between the physical system and its virtual counterpart, which enables the digital twin to make predictions of important features that go beyond existing data. In the field of precision medicine, digital twinning is an emerging technology that mimics the temporal and spatial characteristics of a patient's organ. By simulating the effects of treatment on the virtual replica of the patient, digital twins enable predictive modeling and treatment optimization.
These simulations allow us to make more informed clinical decisions, perform more custom-tailored therapies, and minimize patient risk. Building on this concept, our team developed heart digital twins, which provide a virtual replica of the heart's electrical activity, and we have been using them to identify VT circuits and guide clinical catheter ablation, particularly in ischemic heart disease. In my study, we focused on ARVC, an inherited heart disease with many genetic variations; this cardiac condition leads to VT and sudden cardiac death in young adults. One of the primary treatments for VT in ARVC is catheter ablation. However, finding the exact location to ablate requires extensive substrate mapping, which is a very time-consuming process. Also, patients who are not hemodynamically stable enough may not tolerate VT induction, which makes ablation targeting even more challenging. Here, heart digital twins can address most of these challenges: firstly, by predicting the exact location to ablate based on circuits that are digitally induced, even for patients in whom VT induction is contraindicated; and secondly, because all these simulations and predictions happen before the procedure, clinicians have more time to plan and optimize. Also, the predicted ablation targets can be incorporated seamlessly into the EAM for real-time visualization during the clinical procedure. Most importantly, the nature of simulation allows us to consider additional patient-specific factors in our prediction, such as genetic factors. So in this study, we aimed to present a novel technology named GenDirect that non-invasively and pre-procedurally identifies the optimal ablation targets in an ARVC cohort. Notably, this is the first time that patient-specific genetic profiles have been incorporated into organ-scale computational models for translational research.
We also aimed to demonstrate the predictive capability of GenDirect by comparing its predictions with clinical ground truth in a blinded fashion for both the index and any redo procedures. Here are the inclusion and exclusion criteria of my study. We included patients with a confirmed diagnosis of ARVC and an ablation history. We also required LGE-MRI to exhibit only right ventricular structural remodeling, and genetic testing results to confirm either a PKP2 or gene-elusive genotype. For validation, we required comprehensive clinical ablation data, including the mapped surface from the EAM, the ablation lesion points, and the EP report. We excluded patients with poor image quality, left ventricular involvement, absence of VT inducibility during the procedure, or incomplete ablation data. Finally, we retrospectively included 30 ARVC patients: 15 PKP2 and 15 gene-elusive. Twenty-five out of 30 had a single index ablation, while the other five required redo ablation due to VT recurrence. All the redo procedures occurred within 12 months after the index ablation, with a mean time interval of only 7.5 months; this short interval suggests that these recurrences were likely not due to regular disease progression. We constructed a genotype-specific digital twin for each patient in the cohort, using two types of clinical data. From clinical images, we first performed 2D myocardial segmentation and identified three tissue types in the right ventricle (non-fibrotic, diffuse fibrotic, and dense fibrotic), based on which we reconstructed the 3D biventricular geometry. We also identified nine uniformly spaced pacing sites on the right ventricle, from the basal to the apical planes. We then referred to the genetic testing results and populated each digital twin with the corresponding genotype-specific computational cell model.
You can see that the PKP2 and gene-elusive cell models have different EP properties, which means that the genetic information has already been incorporated into the digital twins to affect the final ablation prediction. We then perform in silico rapid pacing at each selected pacing site and identify all the possible VT circuits this digital twin can harbor, along with their corresponding ablation targets. Next, I would like to show you how we use GenDirect to pinpoint ablation targets. In this patient case, by pacing from the apical lateral pacing site, we are able to identify a figure-of-eight re-entry at the basal anterior RV wall. From this high-resolution map, we can easily identify the critical components of this VT circuit, based on which we performed the in silico virtual ablation, which is the orange tissue on the right side. We tested it using the same pacing protocol to see if there were any meandering or emergent VTs. If non-inducible, we finalize this virtual ablation, and this set of targets is ready to be exported to the clinical EAM system. However, in some cases, after a round of virtual ablation, the digital twin remains inducible and there are emergent VTs. For example, in this patient case, when pacing from this location, there are two small VT circuits at the base of the RV. After a round of virtual ablation to address them, shown in the middle, a new VT emerges in between the two islands of ablated tissue. To address that, we added additional tissue to the previous lesion set, and this new set of targets renders the digital twin non-inducible. We then compared our GenDirect-predicted targets with the clinical ground truth by co-registering the clinical EAM surfaces with the corresponding digital twin. The GenDirect targets are in orange, and the clinical ablation lesions are in dark red.
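The lesion comparison just described, after co-registration, comes down to a set operation on discretized lesion volumes. The talk does not spell out the exact metric, so this sketch assumes a simple voxel-overlap ratio (the fraction of predicted-target voxels falling inside the clinical lesion set); the coordinates and values are illustrative only.

```python
def overlap_ratio(predicted, clinical):
    """Fraction of predicted-target voxels that lie inside the clinical
    lesion set. Voxels are hashable coordinates, e.g. (x, y, z) tuples."""
    predicted, clinical = set(predicted), set(clinical)
    return len(predicted & clinical) / len(predicted)

# Toy co-registered volumes: 3 of the 4 predicted voxels overlap.
pred = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
clin = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (9, 9, 9)}
ratio = overlap_ratio(pred, clin)  # -> 0.75
```

Related volume comparisons, such as predicted versus clinical lesion size, would be computed on the same voxel sets with `len()` scaled by the voxel volume.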
For the 25 patients with only a single index ablation, we found a high degree of concordance between the two sets of lesions, with a mean overlap ratio of 0.85, which highlights the predictive capability of GenDirect in non-invasively pinpointing effective ablation targets. We also compared the volumes of the two lesion sets, and we surprisingly found that the GenDirect targets are significantly smaller than the clinical ablation volumes, which suggests a more precise and tissue-preserving ablation strategy. We also found that GenDirect can capture all the possible VT circuits and ablation targets at once, potentially reducing the need for repeat procedures and rehospitalization. For example, in patient 18, GenDirect pinpoints two clusters of ablation targets, shown in orange, one on the posterior wall and one on the anterolateral wall. These coincide with ablation lesions found in both the index and redo procedures, which were only one month apart, shown in dark red and purple, respectively. And we can see that for all five patients in our cohort with VT recurrence and redo procedures, the GenDirect targets largely overlap with both the index and redo ablation lesions, which means they could have been addressed at once with the help of GenDirect. This is largely because of the nature of simulation, in which we have the luxury of pacing from a number of pacing sites and testing iteratively after any modification to the cardiac tissue's EP properties. So lastly, to summarize: this study introduces GenDirect, a novel digital twin-based technology that non-invasively and pre-procedurally predicts optimal VT ablation targets in an ARVC cohort. The technology can not only effectively address the clinically manifest VTs but also anticipate those VTs emerging after ablation. Our study does have some limitations.
Firstly, we only focused on RV-dominant ARVC, so future work should expand to patients with LV or biventricular involvement. Also, GenDirect currently only models the PKP2 and gene-elusive genotypes, so broader inclusion of additional genotypes is needed as more cellular EP data become available. We also anticipate a prospective clinical study to fully validate the power of GenDirect and assess its utility in real-world clinical workflows. Lastly, I would like to express my deepest gratitude to my research advisor, Dr. Natalia Trayanova, for her invaluable mentorship. Huge thanks also to the leaders of the ARVC research team at Hopkins, Dr. Hugh Calkins and Dr. Cindy James, for their expertise and guidance, and to the other members of the ARVC research team and my collaborators. Thanks as well to the audience and judges, and thanks to HRS for inviting me to give this talk. I look forward to your questions. Thank you.

Congratulations on very interesting work in a difficult-to-study population. What this method brings is the incorporation of genetic information into your model. I was curious whether you have done modeling with and without the genetic information, to understand the incremental benefit of incorporating it.

Yes, this is an excellent question; thank you for asking it. We actually published in eLife in 2023 on the necessity of incorporating correct genetic information and EP properties into computational modeling when handling ARVC patients. So it has been shown that this is necessary and important.

Yeah, excellent work. Did the ablation strategy include epicardial ablation, or does your model predict epicardial ablation? Because we know that up to 50% of these patients require epicardial ablation.

Yes, this is also an excellent question; I have been asked it more than once. It is actually doable to model an RV as a shell without thickness, but that is not our case.
In our clinical image segmentation, we did delineate the endocardial and epicardial surfaces, so our right ventricle is 3D and we can depict 3D wave propagation. The region in between endo and epi is where the transmural wave propagation occurs. From our 3D activation map, we are able to tell where the VT originates, so when we are choosing the virtual ablation targets, we do know where to access. As was similarly done in one of our clinical trials, we reported all this information to the clinicians before the procedure so they could decide where to access based on our prediction. So we did differentiate epicardial and endocardial ablation.

May I ask a follow-up question? Do you think this technique can be applied to other VTs? ARVC is a rare disease, but ischemic VT is far more prevalent; that would increase your patient population and strengthen the clinical implications.

Yes, this is already in a clinical trial at Johns Hopkins on ablation target prediction; it's called AVIR-VT, and it's for ischemic patients. And this framework can be applied to any kind of disease for which enough genotype-specific cellular EP properties are available. For example, HCM is also a very good field for applying this platform.

Thank you for this great talk. I had a question. You have two patient populations: patients with PKP2 variants and those with gene-elusive ARVC. Did you notice any difference between those two groups in terms of the concordance of your model with the clinical ablation set?

Yes, I actually included this information in the paper but left it out here due to the time limit: the two groups perform similarly well. Even for ablation volume, both are significantly smaller than the clinical ablation volume.

Great work, thanks.
Does your model have any potential applications for prediction of VT in patients who are not undergoing ablation, I mean risk of VT, risk of sudden death, and also the characteristics of the VT, the cycle length and so on? Have you explored those endpoints through your modeling?

For risk stratification, yes, because it is the first step we do before we dive into the ablation. As I mentioned, it has been published already, and we have also used this digital twin platform for risk stratification in repaired tetralogy of Fallot, in HCM, and in ischemic patients. As for the question about cycle length, we haven't validated that part yet, but we will definitely dive deeper into it in the future.

Really nice work. My question is a little more technical. A lot of these models depend heavily on how you model the border zone or the scar zones in terms of conduction velocities. What values did you choose? The reason I ask is that, depending on the values you choose, you can really affect the outcomes and induce lots of different VTs, some of which could be far more than what you would clinically observe. So how do you think about and address that problem?

In ARVC, for all these changes to conduction velocities and conductivities, we just used the same parameters as in the experimental literature to reflect the genotype-specific alterations to the EP properties.

And did your model ever predict VTs that were not seen clinically?

Yes, of course. That's why, for the five patients with redo ablation, most of the GenDirect targets overlap with both the index and redo procedures: when we run the simulation, it definitely induces more VTs than are clinically manifest in one procedure.

Can I ask a follow-up question about that?
For the patients who had two procedures within a year, do you think, looking through those cases, that those patients would not have needed the additional procedures had they had those additional ablations, or was this part of the modeling that led to that result? What are your thoughts? Let me rephrase: does your model identify potential future VTs accurately enough that you could target them even if they are not clinical or inducible areas in an EP procedure? If I were doing an ablation, do you think the prediction is strong enough that you should target those areas even if you are not able to induce VTs from them? Yeah, that's a good question. That's why a prospective study is needed in the future. Since this is only a 30-patient study, let's say it's a very novel study, and we anticipate prospective studies of that kind to find out. Yeah, okay. Great, thank you for the excellent presentation. Thank you. We'll move on to our third and final speaker, Carmelo Shore from the University of Colorado. The title of his presentation is Direct Oral Anticoagulant Management and Outcomes Following Cardiac Implantable Electronic Device Placement. Okay, let's get you set up. Okay. All right, so yeah, I'm Carmelo Shore. I'm a second-year EP fellow at the University of Colorado. Really privileged to be a part of this competition; great talks so far. We're going to take a bit of a 30,000-foot view of clinical cardiology and electrophysiology and talk about this paper, Direct Oral Anticoagulant Management and Outcomes Following Cardiac Implantable Electronic Device Placement. A little bit of background: about 1.3 million cardiac implantable electronic devices are implanted each year.
Of these patients, about 35% require anticoagulation, which brings up the clinical conundrum we often face about perioperative anticoagulation management: balancing the risk of stroke against perioperative bleeding. This question was first addressed with the BRUISE CONTROL trial in 2013, which looked at 668 patients with a high thromboembolic risk score on warfarin for any reason and randomized them to continued warfarin versus interrupted warfarin with heparin bridging. The results of this trial were pretty clear: there were significantly higher rates of pocket hematoma in the heparin bridging arm compared to continued warfarin, and this held true across all subgroup analyses. So there has been a pretty clear and consistent Class IA guideline recommendation that patients on warfarin undergoing device placement should either be continued on their warfarin or minimally interrupted without heparin bridging. In response to the rising utilization of DOACs, the study was repeated in 2018 (BRUISE CONTROL-2), now looking at DOACs and perioperative management. Again, a similar cohort: patients with atrial fibrillation undergoing either new device placement or generator change, randomized to continued versus interrupted DOAC. And again, the results were pretty clear: no differences in clinically significant pocket hematoma, no differences in stroke rates. However, both groups resumed DOACs within 24 hours, and follow-up was generally limited to one to two weeks. This trial was designed to look at the incidence of pocket hematoma and the safety of doing the procedure on uninterrupted anticoagulation, specifically DOACs, which then translated into a Class IIa recommendation that either an uninterrupted or an interrupted DOAC strategy is reasonable in these patients.
All of this is summarized in a recent JACC state-of-the-art review, which recommends either an uninterrupted or a minimally interrupted strategy in patients with a high thromboembolic risk. So now I'd ask all the implanters in the audience to take a moment and reflect on your own practices: how do you manage your AFib patients' DOACs following CIED placement? That's the question we were hoping to answer with this study. We had two major objectives. Objective one: understand the extent to which practice guidelines are applied in the real world. Objective two: evaluate the long-term downstream consequences of post-CIED DOAC management choices. To do this, we tapped into the NCDR registry, specifically the EP Device Implant Registry, linked with Centers for Medicare and Medicaid Services (CMS) data. We looked between the years 2016 and 2019; notably, we were limited at the upper end by the availability of CMS linkage. We included all patients over the age of 18 with atrial fibrillation and a CHA2DS2-VASc score greater than or equal to two who were undergoing either initial placement of a device or a generator change. We excluded those discharged on warfarin, those who had a leadless pacemaker, those for whom we didn't know the details of their discharge history, and those who could not be linked to CMS data. Our study exposure was DOAC prescription at the time of discharge. We included all four commercially available DOACs and basically asked: were they discharged on a DOAC or not? And we looked at the following outcomes. One, temporal trends of annual DOAC prescription rates over the study period, stratified by CHA2DS2-VASc score. And two, via CMS linkage, short-term (30-day) and long-term (one-year) clinical outcomes.
We looked at pocket hematoma, major bleeding, need for blood transfusion, device infection or revision, stroke or TIA, and rehospitalization. To account for confounding variables between the two cohorts, we used propensity matching via inverse probability of treatment weighting, using the following characteristics: patient characteristics, a number of medical comorbidities, and the type of device implanted. And this is what we found. Here's the breakdown: 191,000 patients met our initial inclusion criteria. Of those, we excluded a fair number, largely because of inability to link with the CMS dataset, but we had nearly 60,000 in our final analysis. Of those, 32,000 were discharged on a DOAC and 27,000 were not. Nearly all of them had 30-day outcome data, and 48,000 had one-year long-term data available. In terms of baseline characteristics, you can see an age of around 75, predominantly male, and a pretty sick cohort, I would say, with a high CHA2DS2-VASc score (close to 4.2-4.3) and an ejection fraction around 30%. You can see the breakdown of initial generator implant versus generator change, and the time from procedure to discharge, which was not different between the two groups. In terms of AFib characteristics, I think it is notable that among those with paroxysmal AFib, a higher proportion were not discharged on a DOAC, whereas among those with persistent, long-standing persistent, or permanent AFib, a larger proportion were discharged on a DOAC, which I think is logical. This was consistent with the presenting atrial rhythm: if they presented in sinus rhythm, a higher proportion were not on a DOAC, and if they were in AFib, a higher proportion were discharged on a DOAC. We also looked at antiplatelets on discharge and the type of DOAC on discharge; the predominant DOAC used in this group was apixaban.
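The inverse-probability-of-treatment-weighting step described here can be illustrated with a small sketch. This is not the study's actual pipeline: it uses a synthetic cohort with a single confounder and the (here known) true propensity score, whereas a real analysis would estimate the propensity from the registry covariates, for example by logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: a single confounder (think "comorbidity burden")
# that influences whether a patient is discharged on a DOAC.
n = 20_000
comorbidity = rng.normal(size=n)

# True propensity: sicker patients are less likely to get a DOAC.
ps = 1.0 / (1.0 + np.exp(comorbidity))
doac = rng.binomial(1, ps)

# Stabilized IPTW weights: P(T = t) / P(T = t | X).
# In a real analysis the propensity score is estimated from the
# covariates; here we reuse the known generating model for simplicity.
p_treated = doac.mean()
weights = np.where(doac == 1, p_treated / ps, (1.0 - p_treated) / (1.0 - ps))

def wmean(x, w):
    return float(np.sum(x * w) / np.sum(w))

# Covariate balance before vs. after weighting.
raw_gap = abs(comorbidity[doac == 1].mean() - comorbidity[doac == 0].mean())
adj_gap = abs(wmean(comorbidity[doac == 1], weights[doac == 1])
              - wmean(comorbidity[doac == 0], weights[doac == 0]))
print(f"confounder gap: {raw_gap:.2f} unweighted, {adj_gap:.2f} weighted")
```

After weighting, the confounder is approximately balanced between the two arms, which is the property that allows outcome comparisons such as the 30-day and one-year rates reported here.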
This is our first figure, addressing objective one: temporal trends over the study period. The black line represents the whole cohort; you can see that less than half of patients were discharged on a DOAC in 2016, rising to over 60% by 2019. We then stratified by CHA2DS2-VASc score, from higher scores in pink down to a score of two. Interestingly, there is an inverse relationship: those with a higher CHA2DS2-VASc score were less likely to be on a DOAC, which you could perhaps rationalize by the relationship between CHA2DS2-VASc and HAS-BLED scores and the higher comorbidity burden in that population. Then, via CMS linkage, this is our stabilized IPTW data. At 30 days there were significantly higher rates of pocket hematoma in those discharged on a DOAC, although the absolute rates were low, and this did not correlate with differences in device infection or device revision at 30 days. At one year, those who were not discharged on a DOAC had higher rates of stroke. All right, so we can launch into the discussion. Three take-home findings. One, despite evidence of safety, rates of DOAC prescription at the time of discharge were low overall, at about 54%. Two, those prescribed a DOAC at discharge had a higher 30-day incidence of pocket hematoma without pocket infection or need for device revision. And three, there was an observed reduction in the long-term risk of stroke in those discharged on a DOAC after device placement. What we've uncovered here is clearly a level of equipoise. I asked you to reflect on your current practice, and I would imagine there would be a variety of answers in the audience when it comes to deciding on post-procedural anticoagulation management.
About 50% of patients in our cohort, and these are high-CHA2DS2-VASc patients followed by a cardiologist or an electrophysiologist, are on anticoagulation, compared to about 65% in the general population. This brings up the need to balance the modest 0.2% increase in short-term pocket hematoma, without long-term consequences, against the smaller, albeit clinically relevant, reduction in long-term stroke rate. I think this points to the need for personalized assessment based on patients' preferences, comorbidities, presenting atrial rhythm, and procedural considerations. Finally, it raises the idea of having a plan to ensure appropriate DOAC compliance: for example, holding the DOAC and resuming it 48 hours post-procedure versus keeping it completely uninterrupted, with the idea that there is some component of clinical inertia, with unintended downstream consequences when anticoagulation is held and not appropriately restarted following a procedure. Our study has some limitations. DOAC status was known only at the time of discharge; the no-DOAC arm could represent patients not on a DOAC, on an interrupted DOAC, or on interrupted warfarin. This will obviously underestimate our rates of DOAC prescription at discharge; however, we would expect that partial DOAC resumption in the no-DOAC arm would drive results toward the null. And of course there is selection bias and residual confounding not accounted for by propensity matching, such as frailty and clinical judgment, along with the concept that you are more likely to withhold anticoagulation in a "sicker" patient. In conclusion, DOAC use after CIED placement in AFib patients with elevated thromboembolic risk is low but rising. Implanters should weigh the marginally higher risk of pocket hematoma without clinical sequelae against the higher stroke rates when DOAC therapy is withheld at discharge.
And finally, improving compliance with the guideline recommendation for an uninterrupted or minimally interrupted DOAC strategy is an opportunity to ensure that high-risk AFib patients are appropriately anticoagulated. I want to thank the group at the University of Colorado, especially my mentor, Amneet Sandhu, everyone at the Yale NCDR group who helped us out, and Dr. Al-Khatib at Duke and Dr. Bradley at Allina Health. Thank you. That was an excellent presentation, thank you. Quick question: why did you include only patients with a CHA2DS2-VASc score of 2 or more? We know some patients with a CHA2DS2-VASc of 1 can also be on anticoagulants. The question was about patients with AFib in whom anticoagulation is indicated per guidelines, and at the time of this study, that was the guideline recommendation. A separate study would be to look at those who were on anticoagulation and compare their long-term outcomes. But one of our purposes here was to say: we have this BRUISE CONTROL data, and it's pretty convincing that it's safe to do this uninterrupted, so how good are we as a community at following these guidelines? The best way to do that within the limits of our database was to include only those in whom anticoagulation is guideline indicated. And how would you discern that anticoagulants were not held because of a sicker population? That's an excellent question, and I think that is definitely one of the limitations here. As you can see where I showed prescription rates in relation to CHA2DS2-VASc score, that inverse relationship was initially surprising, but on greater reflection it makes sense and speaks to your point: a patient with multiple comorbidities is more likely to have reasons to have the DOAC withheld around the procedure. That is something we found, and it is an observation. Great talk, thank you.
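For readers unfamiliar with the inclusion threshold discussed in this exchange, the CHA2DS2-VASc score is a simple weighted sum of risk factors. A minimal sketch using the standard point values (the function name is illustrative, not from the study):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia,
                 vascular_disease, female):
    """Standard CHA2DS2-VASc point values (maximum score: 9)."""
    score = 0
    score += 1 if chf else 0                              # C: heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age
    score += 1 if diabetes else 0                         # D: diabetes
    score += 2 if stroke_tia else 0                       # S2: stroke/TIA
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category
    return score

# A hypertensive 76-year-old man scores 3, meeting the study's
# inclusion threshold of >= 2.
example = cha2ds2_vasc(chf=False, hypertension=True, age=76,
                       diabetes=False, stroke_tia=False,
                       vascular_disease=False, female=False)
print(example, example >= 2)  # prints "3 True"
```
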
You excluded all patients on warfarin, and that's a pretty large cohort, right? You have about 30,000 patients who were on warfarin at discharge. Have you considered comparing those patients, who may be less sick than your control group, where we don't know if the DOAC was held because they were sick or for other reasons, basically comparing your warfarin group, the gold standard, to the DOAC group? That's an interesting concept, and I think that is something we could explore in the future. At least in our practice, the concept of not using heparin in the perioperative period with warfarin is pretty clear, whereas the use of DOACs around the time of procedure is much more variable between providers; that's why we eliminated those patients. But yes, we excluded them and did not include them back in the analysis, so they could serve almost as a control group for comparison. Go ahead. What was the definition of hematoma? Was it a hematoma that required evacuation? That's a good question; we used ICD-10 coding. Is there a way to show hidden slides? No. We used ICD-10 coding to look at billing for pocket hematoma, which was broad and not restricted to hematomas requiring evacuation or surgery. We also looked separately at ICD-10 coding for need for device revision as a surrogate for device infection and complications requiring more intervention. The follow-up question would be, from a clinical perspective: if you have a hematoma that does not require evacuation or revision of the pocket, and there is no high risk of infection, so what? Why would you even stop the drug? Does it have any implication? I think that's an excellent point, and it is actually one of the conclusions we drew from the study: we found higher rates of pocket hematoma, which is slightly different from what was shown in the BRUISE CONTROL-2 trial.
But weighed against the higher stroke rates at one year when the DOAC is withheld, and without clinical sequelae from the hematomas, there is still a need for balance and clinical judgment. To your point, without long-term clinical sequelae, the hematoma might be the lesser of two evils in this challenging decision. And then, in terms of stroke at one year, did you look at stroke perioperatively? One year is pretty far out, anything more than, say, three months; even if they weren't started on a DOAC post-procedure, they may have been started three or five days later. Yeah, that's an excellent point. We do have stroke rates at 30 days as our earlier surrogate, and there was no difference between the two groups; there is a numeric difference, but not a statistically significant one. The reason for looking at one-year outcomes was to ask whether there is some component of clinical inertia we are not capturing, where withholding anticoagulation at the time of discharge might have long-term consequences we cannot predict. So yes, you're correct that at 30 days there were no differences, but when you look ahead at one year, those differences became apparent. Excellent, thank you. Strong work. That concludes the session. Thank you all for coming. The basic science part will be at 4 o'clock.
Video Summary
The Young Investigator Award Competition featured presentations on groundbreaking research in electrophysiology (EP), primarily focusing on innovative techniques and technologies. Ravi Ranjan chaired the session with panelists inquisitively engaging with the finalists. Burman Zaidabadi from Imperial College London presented on an AI-enhanced ECG platform for cardiac screening and risk prediction. He discussed the successful application of AI models that use ECG images instead of signals, which can significantly enhance accessibility in facilities lacking digital infrastructure. The second presentation by Yingnan Zhang from Johns Hopkins University discussed digital twins in arrhythmia ablation for ARVC, incorporating genotype information into heart models to predict optimal ablation targets effectively. This pioneering method promises more precise and personalized treatment strategies. Lastly, Carmelo Shore from the University of Colorado explored DOAC management post-cardiac device implantation. His study found rising yet suboptimal DOAC prescription rates in clinical practice, highlighting higher stroke incidence when anticoagulation is withheld. Overall, the session showcased the potential for AI and personalized modeling to transform cardiac care and emphasized the necessity of adhering to evolving guidelines in perioperative anticoagulation management.
Keywords
electrophysiology
AI-enhanced ECG
digital twins
arrhythmia ablation
cardiac screening
personalized treatment
DOAC management
cardiac care
anticoagulation