AI-enabling Diagnostic Tools
Video Transcription
Welcome to this GE Healthcare-sponsored session entitled AI-Enabling Diagnostic Tools to Support Your Clinical Decision Making. My name is Fu Siong Ng. I'm an electrophysiologist from Imperial College, and I'm joined by an expert panel today. The first talk is from my colleague, Dr. Dawood Darbar, who is Chief of Cardiology at UIC. Over to Dawood, please. Just press it forward right there. And there is a pointer, if you want, over there. That's it, yeah.

Hi, everybody. Thank you for that kind introduction. First, I'd like to thank the organizers for giving me the opportunity to talk about some of our work on AI-enabled ECG screening for atrial fibrillation in diverse populations. This slide summarizes the areas I hope to cover in the next 12 minutes or so. I'll first talk about the AI-enabled ECG and why it's a powerful tool for pattern recognition and early detection of cardiovascular disease. Then I'll discuss why convolutional neural network-based ECG models have been able to detect silent atrial fibrillation reliably from a standard ECG. And finally, I'll talk about our work on the AI-enabled ECG and how we can detect the atrial myopathy that's associated with titin mutations, enabling early prediction of a genetic form of atrial fibrillation.

So why is the ECG an ideal substrate for deep learning AI applications? There are several reasons. First, it's a widely accessible, standardized, and digitally portable data source. Second, it supports fully automated interpretation through AI-enabled algorithms. Third, it enables detection of subtle, hidden ECG patterns that are beyond human recognition. And finally, CNNs have revealed latent disease signatures: particularly silent atrial fibrillation, but also asymptomatic LV dysfunction, hypertrophic cardiomyopathy, amyloidosis, and, very recently, aortic stenosis as well.

This is a busy slide, and what I've tried to do here is summarize the major contemporary ECG databases that have been used to develop AI-enabled ECG applications. I'm not going to go through them all, just highlight a couple of important ones. The first relates to the Stanford work. Initially, they used a one-lead ambulatory ECG to classify 12 different rhythm types. They've subsequently gone on to use the 12-lead ECG, as well as some mobile ECG technologies, for disease prediction. And of course, we have the UK Biobank. A particular strength of the UK Biobank is that not only is the ECG data associated with clinical outcomes, it's also linked to genomic data. And then a lot of the pioneering work on the AI-enabled ECG has been done by the Mayo Clinic. They started by developing AI-based automatic ECG interpretation, and they've subsequently gone on to look at asymptomatic LV dysfunction, silent AF, and hypertrophic cardiomyopathy, but also age, sex, and race and ethnicity.

Now, how exactly do CNN-based ECG models detect silent atrial fibrillation from the standard ECG? It's important to understand first that deep learning is a subset of machine learning using multilayered neural networks. This is just one example, taken from the Mayo Clinic. On the right-hand side, the analog ECG recording is first converted into a digital recording, which generates a list of numerical values, in this case corresponding to the amplitude of the signal.
These numerical values are then convolved into network-based weights within each lead, but also across the leads, finally giving rise to a final model, which then allows you to predict the particular disease you're interested in. It's important to also realize that the Mayo group clearly showed that a snapshot of a 12-lead ECG in sinus rhythm can actually act as a surrogate for long-term monitoring for atrial fibrillation. They showed in their study that an ECG can predict development of atrial fibrillation in the next month or so.

The strengths of this approach are that it detects complex patterns without human-defined features, it learns directly from the raw input data, it's agnostic, and it develops nonlinear models that are trained on large data sets. And herein lies one of the limitations of a CNN-based approach: it results in models that are often unexplainable to the human eye, functioning very much as a black box. This has raised concerns about the clinical application of CNN-based approaches for clinical conditions.

Now, again, this is a busy table. What I've tried to do is summarize the studies that have assessed AI-enabled ECGs for the detection of silent AF. What do I mean by silent AF? I mean asymptomatic, undetected AF, essentially. The first study that used AI-enabled detection used a smartwatch-enabled PPG. While this was the largest evaluation of AF screening, the yield was only 0.52% in over 420,000 subjects. Of these, a third actually had documented AF on ECG patches. So what this study suggested is that mass screening, or general screening of the population, is not feasible. Subsequently, the Mayo Clinic group applied an AI-enabled sinus-rhythm ECG model to over 125,000 patients, and they showed an area under the curve of 0.87 with good sensitivity and specificity. What they also found, as I mentioned before, is that you can actually use the 12-lead AI-enabled ECG as a long-term monitoring ECG, essentially, to detect individuals at increased risk. Noseworthy, also from the Mayo Clinic, then used an AI-enabled ECG to detect silent AF in individuals at high risk of stroke. While they didn't report the sensitivity and specificity, what they did show was that AF was detected in 7.6% of those individuals categorized at high risk of developing AF, as compared to 1.6% in the lower-risk group. They're now doing implementation studies: knowing that a patient is at high risk of developing AF, how does that impact the management of their stroke risk? And then the final study I want to highlight is a meta-analysis of 31 studies, which showed very good sensitivity and specificity for both PPG and one-lead ECGs. What this study concluded is that AI-enabled detection of silent AF has good sensitivity and specificity. The next stage is really implementing this in clinical practice in order to identify individuals at high risk.

Now, coming on to our study, which looked at how an AI-enabled ECG can detect the atrial myopathy associated with titin mutations, thereby enabling early detection of a genetic form of atrial fibrillation. Over the last two decades, there have been tremendous advances in understanding the genetic basis of atrial fibrillation. Genome-wide association studies have identified over 140 AF loci.
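To make the pipeline just described concrete, here is a minimal, illustrative sketch of a 1D convolutional network over a digitized 12-lead ECG. The architecture, layer sizes, and sampling assumptions are ours for illustration only; this is not the published Mayo model.

```python
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    """Toy 1D CNN over a digitized 12-lead ECG (batch, leads, samples)."""

    def __init__(self, n_leads: int = 12):
        super().__init__()
        # Temporal filters slide along each lead's samples; the channel
        # dimension lets later filters mix information across leads.
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, 1)  # one logit: risk of silent AF

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.classifier(z))

model = ECGConvNet()
fake_ecg = torch.randn(2, 12, 5000)   # two 10-second ECGs at 500 Hz
print(model(fake_ecg).shape)          # torch.Size([2, 1]) probabilities
```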
And then, very recently, whole-exome and whole-genome studies have identified rare genetic variants in sarcomeric proteins, like titin; nuclear proteins, like lamin; junctional and structural proteins, like the desmosomal proteins; and then, obviously, ion channels. The first documentation of genetic AF was a mutation in a cardiac potassium channel. This is the most recent meta-analysis of gene-based testing of rare and ultra-rare coding variants in over 50,000 individuals with AF and 275,000 controls. What I want you to really focus on is that the strongest gene that's been linked with atrial fibrillation is actually titin; look at the p-value, it's 10 to the minus 45. The other genes, like lamin, have exponents in the teens or single digits. So really, the strongest link with early-onset atrial fibrillation is mutations, or rare variants, in titin.

Before I get to our study: increasingly, there's interest in AF not just as a secondary phenomenon, secondary to risk factors or hemodynamic strain, but in atrial myopathy as a primary disorder of the atrium. And there's increasing genetic evidence to support the idea that some forms of atrial myopathy are genetic in origin. The first such study identified mutations in myosin light chain 4 causing an atrial-specific myopathy and early-onset AF, and what was really interesting about that study was that those individuals did not have any ventricular cardiomyopathy. Mutations in lamin can also drive atrial remodeling and early-onset AF, independent of ventricular disease. And then, just last year, one of my MD-PhD students deleted nine amino acids in the A-band of titin in both zebrafish and in iPSCs. What you'll notice in panel H is that the zebrafish in which we knocked out those nine amino acids had massive atrial enlargement and an atrial myopathy, but ventricular size was completely preserved. Again, this was more evidence that some forms of atrial myopathy that give rise to atrial fibrillation are primary genetic defects of the atrium. What you also notice in panel H is myocardial disarray in the atrial myocardium.

So, based on this, we wanted to do a study examining whether clinical and ECG features of titin mutations can predict titin-associated atrial myopathy in our diverse patient population. The rationale, as I've just explained, is that loss-of-function variants are linked to atrial myopathy and early-onset AF, and understanding titin's role may not only identify high-risk patients but also guide precision medicine. This is the hypothesis; I won't read it for the sake of time. Our cohort consisted of 579 individuals with paroxysmal AF who underwent whole-exome screening. We identified 17 likely pathogenic or pathogenic variants, most of them in titin. We then collected serial ECGs during sinus rhythm, segmented them into individual beats, and trained a CNN model, as shown on the right-hand side. Again, for the sake of time: we achieved a balanced accuracy of 0.85 to 0.95. These preliminary findings suggest that there are ECG features of titin-related atrial myopathy that extend beyond loss of function. The next stage of the study is to validate it in the UK Biobank.
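As an aside, the preprocessing and metric just described can be sketched in a few lines. This is a hedged illustration only: the R-peak detection settings, window width, and toy labels below are our assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.metrics import balanced_accuracy_score

def segment_beats(signal: np.ndarray, fs: int = 500, half_window: float = 0.3):
    """Cut fixed-width windows (+/- 300 ms) around detected R-peaks."""
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs),
                          height=np.percentile(signal, 95))
    w = int(half_window * fs)
    return np.array([signal[p - w:p + w]
                     for p in peaks if p - w >= 0 and p + w <= len(signal)])

# Balanced accuracy averages recall over classes, so a rare variant-carrier
# class cannot be masked by the majority class.
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1])
print(balanced_accuracy_score(y_true, y_pred))  # 0.875
```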
And obviously, we want to assess the clinical utility of a non-invasive CNN-based tool for screening for genetic risk of AF. Ultimately, we envision combining not just the AI-enabled ECG, but also clinical risk factors, imaging, and biomarkers, in order to develop an AI-enabled ECG AF risk model.

So, in conclusion, the AI-enabled ECG is a powerful tool for the detection of cardiovascular disease in at-risk populations. CNNs have been developed to detect silent AF, asymptomatic LV dysfunction, cardiomyopathy, amyloidosis, and aortic stenosis. These approaches might apply not only to 12-lead or single-lead ECGs, but also to multi-lead mobile and wearable ECG technologies. And finally, the utility of an AI-enabled ECG AF risk model, which includes clinical risk factors, imaging data, and circulating biomarkers, requires validation in large, diverse populations to ensure accurate risk assessment. None of this work would be possible without all the research fellows in my lab, the ones in red, whose data I presented today, my collaborators at the University of Illinois, and the funding agencies. I'd be delighted to take any questions.

Thank you very much, Dawood. We've got about a minute for questions, so if you have any, please come up to the microphone and ask Dr. Darbar your question. If I could start: really nice work with the AI model for the titin myopathy. How are you planning to use that in your clinic? How will it actually change how you practice, if you could apply that model?

I think the most important aspect is identifying an individual who carries a titin likely-pathogenic or pathogenic variant. What we'll do, and we've already started to do this, is monitor them much more closely. We will provide them with Holter monitors and event recorders, so at the first sign of any symptoms at all, we do intensive monitoring. And then, obviously, the most important aspect is their risk of stroke as they go on. Those are studies that will need to be done. But in terms of how we impact a patient, it's going to have a tremendous impact to be able to identify, at a young age, these individuals at risk of developing a titin-associated myopathy, and then subsequently stroke and the complications of AF.

One of the problems with these models is that the PPV, the positive predictive value, is never that high, so you might have a lot of false positives that you're following up. I wonder how you would handle that.

Yeah, you're exactly right; that is one of the limitations of this model. But that's why I suggested having a comprehensive model that doesn't just include genetics: it includes clinical risk factors as well as imaging data, particularly atrial strain and left atrial size. We've also done some recent work on biomarkers that can be integrated into this comprehensive model.
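To make the "comprehensive model" idea concrete, here is a minimal sketch of fusing a hypothetical AI-ECG score with clinical, imaging, and biomarker features in a simple logistic model. Every feature name and all the data below are invented placeholders, not the speaker's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),    # hypothetical AI-ECG risk score (CNN output)
    rng.normal(60, 12, n),   # age, years (clinical)
    rng.normal(38, 8, n),    # left atrial diameter, mm (imaging)
    rng.normal(25, 10, n),   # circulating biomarker, arbitrary units
])
# Invented outcome so the example runs end to end.
y = (X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.5, n) > 1.8).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # combined AF risk for 3 patients
```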
Great, thank you very much. I think we'll move on in terms of time. Next up, we have Dr. Stavros Mountantonakis from Lenox Hill Hospital, and his talk is entitled Mapping in Atrial Fibrillation: If Only Signals Could Talk. I'll set your slide set up.

Thank you very much. That was a really insightful talk, getting us from the EKG we get at the office to the basic lab. We're going to change gears completely, and I'm going to take you to the EP lab and inside the left atrium, where those of us who do ablation of atrial fibrillation spend a great part of our time. There are a lot of signals and a lot of information, and that's why I named this If Only Signals Could Talk, or If Only We Could Listen to Them.

The previous speaker discussed the importance of getting just a 12-lead EKG to predict atrial fibrillation, or the presence of atrial fibrillation. These are the studies that were mentioned before. I want you to see the numbers: the number of EKGs required to build this prediction model, and also to validate it. We're talking about half a million EKGs. This is the sinus-rhythm EKG predicting atrial fibrillation, and this is the paper showing the sinus-rhythm EKG predicting new-onset atrial fibrillation in the future. So the name of the game is data: a lot of data, a lot of EKGs, to be able to extract this information.

The basic concept since then: if the simple 12-lead EKG at the office can predict the presence of atrial fibrillation, how about inputting into these predictive algorithms data from the electrograms we see, plus the patient history, to predict the recurrence of atrial fibrillation? Unfortunately, we're at an early stage of actually being able to do this. This is the first study, published in Circ EP in 2022, where the investigators tried to incorporate EGMs into a predictive model. You can see the area under the curve; the area under the curve is the predictive value of the model, one being perfect, 0.5 being pretty much not working, and it was no better than the EKG alone. So despite inputting important information and more data, we were not able to increase the predictive value of the algorithm. Having said that, the number of patients used was only 156. From half a million EKGs, we go to only a couple of hundred patients, and as we said before, AI algorithms need a lot of data input.
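For readers less familiar with the metric quoted here, the AUC is simply the area under the ROC curve. A toy computation, with invented labels and scores, is below; the numbers bear no relation to the studies discussed.

```python
from sklearn.metrics import roc_auc_score

recurrence  = [1, 0, 1, 1, 0, 0, 1, 0]                  # AF recurred post-ablation?
model_score = [0.9, 0.2, 0.7, 0.6, 0.65, 0.1, 0.8, 0.3]  # model's predicted risk
print(roc_auc_score(recurrence, model_score))            # 0.9375; 1.0 = perfect, 0.5 = chance
```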
Then there were other iterations trying to use the madness of the electrograms we see in atrial fibrillation to predict rotational activity, or to transform the signals into phase maps to predict rotational drivers in atrial fibrillation as targets for ablation. Neither of these efforts proved effective in a clinical setting; subsequent studies did not show superiority over what we typically do. And then there's the concept of spatiotemporal dispersion: the idea that, in the madness of atrial fibrillation, with multiple signals happening chaotically at the same time, there's a part of the heart that drives the rest of the activation. The investigators from Marseille described the definition of spatiotemporal dispersion, where a central part of the left atrium drives the activation of the rest, and they came up with a tool, an algorithm, to detect those areas of spatiotemporal dispersion. That led to, I think, the only randomized clinical trial we have to date that showed superiority: superiority of identifying patterns of activation in atrial fibrillation over what we usually do. The only thing I would argue is that this is pattern recognition, and not the classical definition of an AI algorithm that is live and feeds on its own.

And here's an everyday case we have with atrial fibrillation in the lab. We see signals all over the place. It's difficult to organize our eye to interpret what the signals might want to tell us. So, paying a little more attention in the lab: this, again, is the left atrium, with a multipolar catheter on the posterior wall. Try to reorganize the signals. Oftentimes, and with Volta we're able to see it more, we identify a core of rapid activation surrounded by areas where the signals appear to be less frequent and of higher amplitude. But of course, you could argue that the low-amplitude continuous signals I've marked here are pretty tiny. So the main question is whether the signals are within the resolution of our ability to separate them from the noise, or whether this is just noise we're seeing. And I think that's the big game changer: the signals in atrial fibrillation are typically low amplitude, and they occur at frequencies that very much resemble the noise we see in the EP lab. That's why it's important to have a system able to subtract the noise, which differs from patient to patient, from lab to lab, and, as a matter of fact, from case to case, so we can see the fractionation that's really important and feed this information into a predictive algorithm that can identify the patterns that matter in atrial fibrillation. This is one of the systems going in this direction, where an AI algorithm is built to identify baseline noise and subtract it from the active electrograms; this noise is beyond our own ability to identify. I think this is an effort that needs to continue, and it's the bare minimum if we want to input the signals in atrial fibrillation into predictive models of ablation success.

Just to give you an idea: this is actually not six seconds, this is 60 seconds. This is the digitized amplitude of a signal in sinus rhythm on the left, recorded over one minute. Every single spike is the amplitude of a signal seen by one single bipole, and you can see, even in sinus rhythm, the number of signals we collect from one single bipole. The same bipole, recording in atrial fibrillation, gives us something like this. So you can now understand the bulk and quantity of data that needs to be digitized, stored, and analyzed. Most of the signals are low amplitude, so you want to make sure you remove the noise from what has been recorded. And imagine: this is the quantity of data from a single bipole. Now imagine this in atrial fibrillation, with multipolar catheters scanning through the whole left atrium. Again, the point is that the EKG, the P-QRS-T wave in digitized form, can predict the presence or history of atrial fibrillation; the point here is to grasp the idea of leveraging the huge amount of high-quality data we need to acquire in order to find a method in the madness of atrial fibrillation. The recording system you have from GE can do this. This is just a rhythm in sinus rhythm that, in the background, as we're using the system, is all being digitized. You can see all those zeros and ones being simultaneously recorded, and how different it would be if we were incorporating noise.
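A minimal sketch of the noise-floor idea just described: estimate each bipole's baseline noise robustly from a quiescent segment and keep only deflections that clear it. The threshold multiplier and the use of the median absolute deviation are illustrative assumptions, not any vendor's algorithm.

```python
import numpy as np

def above_noise_floor(egm: np.ndarray, baseline: np.ndarray, k: float = 4.0):
    """Boolean mask of samples exceeding k times a robust noise estimate."""
    # The median absolute deviation resists occasional large deflections
    # in the baseline recording.
    mad = np.median(np.abs(baseline - np.median(baseline)))
    noise_sigma = 1.4826 * mad  # MAD -> standard deviation for Gaussian noise
    return np.abs(egm) > k * noise_sigma

rng = np.random.default_rng(1)
baseline = rng.normal(0, 0.01, 2000)   # quiescent segment: 0.01 mV noise floor
egm = rng.normal(0, 0.01, 2000)
egm[500:510] += 0.12                   # a genuine low-amplitude deflection
print(above_noise_floor(egm, baseline).sum(), "samples above the noise floor")
```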
So, to sum up, this is the status quo of what's happening right now. We're at a very early, I would say primitive, stage of being able to leverage the quantity of data we record in atrial fibrillation. But I would argue that mapping in atrial fibrillation is truly the next frontier in ablative treatment of persistent atrial fibrillation. Applying AI methods is appropriate here; I could not think of a field with higher applicability, given the huge quantity of data. The result of any analysis depends on the quality of the input data, and therefore recording high-quality signals, and excluding noise from them, is very, very important. No system currently allows live, continuous analysis of signals in atrial fibrillation, so there is a big gap in technology at this point. Most of us are using systems that can record and store, but we're not analyzing, and there's no way to analyze, the data we currently have. Thank you very much.

Great, thank you very much for an excellent talk, Stavros. We've got a few minutes. Any questions for Dr. Mountantonakis? If not, I guess I'll ask one. I think you were a co-author on that Nature Medicine paper, I noticed. Are you using that system now in your routine practice, and how do you use it on a day-to-day basis in treating AFib?

The truth is that now, with the technology changing to PFA, the use of the system is less, but it's still there for the difficult cases where there's really no protocol or algorithm to direct the operator what to do. At this conference, we also presented the results of the repeat trial: in patients who had already had ablation for atrial fibrillation, where the mainstay is pulmonary vein isolation, and who came in with pulmonary veins still isolated, this particular technology significantly increased the long-term efficacy of the second procedure. So I believe in the technology. There are some technical issues with the day-to-day application of this technology in our lab, but the concept is there.

What's the ceiling? How good can we get with AFib ablation with AI? We know we're stuck at around 50% for persistent AF with conventional approaches. Clearly we're not going to get to 100%, because there's some AF that I don't think will be fixed with ablation. How high can we get with AI?

Yeah, a very difficult question. Atrial fibrillation is a very heterogeneous disease, as we all agree, so a tailored approach, based on each patient, is very, very important; that is why the name of the study was pretty ingenious, a tailored atrial fibrillation ablation. And I would argue there's a lot of potential for tailored atrial fibrillation ablation. What has been done with identifying myopathy on a cellular level: I could argue that the expression of that myopathy, of cellular dysfunction, is the signals we record, maybe even more sensitively than MRI. So I think this is an opportunity we're not leveraging enough: the expression of cell function through the signals being generated.

Great, thank you. I think in terms of time, we'll move on. Thank you very much. Next up is actually myself. And then I think you're next. So, if I could have the pointer. I think we're going to go back towards the ECG. Dawood started off with some ECG; we're going to come back to that a little bit.
I'm going to talk a bit about some of the work of the group, many of whom are in the room here, on using AI-ECG for diagnosis and risk prediction. We all know what an ECG looks like. We think we're all experts; we can all read an ECG in 10 seconds and give you a diagnosis. And actually, we've got pretty good at it over 100 years of looking at these things, but we're still nowhere near as good as we should be. One of the reasons is that we have a system for reading ECGs that we've learned over 100 years: we look at the P wave for atrial depolarization, the QRS for ventricular depolarization, then the T wave. And we teach our fellows that. We say, look at this, have a system, and then you'll have a diagnosis. That's been useful to some extent, but I think it's stopped us looking beyond what the ECG can offer, because we're so trapped by this conventional system. What AI does is free us of the shackles of our framework: it looks at the ECG in lots of different combinations that we would not have thought of, because they don't make physiological sense. It doesn't care about physiology; it looks at all of this in an agnostic way. When we talk about AI-ECG, we really refer mostly to deep learning, rather than some of the more conventional machine learning shown there.

So what is the difference between traditional programming and deep learning? It comes down to who defines the rules. In traditional programming, the human sets the rules. Based on what we know about physiology, we define the rules. We say to the program: look at the QRS, because we think it's important; find out when the QRS is longer than 120 milliseconds, and we'll call that abnormal. That's not bad, but it depends on the human understanding what the rules are. Machine learning doesn't depend on the human. In machine learning, the machine learns the rules. It doesn't care what the human knows, or doesn't know, or has learned over the last 100 years. It takes all the answers, the ECGs, feeds them into the model, and it learns new rules. So it has the benefit of coming up with new ways of associating the data with the answers, beyond what the human has understood about the ECG in the last 100 years.

We heard from Dawood about CNNs. How do they really work? A CNN is a method that's quite common in computer vision, and here's how it works at its core. If you want to train a model to differentiate a circle from a cross, you give the model 100 circles and tell it these are circles, give it 100 crosses and say these are crosses, and that's really all you need to do. You can tweak a few things, but it won't make that much difference, because what the model does on its own, without any human input, is break the image down into different components. In this case, it will break it down into horizontal, vertical, and diagonal lines, and it will learn, without you teaching it anything, that circles contain all these components but crosses contain only diagonal lines. After a while, the model will know how to differentiate a cross from a circle. And that's really how it works.
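The rule-versus-learned contrast just described can be sketched in a few lines. The 120 ms threshold comes from the talk; the tiny training set and the choice of a one-split decision tree are purely illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: the human writes the rule.
def rule_based(qrs_ms: float) -> str:
    return "abnormal" if qrs_ms > 120 else "normal"

# Machine learning: the machine infers its own rule from labeled examples.
X = [[95], [100], [110], [130], [145], [160]]   # QRS duration, ms
y = ["normal", "normal", "normal", "abnormal", "abnormal", "abnormal"]
learned = DecisionTreeClassifier(max_depth=1).fit(X, y)

print(rule_based(125))            # "abnormal", because we said > 120 ms
print(learned.predict([[125]]))   # the stump found a threshold on its own
```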
That's a very simple task. Beyond that, you can show it left and right bundle branch block; again, you don't have to tell it anything. Over time, it will learn that left bundle branch block looks a bit negative in V1, and right bundle branch block looks a bit positive in V1, again without any human input. You can make it more complicated: you can ask it to learn normal from abnormal LVEF. All you do is label the data, and the machine learns the difference. More complex still, you might even learn to predict future disease, things that are not currently on the ECG. But it all comes under the same concept: you give it a lot of ECGs with labels, you let it do its own thing, and it comes up with the rules that associate the ECG with the labels. That's how all CNNs work.

So I'm going to show you some of the work done in our group, led by Dr. Arunashis Sau, who's in the audience here and who recently published a model that uses the ECG to predict time to mortality. So not just whether someone's at high risk of dying: what Arun did was train a model to predict when someone might die. He took a million ECGs from Boston, and as I showed you before, the label is the time of death; every ECG has a time of death associated with it. The model learns to recognize the features of those who are about to die versus those who may not die for many, many years, and it outputs an individualized survival curve that predicts your likelihood of dying or surviving over the next 10 years.

Here are a couple of examples. These are two patients who died, and you can see here, where the curve crosses the 50% line, when they become more likely to be dead than alive. The predicted time of death is actually not that far off from the actual time of death. So with millions of ECGs, the model can learn the very subtle features that tell you whether someone's at risk of dying soon or not at risk of dying for many, many years. Here are two further examples, of patients who didn't die during the 10-year follow-up, and you can see the predictions are very flat; the model says this person has a 90% chance of being alive even at 10 years. So the model gets it right for those at low risk as well as those at high risk.

Obviously, we got quite a bit of press interest when we published this, including some slightly negative press. This is a tabloid in the UK, which said UK hospitals will now use a death calculator and will tell all patients when they are going to die: would you actually want to know? But of course, that's not why Arun developed this. This is really about using it to track changes over time, so we can monitor patients with ECGs to track their health. These are two individuals with hundreds of ECGs over 15 years. You can see that 15 years ago the prediction was very good, nearly a 100% chance of survival. Over time, that drops. And you can even see that when they're admitted to hospital, it drops and then recovers. So you can track quite dynamic changes over time, which could allow the physician to change how they manage their patients based on these AI-ECG outputs. If you take the half a million patients in the test set and break them up into risk quartiles, you can see that if the prediction falls in the highest-risk quartile, you have an eightfold chance of dying over the next 10 to 20 years. So it's very good at stratifying the low-risk from the very high-risk individuals over time.
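One common way an individualized survival curve can be produced is a discrete-time survival head: the model outputs a hazard for each year, and the survival curve is the running product of one minus the hazards. The sketch below assumes that formulation and invents the logits; it is not the published model, just the mechanics.

```python
import torch

def survival_curve(hazard_logits: torch.Tensor) -> torch.Tensor:
    """(batch, 10) yearly hazard logits -> (batch, 10) survival S(t)."""
    hazards = torch.sigmoid(hazard_logits)
    return torch.cumprod(1.0 - hazards, dim=1)  # P(alive at end of year t)

# Invented logits standing in for a network's output for one patient.
logits = torch.tensor([[-3.0, -3.0, -2.5, -2.5, -2.0,
                        -2.0, -1.5, -1.5, -1.0, -1.0]])
S = survival_curve(logits)
print(S.round(decimals=2))
print("stays above 50% for ~", int((S > 0.5).sum()), "years")  # ~6 here
```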
Even in patients whose ECGs we have called normal, where cardiologists have signed them off as normal, the model is still picking up subtle signatures that we are not aware of, and still identifying the higher-risk individuals in this group. One thing we did was test the model in lots of different populations. If you train a model, it might learn something specific to your training data set. So we externally validated it in patients from Brazil, in patients with cardiomyopathy, in primary care, and in two volunteer cohorts, and we found that the model works very well in all these cohorts: it's still picking up the highest-risk group in every single case. And interestingly, these models work very well across the sexes and all ethnicities. We were concerned there might be underperformance in some ethnic groups, but the models perform very consistently across the major ethnic groups.

We already use risk calculators, so how does this compare to conventional risk calculators? If you compare it against age, sex, risk factors, and the ECG, the model seems to beat all of those combined. So it's doing something more than what we can do, and in fact, if you combine them all, that's the best, with a C-index of about 0.8.

So the next question is, how can we apply this? When is it useful to know when someone might die? Would you use it in certain conditions to treat, or change how you treat, the patient? One condition is primary hypertension: knowing someone's risk of death allows you to change the way you manage them and be more aggressive with disease treatment. In this cohort, you can see the AI-ECG beats all the conventional ways of predicting risk of death. Another condition is aortic stenosis, where knowing someone's risk of death might change your threshold for TAVR or valve replacement. Again, compared even to conventional echo parameters, there is apparently information in the ECG that lets you predict mortality risk in this group with severe aortic stenosis.

We talked about the black box earlier; I mentioned the black box. It's crucial to look under the hood, as they call it, to understand what the model's doing. You can run explainability analyses and work out that our model is actually looking at the QRS here. The red traces are the highest risk; you can see the broad QRS complexes are worse. The narrow, upright blue traces are low risk. Odd T waves are bad; odd ST segments are bad. Again, that's what we understand, that's what we know, so it's comforting that the model is doing things we understand.

What Arun has done subsequently is extend beyond mortality. We took the mortality model and tuned it to a number of different outcomes. So now we can not only predict future mortality, but also future heart failure, arrhythmia, complete heart block, AF, and even non-cardiac conditions like diabetes and CKD. And these are the C-indices. They're not perfect, but they're certainly a lot better than conventional risk scores like CHA2DS2-VASc, where it's only 0.65. So we're looking at much better C-indices than the risk calculators we already use in clinical practice.
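The C-index quoted here is Harrell's concordance: the probability that, of two comparable patients, the model assigns the higher risk to the one who dies earlier. A toy computation, assuming the lifelines package is available and using invented follow-up data:

```python
from lifelines.utils import concordance_index

follow_up_years = [2.0, 5.0, 9.5, 10.0, 10.0]
died            = [1, 1, 1, 0, 0]        # 0 = censored at end of follow-up
risk_score      = [0.9, 0.7, 0.5, 0.2, 0.3]
# lifelines expects predicted survival times (higher = longer survival),
# so a risk score is passed in negated.
print(concordance_index(follow_up_years, [-r for r in risk_score], died))  # 1.0
```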
Another thing I want to mention quickly is image models. Bormann was in the Young Investigator competition yesterday, and we await the results this evening. He presented work extending this to images. One of the problems is that not everyone has a GE MUSE system with digital ECG signals; in the UK, we work a lot with images. What Bormann has done is extend that work to taking not just a digital signal, but a picture. Can you just take a picture of an ECG, do the same as Arun did, and make predictions about risk of death and risk of future disease? And we know ECGs don't look perfect. They look a bit like this most of the time: partially scanned, crumpled, folded paper, certainly more challenging for a model to handle than a nice digital signal. So what Bormann did was take a 2D CNN and feed it lots of different types of images, to try to predict the same range of outcomes as in Arun's paper.

The image model actually works pretty well if you have a good enough quality ECG. This is a classification task to pick up those with low EF, and you can see that the AUC is respectable. If you want to pick up valvular heart disease, and here are the different valvular heart diseases, again there are very respectable AUROC curves. And here are some other labels Bormann has tried to predict, including not only mortality but a range of other outcomes. You can see that with PDF images, which are nice, clean images, the results are the best; with photographed images, there's a bit of a range in performance. We did some explainability saliency mapping to see what the model's looking at, and reassuringly, the mortality models look more at the QRS than at the whole ECG, and the AF models look a bit more at the P waves: again, things that we as clinicians would look at.
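A minimal sketch of gradient saliency, the simplest form of the kind of explainability mapping described here: the absolute gradient of the prediction with respect to each input sample. The tiny untrained network is a stand-in assumption, not the group's actual pipeline.

```python
import torch
import torch.nn as nn

def saliency(model: nn.Module, ecg: torch.Tensor) -> torch.Tensor:
    """|d(prediction)/d(input)| for every lead and sample."""
    ecg = ecg.clone().requires_grad_(True)
    model(ecg).sum().backward()
    return ecg.grad.abs()

model = nn.Sequential(                 # tiny stand-in for a trained network
    nn.Conv1d(12, 8, kernel_size=7, padding=3),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
ecg = torch.randn(1, 12, 5000)
sal = saliency(model, ecg)                 # shape (1, 12, 5000)
print(int(sal.sum(dim=(0, 2)).argmax()))   # lead the model leans on most
```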
So how would we use this? I think the aspiration is to have this in our hospital systems, where everyone who comes in with an ECG has these models run, predictions are output for each patient, and this perhaps guides the clinicians in how to manage the patient. Here's an example. Again, this is aspirational; it's a beta version of an AI-ECG platform. A patient comes in, you put the image in, it analyzes the ECG, thinks about it for a few seconds, and gives you the prediction: this patient is at high or low risk of currently having these diseases, and here's the five-year risk of ASCVD, atrial fibrillation, and complete heart block. Then it's down to the clinician to work out what to do with that information. It also shows you a saliency map, so you can see what the model is looking at, which gives you some confidence in what the model is doing. We've learned that clinicians need confidence in a model in order to use it.

I always finish off any AI-ECG talk with this slide. It's an editorial accompanying an AI-ECG paper from a few years ago, and it talks about trusting magic. AI-ECG does feel very much like magic at the moment. How can I predict that someone will die in the next two years from an ECG? But the models can do that. The challenge for clinicians is whether we want to embrace what feels a bit like magic in our practice, and what we need in order to have it in regular practice.

So I've got a couple of minutes, and we can invite questions from the audience. I am the moderator, so I can't ask myself questions, but some of the panel might want to ask me a question or two before we move on. Oh, there's a question. There's a microphone here if you want to come up to it, so we can actually hear you.

So, practically, how have you implemented this in the clinic? What we're doing now is testing the models in a series of clinical studies. To implement them, and roll them out in a wide way, like the Volta system, you really need regulatory approvals, and we don't yet have that set of approvals. So much of this is still early testing within the hospital system under research studies.

Thank you so much; I think it was a great presentation. I wanted to ask about some minor things that can happen with lead connections. For the same patient, you can have a nurse who connects the leads in different ways. How is that going to translate to the AI model?

Interestingly, one of the things Bormann did was train the model sometimes with shuffled leads, just to confuse it a little. But if, on a single testing ECG, the leads are entirely reversed, the model will probably not make the right diagnosis. What we're doing now is understanding how variable these things are. Lead positions are not always perfect: from day to day, V2 might not be exactly where V2 should be, and a non-expert might place it near V3. We're finding there's some variability, but potentially not huge. If you mess up the leads entirely, though, the model will probably not cope, because it won't have seen much of that in its training.

Yeah, so what I've shown you involves millions of ECGs for training. There are now moves toward foundation models: you train a model that knows about ECG patterns in general, so it's learned all the subtle features, and then, if you want to transfer it to a much smaller training data set, like a long-QT cohort of a few hundred, it might be able to work. But you're absolutely right, these are data-hungry models. There are now approaches people are using to handle much smaller niche data sets, like long QT or ARVC, where you are never going to get a million ECGs; those approaches can work if you have a bigger foundation model.
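A sketch of that transfer-learning idea: freeze a pretrained "foundation" backbone and fit only a small head on a few hundred labeled ECGs. The backbone below is an untrained stand-in and the data are random; this shows only the mechanics, under those assumptions.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(              # pretend this was pretrained at scale
    nn.Conv1d(12, 64, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False            # keep the general-purpose ECG features

head = nn.Linear(64, 1)                # only this layer trains on the rare label
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(300, 12, 1000)         # ~300 labeled ECGs, e.g. a long-QT cohort
y = torch.randint(0, 2, (300, 1)).float()
for _ in range(5):                     # a few epochs on the small head
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    opt.step()
print(float(loss))
```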
A great session. I have a question for Stavros. I think it's important to match the tool to the purpose, and, correct me if I'm wrong, the purpose of your work is to identify non-PV sources? No, the talk was about mapping in atrial fibrillation. But how is that going to impact our patients? Is it about identifying non-PV sources to ablate? Yes, sources that are functionally important in the perpetuation of atrial fibrillation. Not triggers, but perpetuation. Okay, but if it's going to impact our patients, what is the moonshot here? It's very important and very interesting and very cool with the use of AI, but what is the purpose? All funding bodies these days want to know how their dollars are going to impact patients. To identify areas where ablation could work in preventing atrial fibrillation from happening. By identifying non-PV sources. When you say non-PV: the PVs have been implicated in the initiation of atrial fibrillation, the triggering; that's why they work 70% of the time. What we don't know is what reinforces the perpetuation, the continuation, of atrial fibrillation. Everybody agrees that for paroxysmal atrial fibrillation the PVs are important, but what happens in a patient with comorbidities, where the trigger is not the only thing initiating it?

Okay, so I think the answer is yes, because if electrograms could talk, they'd put their hand up and say, I am where you need to ablate, because I am the source of something. Of course, we have wavefronts, and we know that electrograms are generated by wavefronts, by what they do individually and collectively. In these days of high-resolution clinical mapping, we can of course use wavefronts to understand and direct us towards non-PV sources, and we've heard some technologies presented at this meeting that enable us to do that. That doesn't require AI, and so I think to focus on the electrogram, rather than the wavefronts that generate it, is to make life too complicated. So I would argue that the tool is not appropriate for the moonshot, because if the moonshot is identifying non-PV sources, there are simpler ways of doing it that don't require AI. Anyway, that's a rhetorical point.

Well, the only issue with that is that it's a moving target. The wavefront is a moving target; it has a specific wavelength, it has a specific directionality. Sorry to interrupt, but that is to make life too complicated. The sites to ablate are not a moving target. I mean, maybe philosophically... Look, your origins are from the land of philosophy, but if it's all about ablation, we have to assume that there are sites to ablate. And, again, to use a term of relevance to your origins, it's a rhetorical one. I think we should leave it hanging. Thank you.

Time for one quick question; you've been waiting very patiently. Yeah, this will be an easier question. Often, ECGs have some patient information printed on them. I'm curious whether you have any plans to take further information into account within the AI model when analyzing ECGs going forward? So, at the moment, these are all signals. Are you saying, in terms of the age and date of birth that's on the printout? Yes, are there possibilities for the AI to look at those patient demographics and take them into account when reading the ECG as well? That would be a good thing to do. So far, it's the ECG alone; you can add age and sex to the model, and it improves it. But there's not much else often printed on our ECGs, and there are also issues with having patient-identifiable information, so at the moment, we've not gone there yet.

Great, thank you. So, on to our fourth and final speaker, Michael Shehata, who is the Director of EP at Cedars-Sinai Medical Center. Over to you.

Thank you. I'll try to walk us down a little different path here and maybe lighten the mood a bit. This is really about the EP lab space. We've walked through what's happening in the world of ECG analysis and the world of intracardiac analysis, but let's take a step back and ask how novel tools, and potentially AI and machine learning, are going to impact what we do in the EP lab and how we function on a regular basis.

So, what are the current challenges in the lab space? Obviously, we treat a lot of complex arrhythmias, but we're limited a bit by disparate pieces of equipment. We are a very technologically advancing field, so we have a lot of various inputs that don't really communicate with one another, and we use them all in their own siloed spaces.
We also have EMRs that don't really talk with our systems. We're gathering such a huge amount of imaging data, and data in general, that EP labs are typically confined to these different silos; bringing those systems together would be better as we think about the future. And why do we need to think about the future? I think everybody in the room recognizes that the field of EP is growing, and growing substantially. From a procedural aspect, whether in this country or others, we're going to have to do more with the limited workspace we have, develop new spaces, and be a little more creative about how we think about space. This is just the projected growth in a small catchment area around the hospital where I work, with projected growth rates out to 10 years from now. So, it's growing.

In terms of the fundamentals, there are a couple of things I wanted to cover. A lot has been discussed already about the AI and machine learning aspects, but what is the utility in other areas? I'll touch on hardware and infrastructure design. I don't have time to really go over data integration, but we'll talk a little about some workflow improvements. Obviously, there are plenty of existing uses of AI and machine learning in this space: analyzing complex data in innovative ways to help drive improvements in how we function clinically. They've already had a significant impact, as you've seen, in the world of EKGs, in the world of wearables, and in the prediction and diagnosis of arrhythmias. Other places where they've been very useful include cardiac ion channel function and the mapping of atrial fibrillation. This is the question that was asked earlier; I think this is really the holy grail of AF: can we actually get to a higher level of impact for our patients? Hence the idea of looking at spatial patterns of dispersion, which Stavros covered nicely, and the detection of AF drivers and other targets for treatment.

As we move forward, though, one area we haven't touched on today is the world of imaging. Cardiac imaging is a very rich environment for AI, and there's very rapid movement in this field, with over a thousand cleared AI models in imaging. On the panel on the left, this is from a company doing work in the echo imaging space with the Addis software: the ability to now get transmurality indices from MRI and overlay that with imaging technologies during ablation. So, more sophisticated imaging is coming, and the ability to integrate that imaging into our everyday workflow is going to be increasingly important. Modalities in the TEE space are very well adapted to this. There's been the issue of reconstruction into 3D models, but that's also translating now into the intracardiac space: the ability for us, intraoperatively, to acquire multiple 2D planar images that are then reconstructed into a 3D model, and then, even more sophisticated, 4D with time. And so, how is that really being done?
Well, it's the same idea as was shown with ECGs. In the imaging space, the generation of these topographical maps works by looking at an echo image that's fixed in 2D; the machine can then see and annotate areas of chamber anatomy, and that annotation comes very close to what you would produce manually. So the integration of imaging into the EP space is going to be very important as we move forward. This already exists today: the ability to gather multiple two-dimensional images with a current, existing system and reconstruct them into three dimensions. What's coming, I think, is that this will be enhanced and become available in real time for all aspects of chamber morphology. This is just an example of the AI cardio algorithm I showed earlier, looking at ventricular models: the ability to reconstruct a ventricle with an outflow tract, all in real time, within a very short amount of time.

It's no mystery that PFA has dominated the ablative workflow in most of our labs, but how could some of the future directions look in terms of our ability to look at signal? Signal quality, as Stavros covered very nicely: it's all about the quality of the signal that comes in. We're becoming used to the idea that with PFA there is no signal afterwards, but that isn't always the case. Is there something to be seen in the signals we currently look at on a regular basis when we're doing ablation? Post-PFA, you'll see many cases where you've got far-field signal off the most commonly used catheter, the Farawave catheter, and post-ablation there is no signal. But you also find areas where you've ablated numerous times and you'll still see very large, dramatic far-field signal. This will be refined with more near-field signal acquisition, but some of these predictive tools within the recording systems may be able to hone in and say: this is an area where you potentially need to reablate, or reapply, in order to get durable lesions. So lesion durability, I think, will be a big area, and advancements in modern recording are going to make this possible. In the current space, there's been a big push in terms of reduction of environmental noise, signal fidelity improvements, resolution, improved bandwidth and sampling rates, and dramatically improved processing speed. All of that is going to help inform the future of how we do ablation.

Another thing for the EP lab space, which is dominated by us doing procedures on a regular basis, is learning from our surgical colleagues. This was a recent publication, a nice review from the surgical workspace, on how artificial intelligence and machine learning tools can be incorporated, whether in laparoscopic or robotically guided procedures. Moving from that into our procedural areas, because they are so technologically driven, I think there are going to be huge advancements. And I wanted to highlight some of the things that I think are actually going to be useful, starting with mixed reality, or augmented reality. And what is this?
It's really just an overlay of digital tools onto the real-world space using augmented reality, or these types of virtual reality goggles. This is just an example where the operator can now interact with the maps that are created using head movements; they can actually use their fingers to guide and move the image in front of them. So you see the image here. Not to be too futuristic, but I think this is going to come very quickly: something to facilitate how we do our procedures on a regular basis, this ability to add mixed reality into the workplace, and, with head movements, to rotate maps and home in on areas of anatomy. And this, I thought, was interesting: this was just published about a month ago in Circ AE, looking at augmented reality and its use in an EP lab. The operators here used VR headsets, and the ability to guide or move catheters to a particular area, with that digital overlay on their screen, was very useful for guiding with better precision to a spot within the chamber anatomy. So this was a first proof of concept that these systems may be useful in the EP lab space.

Another area where I think improvements are going to be made: remember that the EP space is really global, and there are a lot of places that don't have similar resources. Some places are very resource-rich, others resource-poor. Through the use of some of these virtual platforms, can we actually improve the experience in the lab environment with education, with interaction, with other expert guidance, if you don't have all of that in one place? This is just an example of a type of system that intends to integrate all the pieces of a functional EP lab workspace into one visual modality. This can be broadcast, this can be done remotely, but if you've got all the inputs talking to each other, it really makes a big difference, as opposed to current workflows with so many disparate pieces of equipment. As we start to think about the future, interconnectivity between these systems would be extremely helpful.

The other thing I think is going to be big is remote support. If ambulatory surgical centers are popping up around the world, and mapping procedures are going to be done in those places, then with the level of sophistication of today's maps, the ability to have remote clinical support may also be of use. You can imagine everybody looking at the same thing, with one person in a different state or a different part of the world; these systems can be quite powerful in how we guide and do our procedures.

The other piece, taking a step back outside the AI world, is room configuration, infrastructure, and hardware. More and more spaces are going to have to be multifunctional: the ability to create work environments that are adaptable to EP, to the interventional space, and to the operating room, in one shared space, is an important point. Environmental benefits, I think, are another thing to think about.
So: the use of glass walls and modular ceiling platforms, the ability to interchange, to take a room from being an EP room to another type of room. With limited resources around the country, there's obviously a big movement in this space. From an aesthetics point of view, this is really quite nice: you can create these beautiful labs. And this is important, I think, for patients, for operators, for the staff, and also from a functional perspective. The ability to create spaces like this is another area where we need to start thinking, especially as more and more surgical centers pop up.

I put this image up; one of my fellows and I created it for a publication on the EP lab of the future: the potential incorporation of virtual reality or augmented mixed reality, the ability to create spaces that are highly adaptable to different types of procedures, and then, obviously, the integration piece: bringing different imaging and recording modalities together into a central area. So, with that, I'll finish and just say that, for the future of EP labs, I think it's important to take all of these novel artificial intelligence and machine learning tools and, at the same time, take a step back and think about how we create our infrastructure and our spaces so that we can tackle the growing arrhythmia burden. It's not only about whether we can get better at what we do; the sheer number of patients we need to treat is growing and growing. And with the detection you've heard about with AI-ECG and everything that's going into this, you can imagine that AF detection is only increasing, so the population we're going to be treating is increasing. It's important to leverage all that technology to improve patient outcomes and to improve our workflows and staff environments. That's all I have. Thank you. Finished on time.

Great, thank you very much. Again, we have a minute or so for additional questions from the audience, if anyone would like to ask one. Maybe I'll ask you about the future of the EP lab. In 10 years' time, will we be doing ablations the same way, in SVT doing diagnostic manoeuvres, or will we just know where to ablate from the start? How will it change what we do?

I think it will change, just as it has changed in the 10 to 15 years that I've been in this field. I heard something at the opening plenary session: in terms of our interaction with patients and the documentation aspect, within the next five to 10 years we won't be documenting anything; it'll all just be voice-activated, and we'll talk. You can imagine the same in the EP lab space. Why aren't we already there? There are so many things that require input from somebody, a nurse or a tech. I think our work environments will change; they'll simplify quite a bit. That's my hope.

Excellent. On that positive note, I think we should wrap up; we're about a minute over. Thank you very much to the panel for all the excellent talks. A round of applause. Thank you.
Video Summary
The GE Healthcare-sponsored session on AI-enabling diagnostic tools, chaired by Fu Siong Ng, an electrophysiologist from Imperial College, featured several expert talks. Dr. Dawood Darbar, Chief of Cardiology at UIC, discussed AI-enabled ECG screening for atrial fibrillation (AF) and related cardiovascular conditions. He emphasized the potential of convolutional neural networks (CNNs) in detecting silent AF, asymptomatic LV dysfunction, and genetic forms of AF associated with titin mutations. Despite the promise, he noted challenges such as the models' "black box" nature and false positives.

Following Dr. Darbar, Dr. Stavros Mountantonakis elaborated on mapping atrial fibrillation using AI, emphasizing the importance of understanding electrograms in identifying ablation sites. He highlighted the challenges of signal noise and the need for large datasets to develop effective AI models.

Fu Siong Ng discussed predicting mortality timelines using AI models applied to ECGs. The models provide personalized survival curves by learning patterns from labeled data. They perform well across diverse populations, surpassing traditional risk assessments in predicting outcomes like heart failure and arrhythmias.

Lastly, Michael Shehata focused on the future of the EP lab, emphasizing AI's role in improving workflow and integrating imaging data and patient management tools. He also envisioned advancements in virtual reality to enhance procedural accuracy and efficiency in electrophysiology.

These presentations collectively underscored AI's transformative potential in cardiovascular diagnostics, though challenges like data quality, interpretability, and integration into clinical practice remain significant obstacles to widespread implementation.
Keywords
AI in healthcare
diagnostic tools
atrial fibrillation
convolutional neural networks
electrocardiogram
cardiovascular diagnostics
predictive modeling
electrophysiology
virtual reality
clinical integration