Current and Future State of Arrhythmia Management Registries: Defining Endpoints and Leveraging Large Data
Video Transcription
Welcome to this session on the Current and Future State of Arrhythmia Management Registries: Defining Endpoints and Leveraging Large Data. With me is Luigi Di Biase from Einstein, and we have four outstanding lectures upcoming. The first is Dr. Bunch from the University of Utah, and the title of his talk is Current State of Outcome Measures and Its Limitations.

Chairs and ladies and gentlemen, it's a pleasure to be here. Usually I have the last talk of the session, not the first, so this is a new one for me. Those are Moussa's disclosures; they're more important than mine. So it's a pleasure to talk to you on this topic. I just want to give you a general overview and some of the challenges I see before we really launch into the meat of where outcome measures are being used specifically. These are my disclosures, none of them really relevant here, but I am going to talk about Heartline as illustrative of a certain way we can approach outcome studies.

So what are the types of outcome measures? Well, we have patient-reported outcomes, which capture the patient's perspective on their health and treatment experiences. Clinical outcomes focus on objective measures of health status, such as disease activity, response to therapy, functional ability, et cetera. And I think we're evolving into a realm we can now separate out, which is digital outcome measures, where we utilize wearable devices and other technologies to collect data on physical activity performance, other healthcare metrics, sensor technologies, et cetera.

And I think the Heartline trial is an interesting example of how we can use these in synergy. This trial will likely be reported as a late breaker at American Heart. These were patients enrolled through their smartwatch, with an active intervention: if atrial fibrillation was detected in those over 65, they were sent information on how to optimize their health and what treatments they needed. This was linked to the Medicare database, so you could measure objective outcomes in healthcare utilization. These patients were also sampled through their Apple Watch using disease-specific metrics for how they felt and their symptoms. This gives you some examples of what the Heartline app looks like. Patients can also be rewarded through the app: if they complete everything, they receive Apple bucks, and this also helps guide their therapy. So it's an outcomes study that really involves all three of those outcome measure types, combined and simplified into an app, with in-app consent of patients. I think it's an interesting study design, and I'm anxious, as are you, for the results on how it performs.

So what are the benefits of measuring outcomes? It provides data to support clinical decisions and interventions. It helps with quality improvement and identifies areas for improvement in healthcare delivery. And I think it's really critical now for identification of rare events; you'll see a shift in this session, as in others, where we're approving technologies early, so we have to have these long-term outcomes registries to understand rare events. And it helps with patient-centered care: we need to incorporate the patient's perspectives and preferences in treatment. I want to highlight that, as well as rare events.
And I think we've seen this with Farapulse: this was the pivotal trial at one year, showing roughly 70% effectiveness with PFA, and in that trial there were two strokes. But as these patients were followed, both in Europe and the United States, we noticed an increase in brain events. So we have to marry not only the pivotal trials, making them pragmatic and executable so we can have access to technology, but also the power of long-term data, to really understand efficacy and safety events.

Why do we need to understand the patient's perspective? Well, we've never been really in line with patients in all our atrial fibrillation trials. We've had this arbitrary endpoint of 30 seconds of AFib defining success. But when you ask patients what they would consider successful after their atrial fibrillation ablation, you can see here, in the light green, less than 30 minutes. Some patients are okay if they have a few minutes. Some are okay if they have a day. Some are okay if they went from continuous to paroxysmal. So patients didn't necessarily see 30 seconds as relevant. We have to include what they feel is successful, because they'll ultimately define who engages in these treatments and who seeks care. And you can see frequency as well: once per year, in yellow, to continuously, which was very rare; even up to a few times a week, people felt that was still a successful treatment. This is completely different from what we're doing in our trials.

So what are the comparative benefits and limitations of outcome measures in medicine? Survival rate is clear and objective and speaks to treatment effectiveness, but it doesn't capture quality of life. Quality of life captures the patient perspective and is relevant to chronic conditions, but there's subjectivity to it, and it may not correlate with outcomes. Disease-free survival is most useful in certain specialties and therapies, such as oncology, and it directly implies treatment efficacy, but it focuses solely on recurrence and not overall health, and we've struggled with that with AFib recurrence, and it may not apply to all diseases. Progression-free survival, I think, may be very helpful in AFib; we want to keep people from progressing to persistent or more frequent atrial fibrillation. It can be used to show benefit with new therapies, but it may not correlate with hard long-term outcomes, and there are different ways to interpret progression; we were talking about that briefly before we started. Clinical response is easy to measure, but it may not reflect long-term outcomes and can be influenced by a placebo effect. Adverse events we need for safety, and they can guide our treatment approaches, but they can vary in reporting and interpretation. If you wonder why there were only a few strokes in the CABANA trial: well, if you get 20 neurologists in a room and ask them to agree on what a stroke is, they never agree, and so you don't adjudicate the event. So interpretation and how you define adverse events is critical. As mentioned, patient-reported outcomes are important as we try to understand the patient's experience and satisfaction; they reflect the impact of disease and treatment on daily life, but they can be influenced by bias and by repetition, and they require careful analysis and interpretation as to what's meaningful. I've struggled with that: when you take all this data from the patient, how do you define what's meaningful?
Composite endpoints can provide a more comprehensive view. They're useful in complex interventions, and they can help with power. Our studies are so expensive that the only way to get the sample size down is either to enrich the population with sicker patients or to use composite endpoints, so the trial is pragmatic and one you can actually execute. But they can complicate interpretation and may mask important individual outcomes. For example, if your composite is completely carried by heart failure hospitalization, you may not learn as much about stroke or myocardial infarction, et cetera, in an AFib trial. And biomarkers are objective and measurable and can provide early indications of efficacy, but they're not always easy to translate to clinical outcomes, they require validation in clinical trials and prospective evaluation, and they can vary with the assay used, which can make it difficult to go from one trial to another.

As we consider all these things, I think we also have to be careful about our bias, and I think this is really interesting in the recent studies with cryoablation and our confirmation bias about what we know or hope is true. In an early-ablation trial of patients around 60 years of age and about 63% male, cryoablation success at one year was 74%. In the ADVENT trial, cryoablation, shown here in green, was about the same, with an age of 62 and early-onset disease. One of those trials was funded by Medtronic and the other by Farapulse. Then we had a recent study with a similar population and similar male sex, and now cryoablation performs in the high 40s to low 50s compared to pulsed field ablation. This could be a true signal. It could be the monitoring we're using. But it could also reflect a newer technology and confirmation bias, where the outlier is not how well PFA did but how poorly cryo did.

So what are the future directions? We can improve standardization and develop universal definitions; we're working on that with an ARC document with the FDA, so we can make our trials more relevant to you and your practice. We can enhance data capture and leverage AI and big data analytics. We can engage our patients from the onset, so we select relevant outcomes and aren't wedded to 30 seconds of AFib that was meaningless to our patients. We can focus on real-world data, collecting data in natural settings rather than relying solely on clinic and hospital visits; the Heartline trial will inform on that, and on whether we can do it with smart technologies that are commonly used in our communities. And we can integrate patient performance data, combining patient-reported data with objective performance data from wearable devices, and use that as an endpoint of interest for our patients, our therapies, and, hopefully, outcomes. Thank you very much.

Okay, thank you very much. We'll move to the next speaker. It's my pleasure to invite Dr. Mansour from MGH to talk about the real-life performance of arrhythmia registries and their limitations.

Thank you, Luigi, for the introduction, and thank you, Dr. Calkins, for chairing this session, and thank you all for attending this presentation. So I'm going to talk about real-life performance of arrhythmia registries and their limitations. The topics I will cover today are the following: What are the objectives of an ideal registry?
I'll talk about factors affecting the success of registries, and then I'll give an overview of the performance of some arrhythmia registries.

So, the objectives of an ideal registry. The first objective is to enhance quality of care related to complications, clinical outcomes, and patient-reported outcomes. The second is to facilitate research and innovation by providing a robust data collection platform, and to provide tools and reports that enable participating hospitals and clinicians to benchmark their performance against national data. And finally, it could potentially be used for post-marketing regulatory outcome studies.

Now that we've covered the objectives, what are the factors affecting the success of a registry, and how can a registry meet those objectives? One important thing is ease of data collection and entry, because if it's difficult to collect data, you're not going to have full data sets on the patients. Ease of data analysis: you need a place where the data is stored and can be analyzed using advanced tools. Participation: if you don't have hospitals, physicians, and patients participating, you're not going to reach the goal you're aiming for. Accessibility: participating centers should be able to access their data and potentially contribute to scientific research. And an ideal registry should have measurable effects on the quality of care.

So let's look at the registries that are available and how they perform. The most used registries are the ones from the NCDR program, the National Cardiovascular Data Registries. It's a comprehensive registry program managed by the American College of Cardiology, and it contains a number of registries designed to help hospitals and practices measure and improve the quality of their cardiovascular care. When it comes to arrhythmia, there are more than three, but the three most important ones are the left atrial appendage occlusion registry, the AFib ablation registry, and the EP device implant registry. For the sake of time, I will talk about the two highlighted ones, the appendage occlusion registry and the AFib ablation registry.

We start with the appendage occlusion registry. It was initiated in 2016, after the Watchman FDA approval, which occurred a few months earlier. The aim was to better understand the utilization, safety, and effectiveness of left atrial appendage device closure in real-world clinical practice, and it was a collaborative effort among the ACC, SCAI, CMS, the FDA, and Boston Scientific, the manufacturer of the device. So let's see what this registry did. This is the cumulative number of procedures for patients enrolled in the registry, and as of July 2024 there were a large number of patients, 425,000, indicating good participation. That, again, is one of the parameters of success I talked about earlier, and here we have good participation. These are the numbers of physicians and hospitals participating, also large numbers: 3,222 physicians and 884 hospitals.
And if you look at the percentage of appendage closure procedures captured, it's 98%, a very large number: 98% of left atrial appendage procedures are captured. Probably the most important reason for this high level of participation is that this is a mandatory registry; CMS mandates that to get paid, you have to participate, and the fee to participate is $15,000 paid annually.

So we talked about participation. Let's talk now about the effect on quality of care, and one way to measure that is to look at the scientific productivity of the NCDR Watchman registry. Here are the numbers of publications: two in 2021, eight in 2022, six in 2023, and seven in 2024. So it is a decent number, although we would hope for more out of a 400,000-patient registry, but this is what we have. And what about enhancing quality of care? This is one example where the NCDR left atrial appendage registry helped: a study from the registry showed that discharging patients on a DOAC alone or warfarin alone was associated with a lower rate of adverse events compared to DOAC plus aspirin. This is important; it helped many of us change our practice, because this antithrombotic regimen was not studied in the IDE studies of the devices.

What about data entry? Data is collected at 45 days, six months, one year, and two years, and it consists of 220 data points collected at baseline and 60 collected at six, 12, and 24 months. We talked about participation and the high number of patients, but if you look at how data is collected, that's a problem for this registry, because it's all manual. Those of you who participate in it know that on the best day it takes about 30 minutes to enter the information at implant, and follow-up entry has significant variability: it can be as short as five minutes or longer than one hour, depending on the number of events per patient and on whether the patient's data is in the same electronic medical record or somewhere else. Say your site uses Epic and the patient went to another hospital that doesn't use Epic; good luck finding the information on that patient. You may end up spending an hour to get any information. And because it's all manual, to know when the six-month follow-up for a patient is due, you have to create your own calendar; you don't get automatic reminders telling you that the six-month follow-up window has opened. These are all disadvantages of this way of entering data.

What about accessibility and data analysis? If you're interested in a project using NCDR data, you have to apply through a research proposal portal, and it's a process you have to go through, but the problem is that limited resources and limited funding for this database limit the number of projects approved every year. For example, in 2025, I think only one or two projects were approved, and that is a limitation when you have 425,000 patients with data sets on them.

So if you want to grade the overall performance of the NCDR registry for left atrial appendage closure, and this is my grading based on the data I showed you, one plus is the lowest and three pluses the highest. Participation is very good; I don't think you can beat 98%. Enhancing quality of care, we've seen some publications.
We'd like to see more. Accessibility is not very good, mostly because of limited resources, and ease of data entry is also limited because it takes a lot of manual effort. What would probably help is some type of automatic data entry system that would populate the database automatically.

The second registry I will cover is the NCDR AFib ablation registry. Planning for it started in 2009. One of its problems is limited follow-up on patients, which made the registry poorly suited for evaluating effectiveness; it's mostly a safety data set. Stakeholder support was also insufficient, which delayed its launch until 2016. If you look at scientific publications, the number is very small: two in 2017, one each in 2020 and 2021, and one in 2023. That's a small number of publications for the amount of effort placed in such a registry. The other major limitation is its voluntary nature; there are limited incentives for centers to participate. Only 162 hospitals take part, less than one third of the 495 included in the mandatory left atrial appendage registry we discussed earlier. There's also limited or no information on many important items, because either the questions were not asked (such as ablation energy source or mapping system) or the questions were infrequently answered (echo measurements, quality of life, and others). So it's a significantly limited database, and that's why we saw very few publications out of it.

In conclusion, I believe registries are critical for enhancing quality of care related to complications, clinical outcomes, and patient-reported outcomes, and for facilitating research and innovation by providing a robust data collection platform. The current arrhythmia registries are limited by a lack of resources and a deficiency of advanced technological tools, which ideally would allow fast and complete data acquisition and analysis. Thank you for your attention.

Since Dr. Mansour has to leave for an overlapping session, do we have any questions for him? If not, one quick question from me and then you can go. You clearly explained the current limitations, and a second limitation, for me, is that this is self-reporting: people may avoid reporting to you. Do you think an AI platform will eliminate the ability to avoid self-reporting, or do you think people can still avoid reporting what they don't want to report?

Yeah, I think having an automated data reporting system, as I mentioned, will overcome that limitation, because you will have a data extraction platform that automatically gets access to the database and takes whatever it has access to. It's not going to pick and choose what to extract. So I think automated population of the data is critical for the success of a registry.

Yeah, I think this is very important, because in addition to the manpower we don't have, one of the reasons is that people tend to under-report complications. With an automatic system, you scheduled three cases yesterday, so there are three cases of data that need to be extracted, and somebody will review them. I think this is very important. I don't know if there are any other questions. No, the striking thing is the difference in enrollment between the two registries you reviewed, one where participation was mandatory and one where it was optional. I think that says it all. Okay, anyhow, thank you, Moussa. Thank you.
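As an illustration of the automated data capture both speakers are describing, here is a minimal sketch in Python of an extraction job that enumerates every procedure found in an EMR export, creates one registry record per procedure (so cases cannot be selectively omitted), and derives the 45-day, 6-, 12-, and 24-month follow-up windows mentioned above. The field names (mrn, procedure_date, discharge_meds) and export format are hypothetical; this is not the NCDR submission format.

```python
# Minimal sketch: automated registry population from a hypothetical EMR export.
from dataclasses import dataclass, field
from datetime import date, timedelta

FOLLOW_UP_DAYS = [45, 182, 365, 730]  # 45 days, ~6, 12, and 24 months

@dataclass
class RegistryRecord:
    mrn: str
    procedure_date: date
    device: str
    discharge_antithrombotics: list[str]
    follow_up_windows: list[date] = field(default_factory=list)

def build_records(emr_procedures: list[dict]) -> list[RegistryRecord]:
    """One record per procedure in the export -- no manual triage, no selective reporting."""
    records = []
    for p in emr_procedures:                      # every exported row, without exception
        rec = RegistryRecord(
            mrn=p["mrn"],
            procedure_date=p["procedure_date"],
            device=p.get("device", "unknown"),
            discharge_antithrombotics=p.get("discharge_meds", []),
        )
        # Derive the follow-up calendar instead of asking staff to track it by hand.
        rec.follow_up_windows = [rec.procedure_date + timedelta(days=d) for d in FOLLOW_UP_DAYS]
        records.append(rec)
    return records

if __name__ == "__main__":
    demo = [{"mrn": "A001", "procedure_date": date(2024, 7, 1),
             "device": "LAAO", "discharge_meds": ["DOAC"]}]
    for r in build_records(demo):
        print(r.mrn, [w.isoformat() for w in r.follow_up_windows])
```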
Our next lecture will be by David Frankel from the University of Pennsylvania, and the title of his talk is EP Collaboration and Standardizing Outcome Measures in Arrhythmia Research. David, thank you.

Okay, thank you, Hugh. Good afternoon, everyone. Can you reload my slides, please? Okay, I'll start with a little bit of background. I'll be talking about standardizing outcome measures in arrhythmia research, and it's a short talk, so there are only two items to cover. The first is the need for more relevant endpoints, which Jared and Moussa both touched on in their talks, and then I want to explain what HRS is doing to meet this need.

This is the traditional endpoint for a VT ablation trial and for an AF ablation trial. This is data from the IVTCC, the International VT Center Collaborative, looking at 2,000 or so patients with structural heart disease who underwent VT ablation, and we see VT-free survival over the following year. Of course, there's a lot more richness in the data than just VT-free survival. We all get that the patients with no recurrences afterwards had a large benefit from ablation. But the patients who had 10, 20, maybe as many as 100 episodes of VT in the six months before ablation, and who then have one recurrence in the six months after ablation, maybe even pace-terminated, were still tremendously helped by the procedure; yet on the traditional Kaplan-Meier curve, they're a failure. We know that ablation can be effective for VT storm, which is a very traumatic experience for patients. We know that successful VT ablation is associated with improved survival, and we know that medications can be stopped after a successful ablation, and some of those medications have significant side effects and long-term toxicities in and of themselves, so that's an important thing you're doing for patients as well. I think all of these are important outcomes to consider when we're judging the success of a VT ablation procedure.

And of course, the exact same factors come into play for atrial fibrillation. As Jared was saying, it's about a lot more than 30 seconds of AF defining failure. We do AF ablation to improve patient quality of life. This is data from CABANA showing that ablation improves quality of life by far more than drug therapy, and AF burden is extremely important for quality of life and for healthcare utilization. This is CIRCA-DOSE data showing that as AF burden ranges from 0% to greater than 5%, there's a graded increase in the likelihood of visiting the emergency room, being hospitalized, and all sorts of healthcare utilization. And we know from CASTLE-AF and other studies that in patients with reduced ejection fraction, there's a mortality benefit to AF ablation. So there's much more to consider in terms of the best endpoints than simple AF-free or VT-free survival.

So, part two: what is HRS doing about this? The EP Collaboratory, which Jared is participating in, is a joint initiative of the Medical Device Innovation Consortium, the Stanford Center for Arrhythmia Research, and the FDA, and the goal is to enhance the efficiency and effectiveness of early feasibility studies in EP.
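To make the burden-versus-30-seconds distinction above concrete, here is a minimal sketch assuming continuous monitoring that yields a list of AF episodes (start time, duration in seconds). It computes AF burden as the percentage of monitored time spent in AF and, for contrast, the classic time-to-first-recurrence endpoint, using a conventional 90-day blanking window. The episode data and function names are illustrative and are not drawn from any of the trials cited.

```python
# Minimal sketch: AF burden vs. ">=30 seconds to first recurrence" from an episode log.
from datetime import datetime, timedelta

def af_burden_pct(episodes, monitor_start, monitor_end):
    """AF burden = time in AF / total monitored time * 100."""
    monitored = (monitor_end - monitor_start).total_seconds()
    in_af = sum(duration for _, duration in episodes)
    return 100.0 * in_af / monitored

def first_recurrence(episodes, blanking_end, min_seconds=30):
    """Classic endpoint: first episode >= 30 s after the blanking period."""
    for start, duration in sorted(episodes):
        if start >= blanking_end and duration >= min_seconds:
            return start
    return None

if __name__ == "__main__":
    t0 = datetime(2025, 1, 1)
    t1 = t0 + timedelta(days=365)
    episodes = [(t0 + timedelta(days=120), 45),        # one 45-second episode
                (t0 + timedelta(days=200), 6 * 3600)]  # one 6-hour episode
    # This patient "fails" the 30-second endpoint yet carries a burden well under 0.1%.
    print(round(af_burden_pct(episodes, t0, t1), 3), "% burden")
    print("first recurrence:", first_recurrence(episodes, t0 + timedelta(days=90)))
```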
And among the first projects, the Academic Research Consortium is writing a document about endpoints for AF ablation. Why is it important that we standardize this? For several reasons. Number one, with the deluge of pulsed field catheters and all these new innovations, it lets us compare apples to apples. If everyone is using AF burden as defined by continuous monitoring, then you can really compare the results of one study to the next and make judgments about what might be best for your patients. Number two, we're helping the FDA develop a set of meaningful, relevant endpoints to use in evaluating new products. I chair the Scientific Documents Committee, and one of the documents we recently launched is Clinical Trial Methods and Endpoints for Treatment of Ventricular Arrhythmias, which we're fortunate John Sapp will be chairing. Again, the concept is the same: define the most meaningful endpoints for VT ablation trials, so they can be used in studies going forward. And we have a great writing committee with a broad range of expertise in ablation devices, pharmacology of antiarrhythmic drugs, et cetera. So in conclusion, we really need to standardize meaningful outcomes in EP research, both for regulatory purposes and for our own purposes in judging the effectiveness of new interventions, and HRS is playing a large part in this effort. Thank you.

And for the last talk, it's my pleasure to invite Dr. Lakkireddy. He will talk about how to leverage leapfrog computing to strengthen clinical registries.

Thank you, Chairman. This is actually a combined talk: Dr. Ellenbogen couldn't be here, so I basically combined my talk and his talk as they were initially listed on the agenda. So, leveraging leapfrog computing to create an HRS PFA registry: a step forward in democratizing quality, access, and performance. A pretty long title, and what it means is we don't know what the heck we're doing and we need to do something better.

When you really look at conventional registry constructs, NCDR comes to everybody's mind as one of the popular registries that has been created. It relies on structured data capture from EMRs, and oftentimes there is manual abstraction by clinical teams to fill in the data. After quality checks, that data sits in a central repository, and then, over time, upon requests from individuals, institutions, and organizations, bits and pieces of the data get analyzed by statistical teams. You look at patterns and trends, and those become papers; they also sometimes set reimbursement and health policy, and they enable us to make gross observations about trends in how clinical medicine evolves.

The typical data elements in a structured registry like this involve patient demographics. If you take atrial fibrillation as an example, you have the classification of atrial fibrillation, and we all understand how problematic this whole paroxysmal, persistent, and permanent classification is. There is a lot of esotericity to it, and many times, when you dig deeper, you realize these definitions perhaps need to be tightened up a little more than they are today. Then you can get information on medications, comorbid conditions, and the clinical outcomes we look for: stroke, bleeding, mortality.
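Picking up the paroxysmal-versus-persistent point above: one simple check an automated platform could run is to compare the documented AF type against the longest continuously monitored episode, using the guideline-style convention that paroxysmal AF terminates within seven days of onset. A minimal sketch follows; the episode data and field names are hypothetical.

```python
# Minimal sketch: flag documented AF labels that monitoring data contradicts.
from datetime import timedelta

PAROXYSMAL_LIMIT = timedelta(days=7)  # paroxysmal AF terminates within 7 days of onset

def check_af_label(documented_type: str, episode_durations: list[timedelta]) -> str:
    """Return the documented label, or a review flag if monitoring contradicts it."""
    longest = max(episode_durations, default=timedelta(0))
    if documented_type == "paroxysmal" and longest > PAROXYSMAL_LIMIT:
        return f"FLAG: longest episode {longest} -> likely persistent, review the label"
    return documented_type

if __name__ == "__main__":
    print(check_af_label("paroxysmal", [timedelta(hours=6), timedelta(days=12)]))
    print(check_af_label("paroxysmal", [timedelta(minutes=40)]))
```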
A lot of this is manually abstracted at this point in time, because continuous automated data extraction doesn't happen in any of today's registries. So we depend entirely on database registry managers in every facility, and that's a very cumbersome process. Multiply that by the heterogeneity of EMR systems across the country, or scale it up across the globe, and what you're dealing with is a virtual nightmare. That's where the old adage comes from: garbage in, garbage out. Oftentimes, when data gets published from these large registries, people shrug their shoulders and say, okay, take it with a pinch of salt. Can we change that paradigm?

So the new paradigm is these ever-exciting multimodal large language models, which can drive the EP data sets for AFib management. That's the reason why, when Dr. Mansour uses the word platform, I think platform is the better word: all the information we put into this massive data repository should have the flexibility to let us look at the data in any number of ways. Consider the following multimodal data, integrated automatically by modern EMRs and AI platforms. For clinical conversations, you now have automated transcription that can generate your notes, with Med-PaLM and multimodal GPTs like ChatGPT-4. You have surface EKGs and wearables, the Apple Watches, the AliveCors, the Vivos of the world, which can spit out far more information today than was available a few years ago. You have the intracardiac electrograms, which we are all very familiar with and can do a lot with. Then the EP mapping data, the imaging data, the genomic and biomarker data. So you have silos of this information on any given patient for a given clinical condition. How do you integrate these multiple modes of information flowing into electronic medical records, which are available to us but which we unfortunately have no way of stitching together?

Take the example of a patient undergoing AFib ablation in a typical EP lab: how do a traditional and a multimodal registry compare? A patient comes in for AFib ablation. The clinician manually documents the EMR structured fields. In your H&P you say it's persistent AFib or paroxysmal AFib, and much of the time, I would say about 50 to 60% of the time, that paroxysmal-versus-persistent label is a wrong diagnosis: everybody gets called paroxysmal atrial fibrillation, when in reality a lot of these patients are early persistent AFib. How many times have you looked at it, smiled, and just moved on, because you really don't have time to think about it twice? That one thing you ignore when re-evaluating the patient's initial diagnosis gets perpetuated into these massive national databases, and that has a humongous impact in many different ways. These are basic, fundamental things we don't think about twice, but life goes on, and that's why I say garbage in, garbage out. Post-procedural outcomes are coded manually: acute success, isolation, complication rates.
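A purely illustrative sketch of what one record in such a multimodal registry might hold: the familiar structured core plus pointers to the raw artifacts just listed (note transcripts, surface ECG and wearable strips, intracardiac electrograms, mapping exports, imaging, genomics). The field names and the store-artifacts-by-URI layout are assumptions, not any existing registry's schema.

```python
# Illustrative sketch of a multimodal registry record (hypothetical schema).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StructuredCore:
    age: int
    sex: str
    af_type: str                  # paroxysmal / persistent / long-standing persistent
    medications: list[str]
    comorbidities: list[str]
    outcomes: dict                # e.g. {"stroke": False, "bleeding": False}

@dataclass
class MultimodalRecord:
    core: StructuredCore
    note_transcripts: list[str] = field(default_factory=list)     # LLM-generated notes
    surface_ecg_uris: list[str] = field(default_factory=list)     # 12-lead and wearable strips
    intracardiac_egm_uris: list[str] = field(default_factory=list)
    mapping_export_uri: Optional[str] = None                      # EP mapping system export
    imaging_uris: list[str] = field(default_factory=list)         # MRI scar/fibrosis, CT, echo
    genomic_report_uri: Optional[str] = None

if __name__ == "__main__":
    rec = MultimodalRecord(
        core=StructuredCore(67, "F", "persistent", ["apixaban"], ["hypertension"],
                            {"stroke": False, "bleeding": False}),
        imaging_uris=["file:///imaging/mri_lge_0001.nii"],
    )
    print(rec.core.af_type, "-", len(rec.imaging_uris), "imaging artifact(s) attached")
```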
There really are not many web crawlers, not many EMR crawlers, that are effective at this point at picking up that nuance from an operator's notes. And if you look at the heterogeneity of EMRs, you have DOS-based EMR systems like Meditech, which I have the unique privilege of working with in our system, up to the most advanced, or at least they claim to be advanced, Epic EMR systems. With this amount of heterogeneity, how in the world do you expect anybody to create a system that can uniformly capture the data so we can get to where we need to go? Abstractors later extract structured fields and compile them into a registry, and the data is largely numeric and static; it lacks detailed EP signals or waveforms, and there's no imaging data. So from a clinical research perspective, it's often retrospective, limited by rigid structured fields, and missing lots of vital, detailed EP and imaging context. Our understanding of the evolution of disease, and the finer points we really need to dig out of this, often gets lost in translation.

Now take a new multimodal EP registry. This is aspirational, but with the tools we have, I think it's something we can really get to. Clinical interaction captured by a multimodal LLM, with significantly improved accuracy in capturing the conversation, so the comorbidity profile and the patient-specific clinical profile are documented accurately. EP mapping and ablation data with AI-driven analytics, which I think would be significantly valuable in this area. Surface EKGs and wearable data coming in during follow-up to look at recurrences of arrhythmia: is it 10 seconds of AFib recurrence, or 30 days of AFib recurrence, or 30% AFib burden? The nuance of looking at time to first recurrence and at AFib burden is important for us to sort out when we talk about definitions and how we measure outcomes; refining those things really matters. Multimodal imaging, like cardiac MRI with data on myocardial scarring and fibrosis, is important to think about. And then how you automatically update these registries becomes an important point of conversation.

As the semiconductor revolution, or evolution, took off, it really drove deep learning advancements in many ways. Between 2008 and 2025, in less than 17 years, the world has seen an incredible increase in computational capacity, over 2,000-fold. There is this term, floating point operations per second; I didn't know exactly what it was until two weeks ago when I was putting this talk together, but FLOPS is an important measure of the computational power of existing computers. As you can see here, from 2008 to 2025 we saw about a 2,000-fold improvement in computational capacity. That changed the way this data can be handled, and it enables us to do a lot of things we were not able to do in the past.
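A quick back-of-the-envelope check on the growth figure quoted above (a roughly 2,000-fold increase in compute between 2008 and 2025, i.e., 17 years): the implied annual growth factor and doubling time can be derived directly, as sketched below. The 2,000-fold and 17-year inputs are taken from the talk as stated; nothing else is assumed.

```python
# Back-of-the-envelope: what a 2,000x increase over 17 years implies.
import math

fold, years = 2000.0, 17.0
annual_factor = fold ** (1 / years)                    # ~1.56x growth per year
doubling_time = math.log(2) / math.log(annual_factor)  # ~1.6 years to double

print(f"implied annual growth factor: {annual_factor:.2f}x")
print(f"implied doubling time: {doubling_time:.1f} years")
```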
So, from EMRs to multimodal clinical registries: look at the improvement in computational power, the milestones in the computing world, the EMR milestones, and then the inflection points. There are several inflection points that really established the value of electronic medical records, which did not exist in the United States in a mandated fashion while the rest of the Western world, the National Health Service, the Danish and German registries, was far more advanced than we were, up until the EMR as a concept was mandated in the United States. I would definitely call that an inflection point: the HITECH Act accelerating EMR adoption. Several subsidies were given, a lot of EMR companies came into existence, and the rest is history; Epic is one example of that. Then the GPU explosion enhanced EMR analytics, and somewhere around the early part of 2025 you have this whole generative AI wave transforming EMRs: automated transcription, automated summarization and collation, and dashboards you can sit in front of when seeing a patient that give you a collated summary of the patient's entire history, showing trends in ejection fraction, trends in rhythm control, percentage pacing. The incredible revolution in handling this much data, and in presenting it to us in a form that's easy to read and navigate, has changed the playing field to a great degree.

So what does this futuristic HRS pulsed field ablation registry look like? AI and machine learning are buzzwords, but as much of a buzzword as it is, it's the ground reality we live in; a lot of what we do involves machine learning, and we need to harness it. This AI-powered, next-generation solution is what I would call the HRS pulsed field ablation registry, where you could actually democratize the data, its quality, its access, and its performance. The quality of data input is poorly regulated at the documentation source point; how could you change that? Anytime there's manual involvement, there's always an element of data corruption impacting the reliability of outcomes data, so laying down the foundations of how we define outcomes, enhancing optimization, and minimizing manual input is very important.

Access is an important point. Open access to institutional data is very difficult right now. We at Hospital Corporation of America, with 198 hospitals, spend several million dollars inputting data into these NCDR registries with zero access to the global data that we contribute to. Yes, we have access to the data we put in ourselves, in a very roundabout way, but we have no access to anything else, despite being among the most studious soldiers feeding this NCDR registry in many different ways. So access to the data, creating transparency in the process of data requests, and enhancing collaboration among institutions is key. This was one of the foundational problems with many of the registries in place today, and we have to fix that.
And how do you enhance performance and enable institutional quality metrics, so institutions can look at their own performance and deploy process changes that drive appropriate improvements in patient care? That's the whole point of having these kinds of clinical registries. Then, how do you do a deeper dive into disease evolution? This is an area the existing registries haven't really helped with. How does a patient who comes in with PACs and simple paroxysms of atrial fibrillation, over a period of 10 years, develop a really bad persistent form of atrial fibrillation with significant atrial dilatation, functional mitral regurgitation, non-ischemic cardiomyopathy, and everything else that unveils later? That disease evolution is an important piece we have to get our hands on. Post-approval device performance is another area these tools could help with, and short- and long-term outcomes are equally relevant.

So the HRS PFA registry should be innovative, powerful, and flexible enough to remain adaptable to ever-changing computing power and the science behind it. The core objectives are, obviously, enhancing clinical practice and quality of care related to complications, clinical outcomes, and patient-reported outcomes. We should facilitate research and innovation in a very transparent way by providing tools and reports that enable participating institutions and clinicians to benchmark their own performance against national data and access insights from across institutions. And it should also support post-market surveillance, as we discussed earlier.

Operationally, yes, this is a pretty big lift for an organization that hasn't run registries at this scale. But everything has to start at some point. The way I look at it, the electrophysiologists are members of the Heart Rhythm Society, the field is pulsed field ablation treating atrial fibrillation, and who is better suited and positioned than the Heart Rhythm Society to run this type of registry? So this is the right opportunity and the right moment for HRS to embrace this and move forward. We should also enable longitudinal tracking; this is actually one of the biggest limitations of pretty much every registry that has come into play. Integrating diverse data sources is very important here, along with generating actionable information that can help us improve quality. And last but not least, price it for affordability. These things cost a lot of money, and in the ever-shrinking budget of every hospital, the last thing a CFO wants to do is spend yet another $100,000 on a database manager plus the fee it takes to be part of one of these registries. We have to keep them price-competitive. And considering future adaptability, I think, is the key here.

So, looking at multimodal, generative AI platforms that can enable rapid development of software as a medical device: this is a new concept, SaMD, software as a medical device, or software as a therapeutic intervention, and I think it's a very important one.
When you look at the incremental improvement in the outcomes of a disease as you move from the molecule to the cell, to the tissue, to the organ, to the region, to the overall central nervous system, there are several interventions along that pathway, each with a very small incremental improvement in overall outcomes for patients. If you change the process, if you can improve access to care, if you can better understand how disease evolves in these patients, the impact you can create on disease outcomes is significantly larger, and also much more sustainable, than the little things we usually think about. Not that the other things aren't important, but how do you really change process flows and the management of patients at a bigger level?

This is a classic example of how multimodal, generative AI could be helpful: from the EHR to EKGs, external monitoring, cardiac imaging, transthoracic echo, fluoro data, EGM data, voltage data, and many other things that go into the mix. By putting those pieces of the puzzle together, you can learn from the past, anticipate a live intervention, perform the live intervention, and follow the outcomes for these patients to optimize their care in a big way. And this, again aspirational, is an example of an application enabled by a multimodal registry, what I call the electrophysiology co-pilot, like the Microsoft 365 Copilot. The doctor walks into the room, multiple modalities of information pop up, and you want to know the patient's left atrial size, whether there's scar on the MRI, what the ejection fraction is, whether the patient has HFpEF or HFrEF, how many procedures the patient has had, whether there's a family history of atrial fibrillation. All of that information collated into a single platform that lets you change your ablation strategy, I think, will become a ground reality at some point down the road. And we want the Heart Rhythm Society, and the HRS-driven, AI-powered HRS PFA registry, to be a foundation for that. So thank you very much for this opportunity.

Okay, we have 20 minutes for questions. Yes, go ahead.

Hi, is the mic on? Okay. Hi, I'm Alex Kushner. I'm an EP at NYU, and I'm part of the Heart Rhythm Society Research Committee; we met this morning, and Ken presented some of the ideas here. I had a couple of questions. The first is regarding ablation outcomes: shouldn't the outcomes differ by trial design? If it's an invasive versus a pharmacological treatment, the bias of a patient being unblinded to what they're getting can have a large impact on a quality-of-life outcome, versus an invasive trial where you're randomizing patients to posterior wall versus veins only, where a quality-of-life endpoint obviously has more meaning. That's question one. And then the other question, for Dr. Lakkireddy, is about these registries. The biggest concern I have with a registry is what we saw today.
We have the left atrial appendage registry with hundreds of thousands of patients and nine papers, and the reason, probably, is that there's a limit to what you can collect ahead of time when predicting what the questions will be years later. What you end up with is these large biorepositories of data, and then you come to ask a question only to realize you're missing something you should have collected at the beginning. So you invest this huge amount of effort into collecting data only to realize it's not what you need. And to make matters more complex, take a PFA registry: everyone's using Farapulse from Boston Scientific, and we design a whole registry around that; two years later Volt is the newest thing, and we don't really have anything about it. So how do you even design a registry in an era of evolving technology? And third, as an investigator at NYU, I have data, I publish papers. Now all of a sudden I start contributing my data to a registry, I'm the 17th author on a paper, and what happens to my career? How do you encourage people to participate when ownership and the ability to get recognition for your contributions become an issue? Thank you.

Great questions. On the first question, about how you standardize outcome definitions and the differences between a drug trial and a device trial, I would ask the most senior clinical trialist on this panel, Dr. Calkins, to weigh in. Dr. Calkins, you've been part of many of these amazing trials over your career; what's your take?

Well, what I'll say is I'm really excited about the new efforts being put in to revamp clinical trials, and I know I've always been criticized for the 30-second endpoint for AF recurrence after ablation in these FDA trials to get devices approved. But the reality is that it has worked beautifully in terms of the tools that have come to market. These trials have been done, people understand the rules, everyone agrees on the endpoints. They aren't perfect. I think the big shift to AF burden and some of the other more meaningful outcomes is very exciting. But the more you discuss it and think about it, the more challenging it becomes. Is it paroxysmal? Is it persistent? How do you monitor AF burden? Do you use an implantable monitor, or a two-week Holter, or something else? Anyhow, the more we dig into it, the more complex it gets. I'm delighted that the Heart Rhythm Society is really stepping up and leading some of these efforts, and the idea of a PFA registry, I think, is very, very exciting. But then the issue becomes: who's paying for it, who's putting in the time, how do you get access to the data, what motivates people to participate? All of these are very real questions as you try to get it launched. Clearly, if you tie it to reimbursement, as we've learned from the defibrillator registry and the Watchman registry, that's the way to get every patient accounted for. And it's sort of curious about the FDA: you would think they'd be more rigid and say, we're going to approve this new catheter, whether it's a PFA catheter or whatever, but there needs to be a mandatory registry to really look at outcomes. We've heard about a higher rate of ACE lesions, asymptomatic strokes, with PFA catheters; well, what does that mean in the long term?
Anyhow, it's sort of a fascinating discussion, and all I can say is, DJ and all of you on this panel, David, I'm glad you're working on it. I'm sure you're going to come up with some brilliant solutions to a really tough problem.

Thank you, Hugh, that was an amazing answer. The main reason for doing a registry, and why HRS wants to create this PFA platform, is of course to gather non-industry-oriented data, data that can be utilized by smaller institutions, or by institutions like NYU that have no way to report their data or do not report their data. The data needs to cover the type of patients enrolled, the procedures, the complications. Original research ideas could be requested through the registry, but the data belong to the institution, so the data from NYU will be available to NYU for any form of research you want to do. Some complications may arise not with that part but when you want to use the full registry database; for that, of course, a publications committee for the registry data will need to be created, and you'd need to submit an idea. I see the challenge there, I understand, but consider that your own hospital's data will be at your disposal through this registry for any type of research you would like to do. That answers just the last of your questions; the others will be addressed by the other panelists.

Sure, I can address one of the other ones, about making the registry work for you. You have to invest in it and get the right data elements in there, otherwise it won't be useful. For example, our Penn AF registry, which we used to house internally, is now part of the AHA Get With The Guidelines AFib, and the original data elements were not very useful for that purpose, so we basically redid the entire registry. Now it has how much isoproterenol you gave, where the non-pulmonary-vein triggers came from, what energy sources, how many lesions. All of the data we used to collect internally is now part of Get With The Guidelines. So the point is, you'll get out of it what you put into it, and building the right data collection is an essential step.

Yeah, I would agree. We struggled with that with the AFib registry: when everybody put in their input, thinking in a forward manner, and they all of course had their personal interests in outcomes, it was like 20 pages long. Then it was, well, we need to make this pragmatic so people will use it; cut it to two pages, so everybody tried to make the font smaller. You end up with very straightforward things that you think are too basic. But it does lend itself to being complementary to personal investigation. I was part of IVTCC when it was just a few of us talking and saying, you know, with all the VT trials of 50 or 100 patients, Penn was clearly leading the way, but we got together and said, let's just add all our data together. And as Luigi said, we put together a publications committee, and if you contributed, say, 20 to 30 cases, you could submit your idea and lead the paper for it, and it became a great resource that I was happy to be part of. Actually, close to 20 papers came out of that registry. Yeah, not mine, right?
And it's ongoing, so I think there's value in that, and the relationships you build working with people are tremendous, even if you're kind of a middle author. But I do think there's a role for early contribution and early participation, and then forwarding your ideas to make things better. Your ideas will ultimately make that two-page cutoff next time in the AFib registry, because these things can't be static; our field's not static, so the registries can't be.

Yeah, so I'll address your second question. Jared touched on it to some degree. The whole problem is that procedures continue to evolve, technology evolves, and the technology that captures these data points also evolves. So creating a level of flexibility and continuously updating the registry data elements is critical. The existing registries are all built on a chassis that existed about 15 years ago, because of cost issues and because nobody ever complained. There is a tremendous amount of monopoly over who has access to and control over these registries, and that has become a foundational problem: there's no transparency, and somebody like you, who spends all the money and energy and contributes the data, can never get access to it. That's a problem, and that's why, when we say democratize access to the data, that's one of the foundational principles HRS believes in. There should be a fair shake for the ideas that come in. Maybe ten people in a six-month period submit the same idea; maybe the answer is to put all ten of them together on one panel to work on that particular facet of the question. Those are all issues we want to address. We learned a whole lot from the existing registries, and we have to take a step forward in a new direction, so great feedback.

Next, I think we have two more questions, and then we'll close the session.

Yeah, Peter Kistler from Melbourne, Australia. I really enjoyed the presentations. I've just been given a whole bunch of money to start a registry in Australia, so I really appreciate the comments, and I'm wrestling with that complexity of putting too much stuff into a registry, which just scares people, and then it doesn't get filled in correctly. Do you think we should just raise the white flag on AF as an endpoint in a registry, because it's just too hard to get that data? No one's going to wear 14-day Holters so we can get an approximation of AF burden, and you can't put in continuous monitors. Should we rather look at quality of life and healthcare utilisation as our registry endpoints, because at least we can obtain those more easily?

So, as you might have noticed, the way we treat atrial fibrillation has taken on a bit of a different tone. The way I look at atrial fibrillation, I draw parallels between AFib and CAD as two sides of the same coin, with the same comorbid conditions driving both: age, hypertension, genetics, everything that goes in there. One is an electrical manifestation, the other a plumbing manifestation, two sides of the same coin. And if that is the case, then why are we so hung up on the idea of a point-in-time recurrence of atrial fibrillation?
I think that wave of thinking is here already, and very soon you will see that shift in how we take care of it. You would never go to an interventional cardiologist and ask them about restenosis rates anymore, right? They measure their outcomes in mortality, morbidity, survival, quality of life, and everything else. I think we as a field are maturing towards that stage.

Can I? Sorry, go ahead. I was just going to say, I think there's still a role for burden when we consider healthcare utilization and hard endpoints. And there's an active discussion on how to measure it, with the compliance issues of non-continuous monitoring, with our smart technologies, with implantable devices. They all have their own nuances, and obviously implantable devices would trump everything, but they're prohibitive; it's hard to get people even into trials if you have to put in an ILR. So I think there's still a role for burden; we just have to be creative with all these wearables. In the US, 50% of people have Apple technologies, whether you use PPG or confirmatory EKGs. And then, can we use natural language processing in the chart? With Epic in the US, there are all these AI tools that can bring the whole conversation into the medical record. So I wouldn't abandon it totally. I get your point, because right now there are so many definitions and issues, but I think there's a lot of opportunity with new technology to enhance our analytics.

Okay, last question. Hi, everyone, this is Nina from Hopkins. Great discussion. I wanted to follow up on the endpoints for AFib trials. Should we be more flexible? For paroxysmal AFib trials where we're evaluating one type of ablation against another, we know the burden goes way down. Or if, for example, we're doing a study evaluating AFib ablation where one group also gets risk factor modification and the other does not, you're getting such a big reduction in AFib burden that detecting a difference would take a very large sample size, and power really becomes the big problem. So should we be flexible with the type of study? And the other thing is flexibility in how we assess burden. If we say it should be an ILR, which makes sense, then if you're a smaller or investigator-initiated study that isn't industry sponsored, you're limited to wearable technology such as the Apple Watch or Kardia and so forth. So how much flexibility do we have with that inside rigid outcome definitions? And the second comment is, should we be involving industry? Right now you're talking about the PFA registry, and we know we want to integrate Apple Watch data, but there is no way to get Apple Watch data into any cloud without a third-party app, unlike Kardia or Fitbit, so you have to go through another party. Should we be engaging them early on as we design registries, so that we have more comprehensive data?

Nice comments. I do believe continuous monitoring for chronic diseases has already become a reality in many practices. For example, if you look at us, we manage pretty much every one of our atrial fibrillation patients with an implantable loop recorder. There may be a debate about it; people may agree or not agree. The analogy I draw is, if you have diabetes, you have one of those glucose monitors stuck on your arm 24/7.
And you have dynamic readings of it, and very aggressive management based on that. If you have hypertension, you're measuring your blood pressure four or five times a day. So if we're relying on data to take care of chronic diseases, then for atrial fibrillation, which is a chronic disease, why are we afraid of leveraging a very simple tool in the form of an implantable loop recorder? It gives you a very precise assessment of many things, including your AFib burden, your rate control, how fast, how slow, and also your activity levels, and so many other important pieces of information that come along with it. So again, this is a paradigm shift that will continue to happen. Yes, there are cost issues, but in terms of long-term cost-effectiveness, an implantable loop recorder, in my opinion and in the initial assessments we did, is actually very cost-effective compared with using external monitors on and off at multiple points in the evolution of the disease.

But to your question, and you and I were on that ARC committee for efficacy endpoints, even the ILR is challenging, because if you think about trial design, if you implant the ILR, do you let the patient sit in AFib for four weeks to collect that baseline? A lot of the people who come in for a persistent trial have already had a cardioversion or an antiarrhythmic; on an antiarrhythmic your burden is zero, and then you have an ablation, and you can't really show a change from a burden of zero over time. It gets into a lot of these nuances. We do need more persistent atrial fibrillation patients, but to Hugh's point, then you go back to: is it 30 seconds, or is it six minutes? And I think for burden, a lot of the data is aligning at less than 0.1 or 0.2%, per the CIRCA-DOSE trial; it should be about looking at burden and keeping everybody at that low burden. But showing a difference in a trial design becomes a real challenge when you have to collect that baseline in an ethical way, where the therapy has already been engaged before they even see the EP. Or is that 0.1% burden really relevant to overall patient care? And you saw the data: the patients don't care.

Okay. Thank you very much. I have to cut you off, I'm sorry, we are over time. I would like to thank everybody for attending this session, all the speakers, and my co-chair. Thank you very much, and enjoy the rest of your meeting.
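To make the sample-size point raised in the discussion concrete (higher-event-rate or composite endpoints shrink trials; rare events and small absolute differences inflate them), here is a minimal sketch using the standard normal-approximation formula for comparing two proportions. The event rates below are made up purely for illustration and are not taken from any trial mentioned in the session.

```python
# Minimal sketch: per-arm sample size for a two-proportion comparison (normal approximation).
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size to detect p1 vs. p2 at the given alpha and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

if __name__ == "__main__":
    # A rare single endpoint, 2% vs. 1%: thousands of patients per arm.
    print("rare endpoint (2% vs 1%):      ", n_per_arm(0.02, 0.01), "per arm")
    # A composite with the same relative reduction, 10% vs. 5%: far fewer.
    print("composite endpoint (10% vs 5%):", n_per_arm(0.10, 0.05), "per arm")
```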
Video Summary
The video presentation discussed arrhythmia management registries, focusing on challenges and innovations in outcome measures. The first speaker, Dr. Bunch, highlighted various types of outcome measures, such as patient-reported outcomes and digital measures using wearable devices. He discussed the Heartline trial, which uses smartwatches for health optimization after atrial fibrillation detection, linking detections to objective outcomes in healthcare utilization. Dr. Bunch pointed out the need for long-term data to understand safety events and emphasized patient-centered care by considering patients' perspectives in treatment success metrics.

Dr. Mansour covered the objectives and limitations of arrhythmia registries, focusing on participation, ease of data collection, and the impact on quality of care. He described the NCDR program's registries, particularly those for left atrial appendage occlusion and AFib ablation, emphasizing the challenges of manual data entry and access issues. Participation in these registries is vital for improving care quality despite data entry challenges.

Dr. Frankel discussed the need for standardizing outcome measures in arrhythmia research, particularly meaningful endpoints beyond traditional survival rates. He highlighted ongoing efforts by the Heart Rhythm Society (HRS) to define relevant endpoints for trials, ensuring regulatory and practical relevance.

Dr. Lakkireddy addressed leveraging advanced technologies, like AI and big data analytics, to strengthen clinical registries. He proposed a vision for HRS to develop a flexible, AI-powered registry for pulsed field ablation, emphasizing democratized access and adaptability. This approach would enhance data collection and interpretation, aiding both clinical practice and research innovation.

The session highlighted the significance of patient-centric outcomes and integrating new technologies to refine arrhythmia management research, fostering collaboration for improved healthcare delivery.
Keywords
arrhythmia management
outcome measures
patient-reported outcomes
Heartline trial
atrial fibrillation
arrhythmia registries
NCDR program
Heart Rhythm Society
AI and big data
pulse-field ablation