Foundation for the New HRS Research Network
Video Transcription
Thank you, Ken. And thank you all for coming. I know we're up against a lot of really great sessions, including late-breaking clinical trials, but this is really important. It has become a passion of mine, and I hope we can all share that with you. I want to thank Ken Bilchick, who put together this session and another session this afternoon, and really laid out some of the framework for what we envision for an HRS research network. I want to assure you that HRS has a commitment to research. Our vision is to end death and suffering due to heart rhythm disorders, and our mission is to improve the care of patients by promoting research, education, and optimal health care policies and standards. As was pointed out to me very recently by Peng (where'd he go? there he is) and by Jeff, research is the first pillar of our mission. Perhaps we haven't done as good a job of that, especially in recent years, but it's more important now than ever. It is listed in our strategic priorities, and it is one of the pillars of our strategic plan, which includes advocacy, innovation, and advancing knowledge. So we really do need to rise to the challenges we see today: identify our opportunities, elevate research and innovation initiatives, support science and research advocacy, and get funding to support all of this. To date, the innovations pillar of our strategic plan has been, to be honest, pretty anemic. It was focused mainly on our HRS annual meeting and our innovations meeting, HRX. At this meeting and our board meeting, I want to emphasize that I have really been won over by the importance of HRX, and it could be a platform for our research network. It is making a turnaround; it has been growing, and we're going into our third or fourth year.
At this point it's really at an inflection point, and if you're able to sign up to go this year, I don't think you'll be disappointed. We're getting a lot of interest in it, including from venture capital and banking, and I think it's really taking off. We have had some successes, with some startups actually making it past the pre-commercialization stage to commercialization. Another focus of mine has been trying to tackle sudden cardiac arrest, and HRX has been a good venue to start that conversation and build interest in it. Most of the rest of the innovations pillar has been focused on our journals, which is great, and thanks to Dr. Chen for his leadership of Heart Rhythm, the journal; on our research fellowship awards; on developing collaborations with other groups; and on quality improvement initiatives. But I think we could do better. That's the motivation for doing this. I also want to make you aware that with this meeting we are launching a philanthropy initiative, and this whole meeting is going to be part of that. The Heart Rhythm Gala is a night for advocacy and innovation, and we're hoping to raise money that will help fund some research initiatives. We don't have the funds to be like the AHA and fund a full research project yet, but you never know, and we need to start somewhere. At least to start, we want to fund some research fellowships. We had four research fellowships that were funded by industry, and for whatever reasons, they all pulled out. Over the last couple of years, HRS has been funding them from our reserves and operating funds. So we really would like to build an endowment, and we've started that: my husband and I recognized this and wanted to start off the basis of an endowment to HRS. We are also starting a legacy society.
A lot of our leaders in EP, a lot of the founders of our field, have contributed so much. If they call HRS their professional home, as I do, I'm hoping they will feel that vision for the future and support HRS. We're also going to try to reach patients. We're not at anything like the stage the AHA is at, but we will try to engage patients more. You see some of the things we want to support: our goal is really to enable our fellowships and our member and committee projects. You all have so many ideas, and we want to be able to support them, and maybe in the future we'll be able to support some research funding. As for our motivation for the HRS Arrhythmia Research Network: as I sat back and thought about what I wanted to do this coming year, one of the gaps I saw was really this research funding, and I think we see that a lot. What I want us to do is really think about what you believe the gaps are in our field and how we might move it forward. We have a lot of industry-supported projects and NIH projects, but what are the areas that don't fall into those buckets that you really want to pursue? Is it something where, if we can come together in a multi-center way, or with more collaborators whom we identify here, you can accelerate your research toward your goals? What we hope to do is form networks of centers and investigators for clinical trials, and not just clinical trials but basic and translational research as well. This is a time of big data, of needing big data sets for AI, whether omics sets, biorepositories, imaging data, or ECG data. This is a time when we can become more powerful and more productive if we come together. Of course, I don't need to mention all the challenges in funding.
We also need to promote and mentor investigators in EP research. We've seen a decrease in people going into EP research; we need to fix that and reverse it, and it's even harder today to motivate people to go into EP research, be it basic, translational, or clinical. So hopefully we can also provide some mentorship initiatives as part of our research network. There are going to be challenges. We have thought about doing an EP research network in the past. A number of years ago, Jeff Ogan first presented to the board the idea for an EP research network. I think he intended it to be one of the NIH networks, like the CT surgery research network and the heart failure research network. At that time, NIH was rethinking its sponsorship of research networks, and the EP research network didn't go anywhere. We tried to reach Jeff to ask what we could learn from what happened there; he's on a very well-deserved vacation, so we'll connect with him afterwards, but hopefully we'll learn from that. The other thing we tried was forming an HRS research community through the research committee. It was more of a social media community, and it hasn't really taken off as much as we would like. Then there was the EP Collaboratory, which was actually FDA-motivated. There are heart failure and valve collaboratories, and FDA wanted us to come together and, like those other collaboratories, come up with more harmonized definitions. We convened a really wonderful meeting of industry, academics, and FDA. There was a white paper out of it, but there wasn't really a champion to carry it forward, and there was no funding; industry didn't want to come forth with that. So it didn't go forward at the time, but I am happy to say that there was another session here showing that there now is an EP Collaboratory.
FDA really wanted that, and they're focusing on new technology and on trying to harmonize AF ablation outcomes, fields, and the like. That has been taken up by Paul Wang and Pete Weiss, and it is ongoing. But why do we think we might succeed this time? I think the time is right. First, the success of the EP Collaboratory: it is ongoing, and it's actually going to be productive. For me, that was a missed opportunity for HRS, that we didn't do it, that we didn't support it, and hopefully HRS can become more involved again with the EP Collaboratory, which I think it will. Second, there's a great need for science and research advocacy given the funding uncertainties. This year we formed Heart Rhythm Advocates, our advocacy arm. Although research initially was not one of the top priorities, it was always listed, and with all the events of the last few months, research and science advocacy is now a very high priority. I would encourage all of you to join HRA; if you're an HRS member, it's a free membership. We need to build numbers for HRA, so please join our Science and Research Advocacy Council. We need a lot of ideas. Jeff Saffitz is here, and you've been motivating us a lot, and Ken is a chair and a co-chair. So please, we need ideas about how we can advocate, and we need connections. Now, for me, I started rethinking a research network when I was invited by Jeff Ogan to the Canadian Heart Rhythm Society Research Network. I was talking about a research network, and he said, well, we're doing this already. The Canadians have been very, very productive. They meet twice a year, they've done some really impactful studies, and I was just really impressed. They had some wonderful talks and very senior investigators.
Then they had a session, I call it their open mic session, where people would sign up and present their proposals or ongoing research. They'd give a five-minute pitch, or however long, and end with a slide of questions, and then a room full of very experienced investigators would help answer those questions and help them with some of the challenges. I thought, wow, that's really wonderful. They also had a focus on specific projects. So I thought, maybe we didn't do it right before because we didn't have much focus. What I'm hoping is that we can form focus areas that we identify and can all collaborate in. We might have people in a focus group who are good at writing grants and others who are good at recruiting. Hopefully, if we can come together around those research questions, we can accelerate our field. The multi-institutional, multi-PI collaborations are really important. I've always done multi-PI NIH grants; I'm not smart enough to do all the things our grants propose, and that has been very productive for us. I think it gives you a lot of strength when you're applying for grants. Hopefully we'll be able to do that; it could include biorepositories and the network of investigators I've already mentioned. The other thing I want to mention is that we formed an innovation hub as a year-round virtual platform, so we could potentially hold meetings through that throughout the year. So what are some of our focus areas? These are just some of my initial ideas, and I want you to think about what yours are. I was thinking that biorepositories could be one area, with shareable resources.
An example for me, something I've wanted to do for years, is reversible cardiomyopathy due to tachycardia-induced cardiomyopathy or electrical dyssynchrony: these patients develop cardiomyopathy and may recover, but then they get another insult and drop their EF again. There must be something genetic in there, so I thought that might be an interesting area. Then there's imaging: collecting imaging, electroanatomic mapping, ECG, and electronic health record data sets for AI, and registries. We're starting a PFA registry, so we're putting money into this, and we have industry support for doing a PFA registry, which will help develop the infrastructure. We will be bringing on a third party to help us put that registry together, hopefully using AI methods to mine the electronic health record. We could leverage that with a CSP registry, or cardioneural ablation, or leadless; those are all areas that have been brought up. And sudden cardiac arrest is another passion of mine that I've developed over the last few years, and that, I think, is another area. But there are many other areas that are your areas, and I hope we can help with those. I want to end by highlighting the research network sessions we have here. We have the presentations you see on our program today, which Ken has been wonderful at putting together, and then this afternoon in the exhibit hall we'll have roundtable panel discussions where we want to come up with focus areas and hear your ideas, and hopefully you can show up to that as well. So I want to thank you, and hopefully we will succeed this time with our research network. Thank you.

Thank you, Mina. I think a number of the key points were covered. As I go through my slides, I'm going to highlight some things that are relevant to the great points Dr. Chung just made.
I want to take us back to the history of this. We did a member survey in 2020 that highlighted a number of perceived barriers to research, among them funding, collaborations, connecting with industry, and help with FDA submissions, among other things. The survey identified the need for a research platform to address these issues. That led to the idea of an EP Collaboratory, which I'm calling the EP Collaboratory 1.0, because now Stanford and MDIC are doing early feasibility study coordination as part of an EP Collaboratory 2.0. The idea was to address these challenges in a way that would be holistic and accomplish the goals. At the time, in 2022, it just wasn't the right thing for the board to proceed with, so it was left at bay. What has happened over the past year is that Stanford Biodesign, under the direction of Paul Wang and Pete Weiss, has taken over the role of convener, and the Medical Device Innovation Consortium has been willing to take on the role of connecting institutional champions for early feasibility studies with industry partners. Now, there's a difference between the EP Collaboratory 2.0 and the Research Network: the EP Collaboratory is focused on bringing really early technology to market, whereas, as Dr. Chung highlighted, the Research Network has a broad focus, including registries and coordination of partners for clinical trials that span the full gamut from basic to translational to clinical research. In thinking about how to design the HRS Research Network, we looked at what was out there already. We looked at the Canadian HRS Research Network, which has been a relatively informal program, with activities focused on mentorship and idea sharing. As was mentioned, they have this open mic paradigm where they invite members to present ideas, and there is FTE staff support.
Here's a list of some of the projects that Ratika Parkash, Andrew Krahn, and others have been successful in implementing. Another example is the HFSA Research Network, which is included in member benefits and is supported by user fees and trial sponsors, with at least one FTE of support. This paradigm is based more on actual governance of clinical trials, which contrasts, I think, with what is feasible and cost-effective for us. They do more granular work related to site selection, clinical trial maintenance support, patient recruitment, retention, and oversight. With device trials, as opposed to pharma, these things are particularly expensive. So from the standpoint of the HRS Research Network, what we can do is engage in matchmaker functionality, connecting people and connecting industry as well. As opposed to an early feasibility study, an industry partner may be interested in a post-approval clinical trial, and they may need advice from EP experts about how to do it and what clinical question would be most impactful. The Association of Black Cardiologists has also had a successful research network; in a similar way, they support phase 3 clinical trials and have one FTE of support. We've talked about the scope of our proposed research network encompassing a wide range of research from basic to clinical. When we think about leadership support, project selection, and how HRS is going to support this, what the board would be interested in approving because it's feasible and cost-effective, there are a number of kinds of people who could be useful. In the afternoon, we'll talk about what kinds of support we would want from HRS. These are people who aren't necessarily working for HRS already, so they could be hired, or this kind of work could be contracted outside of HRS.
There are different models we can consider: IT specialists, data scientists, clinical trial administrators, data entry staff, laboratory specialists, statisticians, and financial administrators. We also want to have a steering committee, and we'll talk about some questions regarding how that steering committee should be constructed. A key service we could provide is more efficient sharing of data and specimens, which is not at all trivial in this day and age. Even for ECG, you can get a paper ECG, a PDF copy, or an XML copy; then there are omics and MRI, and there are prolonged times to approvals and contractual agreements that we can address with the network. I wanted to highlight one thing we've developed at our institution for this process of sharing files, which involves a cosine similarity index: we group, say, DICOM files or XML files into clusters of files that have similar fields. It's a semi-automatic approach that can accelerate the evaluation of these studies, to make sure there's no PHI when they're shared across institutions. Sudden cardiac arrest is certainly an interest, highlighted here in this point about the prevalence in the general population versus the proportion occurring in patients with structural heart disease. We're certainly interested in applications of AI for sudden cardiac arrest. I think some of the other speakers will discuss interesting work related to CMR and ECG analysis: convolutional neural networks, vision transformers recently shown, for example by Deborah Kwon at the Cleveland Clinic, to be able to identify cardiac amyloid, and also radiomics analysis.
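The cosine-similarity file grouping mentioned above can be sketched in a few lines. This is a minimal illustration, not the institution's actual tool: the file names and metadata field lists are hypothetical, and the idea is simply to greedily cluster files whose field sets are nearly identical, so that each cluster can be reviewed once for PHI rather than file by file.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of metadata field names."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_files(file_fields: dict[str, list[str]], threshold: float = 0.9):
    """Greedily cluster files (DICOM, XML, ...) by similarity of their
    metadata field sets, so a PHI reviewer inspects one representative
    per cluster instead of every file."""
    groups: list[tuple[Counter, list[str]]] = []
    for name, fields in file_fields.items():
        vec = Counter(fields)
        for rep, members in groups:
            if cosine(rep, vec) >= threshold:
                members.append(name)
                break
        else:  # no sufficiently similar cluster found: start a new one
            groups.append((vec, [name]))
    return [members for _, members in groups]
```

A real pipeline would extract the field names from DICOM headers or XML tags first; the clustering step itself is this simple.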
Natalia Trayanova has done seminal work on virtual EP studies to predict ventricular arrhythmias, and now that recent clinical trials are showing that ventricular tachycardia ablation is something we should be doing more often, we want to think more about which patients would be best served by ventricular tachycardia ablation and ICD therapies based on MRI data showing how inducible they are. In a recent paper in JACC EP, Dr. Chen showed that septal scar burden can identify patients who may do better with biventricular pacing than with left bundle pacing, because if you have scar on the septum, it's hard to tunnel a left bundle pacing lead into the septum; that's an example of a personalized approach where a registry could help us. I also want to highlight the WiSE CRT leadless LV pacing platform. Aldo Rinaldi from King's College and Steve Niederer, who's in the audience, have shown that you can do left bundle pacing with a transseptal introduction of an electrode connected to a transmitter and battery, and there's potential for a completely leadless CRT system that would pair this with a Micra. We need to understand how to pick whether we should do left bundle pacing with this versus LV free wall pacing, just as we are trying to answer that same question elsewhere. These kinds of post-approval studies, where we partner with a company's scientific advisory board to answer such questions, could be another feature of the network. And again, AI ECG: Dr. Fuxiang is going to talk about this in a subsequent talk as well. Dr. Chung identified potential funding sources for the network; I have some questions on this that I think we want to explore in the afternoon panel discussion.
I also want to highlight that this presents opportunities for early-career trainees by providing an infrastructure for doing these studies. I think it levels the field a little: people who might not have these resources at their own institutions could, with the help of HRS, really be productive. We can give really talented, bright early-career EPs who want to engage in research but may lack mentorship, which is highlighted as one reason we don't have more people in research, this sort of opportunity through a research network, and that could be very important. There was also a session on gender diversity in clinical trial leadership, where women are underrepresented as leaders of clinical trials, so this could help with diversity of leadership as well. In summary, the guiding principles for the HRS research network are shown here: advance high-quality heart rhythm research; provide a platform for collaboration to promote career development, educational opportunities, and connections with the heart rhythm community across the full spectrum from basic to clinical; connect with other stakeholders like FDA and CMS; communicate these opportunities to our membership; and so on. Here are our milestones over the first three years, from development of the framework to exploring a data collection platform. There are questions we'll go into in more detail; I'm not going to read them at this moment, but for the 12:45 session I've put together ten questions related to things we need to work out as we move toward our goal of submitting a proposal for the research network for approval by the Board of Trustees at the end of June. Okay. Thank you. Our next speaker is Gaurav Upadhyay, who's going to talk about a multi-center registry for CSP and a model for registry network research.

Thank you so much, Mina. Thank you so much, Ken. It's really a privilege to be here to talk about research.
This is something that's near and dear to many of us, and certainly in the current environment there need to be more creative, new avenues for approaching research. We'll be talking a little bit about CSP as a use case for developing a collaborative registry model. The goal here is to tackle a few questions. Why study conduction system pacing? It might seem evident, but I'll give you some analogies from history that might make it even more pressing. What questions are worth investigating? Can we learn from other recent registries that have been successful in cardiovascular research? And how do we structure a path forward? We'll begin with a statement from an individual who was not a scientist: disruptive innovations are not breakthrough technologies that make good products better. I'm not sure if any of you recognize who this is. This is Clayton Christensen, a leader in the business world and the first person to coin the term "disruptive innovation." In his original publication, he noted that disruptive technologies often perform far worse along one or two dimensions that are particularly important, and as a rule, mainstream customers are unwilling to use a disruptive product at first; it tends to be used and valued only in new markets or new applications. So disruptive technologies hold promise but need to be tracked carefully. Disruptive technologies also evolve; when we think about disruptive technologies, we're really talking about time and performance. RV pacing was at one point a disruptive technology, and now, of course, it's an established technology. Leadless pacing, you could argue, is an iterative performance improvement required by the mainstream market, not necessarily a disruption per se, because it's fundamentally the same approach, whereas one could argue that conduction system pacing is truly disruptive.
The initial entrant into the space, His bundle pacing, had a lot of limitations, particularly high capture thresholds, as well as concerns about technique and longevity. Left bundle branch area pacing is further along in the iteration and appears to hold more promise, but we need to do more work and investigation. What are some examples of disruptive technologies from history? Well, the automobile. When the automobile was first introduced, the horse and buggy was a much more reliable form of transport: cars broke down all the time, and there were no car shops or tire shops, but of course cars have now completely supplanted the horse and buggy. What are other disruptions that went the other way? Well, here's a Zeppelin. Zeppelins were the fastest way across the Atlantic when they were first introduced, the fastest form of air travel, but we know where the story ends: they had a fatal flaw. Why is this relevant? I say this partly tongue-in-cheek: we don't always know the answers. When we develop new techniques that hold significant promise, we need to track them carefully. Particularly when it comes to conduction system pacing, I think there are a number of questions worth considering and answering. Here's our stepwise approach to assessing conduction system pacing at implant, from the EHRA guidelines. Anything that requires nine steps to confirm that you're actually doing what you think you're doing means we don't really know what we're doing. Definitions are definitely a work in progress, and an R prime in V1 is not enough; I'll put that out there as one simple example. The R prime shows up far earlier than one would suspect. This is actually a beautiful case report. There are two leads here: one is on the left side of the septum, and the other is being slightly pulled back.
They're pacing from the lead that's being slightly pulled back. Here you see an R prime; the lead is definitely near the LV subendocardium. You pull it back, you still have an R prime. You pull it back more, you still have an R prime. You pull it back again; here we're at the mid-septum, and you still have an R prime. You only lose that R prime when you're really on the RV side, and you can actually see the stain of the septum, which shows how deep the lead is. Now, we take it for granted that that R prime means we're near the left conduction system, but that's just not the case. I'll even show you cadaveric dissections. This is from a really lovely short case series looking at patients with conduction system pacing who underwent heart transplant. Here are the EKGs, and here are the hearts. In the first patient, it's an interesting QRS morphology, and it looks like we're close to the conduction system, but that lead is only nine millimeters in, nowhere close to the left side. The one on the rightmost side of the screen: that QRS didn't look good to me, it looked more intraseptal, and indeed, that's where the lead was found. But what about the middle one? If I did a CSP implant for a left bundle patient and ended the case with that V1 QRS morphology, I would feel pretty good. But in fact, at the time of transplant, that lead was not at the left conduction system. These sorts of observations require subtlety and nuance, and they have real relevance. This is a really important study published last year in Heart Rhythm by Xiaohan Fan and colleagues. They looked at the outcomes of CSP among patients with CRT indications and found that left bundle branch area pacing did as well as biventricular pacing for an all-cause mortality or heart failure endpoint. But this was a very thoughtful group, and they tried to suss out the difference between LVSP and left bundle branch pacing.
And here, the signal for mortality is significantly worse. I actually think we have irrational exuberance for this particular technique, certainly in patients who are CRT-indicated, and we have to be very rigorous in this vulnerable patient population before we march ahead. If this mortality signal is real, then there is harm in doing this approach and achieving only LV septal pacing. It has to be confirmed, and we have to go further. So can we learn from other recent registries? Probably the best example of an extremely successful cardiovascular registry is the NCDR. Greg and I were talking before this session started; this is an amazing registry. It started with the ICD registry, which was mandatory for CMS payment when we enroll patients. In fact, when we have a new ICD patient, my tech still automatically enrolls the patient in the NCDR as part of a QA pathway, with no consent required. They've now expanded the registry to include pacemakers. Here's their implant form: some basic demographic information, a little more about past medical history and cardiovascular conditions, and then some information regarding QRS duration, EF, and labs. This is all very good, very valuable information, and I think you can make important observations about outcomes. But I would argue that we're electrophysiologists, we care about the physiology, and we're not capturing the key information we need here. We have to capture EP data if we want to answer EP questions, and we're the Heart Rhythm Society, so I do think we should go a little deeper than the NCDR when it comes to asking questions. MELOS is an excellent example of a really terrific retrospective study describing the rapid adoption of conduction system pacing: 2,500 patients, with a really nice description of the risk of early complications and of success.
I think that was quite valuable. We also recently finished assembling a prospective clinical registry. This was sponsored by Biotronik, but it was vendor-agnostic: Biotronik gave us an unrestricted grant, and we could study any vendor. We used a centralized core lab. Why? Because the people who publish research are typically early adopters who cherry-pick their patients and demonstrate success; we wanted to see what it looks like when you prospectively acquire data. So it was a simple inventory of uploads, including annotated measurements and images, and we used free applications that we could consider for something like an HRS registry, including REDCap, which is free, and Box, a platform that's widely used at many centers. Now, there are still lots of important questions about structuring a path forward. How do we identify centers? I really think the set should be representative, of our patients and of our institution types, private practices and academic centers, across ethnicities and gender. Should we open centers without a formal research background? I would say yes; that's how you really understand what a new innovation is doing in the community, by seeing how it's actually being used. How should local IRBs be involved, given that this is now a registry that requires consent? Or is there a QA mechanism we could use instead? Maybe that's a path forward. Ideally, data should be prospectively acquired at the time of implant, and it should be simple enough that the techs can enter it during our cases without our being personally involved, because realistically, at any particular site, one individual can't be responsible; it has to be simple enough that the person with you in the lab can put in the data. Remote monitoring is a huge wealth of information, and it requires collaboration with industry.
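To tie together two points above, REDCap as a free capture platform and data entry simple enough for the lab tech, here is a hypothetical sketch of a minimal CSP implant record with up-front validation, shaped for REDCap's documented record-import API. The field names are illustrative, not the actual registry's data dictionary; only the request-body keys (`token`, `content`, `action`, `format`, `type`, `data`) follow REDCap's API.

```python
import json

# Hypothetical minimal CSP implant fields -- illustrative only,
# not the registry's actual data dictionary.
REQUIRED_FIELDS = {"record_id", "implant_date", "qrs_duration_ms",
                   "capture_threshold_v", "lead_location"}

def make_record(**fields):
    """Validate a tech-entered record: fail loudly on missing fields
    rather than silently storing an incomplete record."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return fields

def to_redcap_payload(records, token):
    """Shape a list of records as a REDCap 'import records' request
    body; the endpoint URL and API token come from the local REDCap
    instance."""
    return {
        "token": token,
        "content": "record",
        "action": "import",
        "format": "json",
        "type": "flat",
        "data": json.dumps(records),
    }
```

The payload would then be POSTed to the site's REDCap API endpoint. Keeping validation at the point of entry means the person in the lab gets an immediate, specific error instead of the core lab discovering gaps months later.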
And defining these data instruments is its own process, which will potentially need to be iterative. And can there be a mixed approach? Can some centers provide basic data and other centers provide more advanced information? How do we assess success? I mean, if we started a registry, that's great, but simply having information doesn't necessarily mean we're successful. So before we embark on any effort, what should the goal be? Should the goal be a research paper? Should it be a guidelines change? Should it be understanding which centers should be able to move forward on cases and which centers need help? How can we establish an ongoing means to engage patients and other stakeholders in reviewing this data? Patients are a big piece of this, and I think that needs to be more than just lip service; it's important to really involve them. In the short term, this will likely be a volunteer effort, but an industry consortium can play a role, and partnering with hospital systems may be a path forward as well. It's been said already: I think the research mission of HRS is key to our member society and our future. It's really our relevance. You know, I was struck by Califf's discussion with us at the plenary. The first thing he said was, don't come to Washington and talk about your paycheck. I think that's right. It's really very compelling for us to talk about what we're good at, which is taking care of people with arrhythmias and doing it in a way that's new and different. So I really think that research has to be key to what we do; otherwise, our relevance will be lost. You know, there are going to be huge cuts to Medicare and Medicaid, as everybody's broadcasting down the line. So what's the relevance of going after heart rhythm disorders? And how do we do it? Anyway, it's exciting. Thank you so much. Thank you. Next is Peng-Sheng Chen, who's going to talk about the basic science applications for the research network.
Thank you for that introduction. Thank you for inviting me. I also want to congratulate Mina for coming up with this idea and being our leader for the next year. I have been given the task of discussing how the basic and translational science community can benefit from the research network. So before I came, I discussed this idea and took inputs from the Cardiac Electrophysiology Society officers, including Glenn Fishman, Lee Eckhardt, Nipavan Chiamvimonvat, Pat Boyle, Karen Ramey, and Eleonora Grandi. I also listened to Mina and got some information from Jeff. I thank them for their inputs. The traditional NIH definition of translational science is the process of turning observations in the laboratory, clinic, and community into interventions that improve the health of individuals and the public, from diagnostics and therapeutics to medical procedures and behavioral changes. From the basic science point of view, we often thought that translation, as stated in the previous slide, is focused only on bench-to-bedside, where we have findings in the basic science laboratory and we found a cure, and oftentimes that cure is in the freezer. And we need to convince a clinical colleague that this is feasible, and somebody will obtain preliminary results and then go through a clinical trial. We have many experiences with this, and a lot of the things we are using right now originated from the basic research laboratory. I would argue a research network could also facilitate bidirectional translation or collaboration. So it's not only bench-to-bedside, but also bedside-to-bench. A research network that has human specimens can provide basic scientists with opportunities to improve the understanding of the mechanisms of disease and therefore develop breakthrough ideas for how to take care of diseases. The samples that may be available to us, and that may be extremely useful, are tissue samples and genetic materials.
And also, a very important thing is to integrate state-of-the-art phenotyping into a clinical network, with various analyses of biospecimens and deep phenotyping using a range of omics platforms, such as genomics, proteomics, and metabolomics, which the basic scientists would have the ability to run on these samples, providing the clinicians with feedback. So the help would actually be tremendous, and it could be bidirectional. During the Cardiac EP Society meeting on Thursday, Dr. Dan Roden was the Gordon Moe lecturer, and he mentioned an example of bedside-to-bench collaboration and translation that I think is a good example to mention today. David Park had a transgenic mouse model, and he developed the idea that the transcription factor ETV1 is essential for rapid conduction in the heart, and provided a lot of mouse evidence for this finding. And when he submitted the paper to a journal, the reviewer asked him whether this ETV1 actually has any human relevance. Therefore, he took this idea to Dr. Dan Roden, who happened to have a genetic and phenotyping biobank called BioVU at Vanderbilt University. So from a finding in a mouse model, he then did a phenome-wide association, not genome-wide but phenome-wide, looking at people with conduction diseases. And he did identify a link between ETV1, bundle branch block, and heart block in humans. With that, they now had sufficient evidence to show that ETV1 is a critical factor in determining conduction physiology in the heart. So this is one way that clinicians can help basic scientists with their findings and with drawing a conclusion. And expansion of this kind of collaboration could be extremely helpful in advancing science. There are several networks out there that could do this.
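The phenome-wide scan in the ETV1 story can be caricatured as a loop over phenotype codes, computing an odds ratio from a 2x2 table of variant carriers versus non-carriers. The data structures, phenotype names, and the Haldane correction here are illustrative assumptions, not BioVU's actual pipeline:

```python
def odds_ratio(carrier_cases, carrier_controls, noncarrier_cases, noncarrier_controls):
    """Odds ratio from a 2x2 table, with 0.5 added to each cell
    (Haldane correction) so empty cells don't divide by zero."""
    a, b = carrier_cases + 0.5, carrier_controls + 0.5
    c, d = noncarrier_cases + 0.5, noncarrier_controls + 0.5
    return (a * d) / (b * c)

def phewas(cohort, phenotypes):
    """cohort: list of dicts with 'carrier' (bool) and 'codes' (set of
    phenotype codes). Returns an odds ratio per phenotype."""
    results = {}
    for pheno in phenotypes:
        cc = sum(1 for p in cohort if p["carrier"] and pheno in p["codes"])
        cn = sum(1 for p in cohort if p["carrier"] and pheno not in p["codes"])
        nc = sum(1 for p in cohort if not p["carrier"] and pheno in p["codes"])
        nn = sum(1 for p in cohort if not p["carrier"] and pheno not in p["codes"])
        results[pheno] = odds_ratio(cc, cn, nc, nn)
    return results
```

A real PheWAS would also adjust for covariates and multiple testing; the point is only that a shared biobank turns this loop into something any member group could run.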
We just heard the discussion about the Canadian networks and others, but there are some other networks that we could consider patterning after. These networks have been around for more than five years, so there are ways to maintain such networks, which is very encouraging. The CCCTN network was founded in 2017 with 16 centers and has grown to a research network of over 40 academic clinical centers in the United States and Canada. They have collected 20,000 unique CICU admissions in the registry, and they have described variations in care, epidemiology, and outcomes of CICU patients, as well as subsets of patients with specific disease states such as shock, heart failure, renal dysfunction, and respiratory failure. In Europe, there is a Dutch national network, PEDMED-NL, also established in 2017 to facilitate pediatric clinical trials. It's a collaborative consortium involving 17 Dutch hospitals, including university medical centers and a patient alliance for rare and genetic diseases, and it services both industry-sponsored and investigator-sponsored clinical trials. There's a little story here: I wonder, when you apply for NIH grants, do you read the fine print? I thought I was clueless in writing a grant for a clinical trial because I'm a basic scientist. But one of my friends, who has done clinical trials his entire life, also got his grant administratively withdrawn, just like mine. This all happened within the past year. It turned out NHLBI has a new rule that an R01 cannot be used to support studies that have safety, clinical efficacy, or clinical management outcomes. So getting an NIH grant is not easy; there are many different roadblocks. The Dutch national network, however, services industry-sponsored and investigator-sponsored clinical trials, and I think most were efficacy trials, among others. So these are things that we can consider doing outside the NIH.
There are also the European Reference Networks, which focus on rare, low-prevalence, and complex diseases and conditions requiring highly specialized healthcare. They enable specialists across Europe to discuss cases of patients affected by rare, low-prevalence, and complex diseases, providing advice on the most appropriate diagnosis and the best treatment available. I thought this could also be a way to foster the relationship between basic and clinical scientists, because there's somebody you can call within the network, and they provide a website with information on what the networks do and what their expertise is. And this clinical data could be very helpful to basic scientists in conducting their basic research. There are some ideas that have been tried in the past, such as developing a rare genetic disease repository here in the United States. That sounds like a pretty straightforward initiative. The problem is that clinical data is super-siloed in the United States, and that creates significant and artificial barriers to research. If you have two patients and another center has one, you cannot publish a paper or learn anything from those three patients. But if you have more patients, then it could be helpful. However, these private networks tend to be by invitation only, and they may fizzle out over time. I'll give you an example: the Arrhythmia Genetics Network, or AGENT. It is a multi-center observational registry and DNA repository for inherited arrhythmia syndromes. You can still find some information on the Vanderbilt Medical Center website. And they are missing additional institutions like IMH or University of Washington Medicine. So these institutions got together to build a biorepository, but the network fizzled out due to the loss of an investigator to industry. So there is a risk with these private networks among good friends: things change, and things may not be sustained.
A society-founded network, on the other hand, can be sustained for as long as the society continues. There are some specific ideas for a network. This one was proposed by Mina: for tachycardia-induced cardiomyopathy, collect patients with frequent PVCs to study PVC genetics. This requires a large repository to make such studies possible. Finally, an HRS research network could easily promote AI research. Various AI and machine learning algorithms applied to imaging, ECGs, and other clinical studies would benefit from large-scale diversity and standardization. So leveraging AI/ML using multimodal data, with integrated omics, deep genotyping, and phenotyping combined with the EHR, would be something very topical and very important. I think it will attract a lot of attention and really improve patient care. So in summary, I think sustained multidisciplinary collaboration and access to human samples and longitudinal patient data would be strengths of an HRS network. In comparison, private networks depend on individuals, while an HRS network is backed by the organization, so it's much more durable. It would foster interdisciplinary interactions, shared facilities and expertise, and useful training opportunities. One of the members of the Cardiac EP Society really thinks a specialized center for teaching certain techniques would be very helpful and would foster exchange of students and fellows, as well as collaborators for joint grant applications. Recently we read the news, just last week, that the administration wants to cut nearly half of the $47 billion budget of the U.S. National Institutes of Health and reorganize the agency's 27 institutes and centers into just eight institutes, according to a leaked version of the near-final 2027 budget proposal. So there are a lot of challenges related to NIH funding, which makes Mina's proposal of a network very timely. I would end with this statement from the website of the Heart Rhythm Society.
This was mentioned to the group by Dr. Jeff Saffitz: the Heart Rhythm Society's mission is to improve the care of patients with heart rhythm disorders by promoting research, education, and optimal health care policies and standards. Jeff Saffitz specifically mentioned this to all of the leadership of the Heart Rhythm Society, saying that the number one mission for the Heart Rhythm Society really is promoting research, and we can do more in this area. Thank you. Thank you very much for a wonderful presentation. So Dr. Ng will be talking to us about AI-ECG and the European experience. He's got a wonderful program at Imperial College London. Okay, I'm going to start by thanking Dr. Bilchick and Dr. Chung for the invitation to speak, but also for this excellent initiative of an HRS research network, which I think we will all benefit from. So I think my task is to think about how a research network can be beneficial from the data science perspective, and I'll try to illustrate that with some of our own AI-ECG work to show how we've benefited from having a network of collaborators. I've got my slides here. Oh, there we go, thank you. So as I say, I'll illustrate this principle of the benefits of a research network through our AI-ECG work. Recently, Arunashis Sau in my group, who is in the audience, published a paper on developing a model to predict mortality. Instead of just predicting mortality, he took 1.2 million ECGs from collaborators in the US, so not our own ECGs but from across the pond in Boston, and trained a model to predict not just the risk of mortality, but the time to mortality. So for each ECG that's put into the model, it outputs an individualized survival curve that tells you your chances of being alive or dead over a 10-year period, as shown here. And it seems to work pretty well. If you look at the predicted time of death and the actual time of death in the test set, the model can guess roughly when someone is about to die from a single ECG.
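One common way a model can emit an individualized survival curve like the one just described is a discrete-time survival head: the network outputs a hazard per follow-up interval, and the curve is the running product of survival through each interval. This is a generic sketch of that idea, not necessarily the architecture the published model uses:

```python
def survival_curve(interval_hazards):
    """Turn per-interval hazard estimates h_i (e.g., one per year of
    follow-up, each in [0, 1]) into a cumulative survival curve:
    S(t) = product over i <= t of (1 - h_i)."""
    curve, s = [], 1.0
    for h in interval_hazards:
        s *= (1.0 - h)  # probability of surviving this interval too
        curve.append(s)
    return curve

# A model predicting 10% yearly hazard, then a riskier third year:
curve = survival_curve([0.1, 0.1, 0.5])
```

By construction the curve is non-increasing, which is what lets you read off "chance of being alive at year t" directly from the model output.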
If you take those who survived the 10-year follow-up, you can see the predictions are fairly flat, so the model gets it right in that their risk of actually dying during the follow-up period is fairly low. We worked on this, and we had quite a bit of press interest in the UK, including some slightly negative press interest. It says here: they will start to use a death calculator to tell you when you will die. Do you actually want to know? But what it's useful for is actually tracking trajectories over time. So if you have multiple ECGs, as shown here in two patients, you can look at their risk of survival over 15 years. Here, the red dots are inpatient ECGs, where you see there's a dip in their predicted survival probability, and then when they get treated and are released from hospital, that improves. So you can essentially track an individual's risk over time and use that to potentially guide treatment decisions. And if you take half a million patients, again from this Boston dataset, split them by risk quartiles, and follow them up over 20 years, you can see that those in the highest risk prediction quartile have an eight-fold age- and sex-adjusted risk of mortality. Even if you take normal ECGs, in patients who have been signed off by cardiologists as having normal ECGs, this hazard ratio of around eight still holds. So even in ECGs that we think are normal, the AI model is able to see things that the physician cannot see. But coming back to the point about research networks, AI models can be overfit to a training data set, and it's very important to have multiple data sets to test that your model can generalize.
And we've been very fortunate to have collaborators from across the world, including three separate groups from Brazil: one from primary care, one from a volunteer cohort from the civil service, and one from a cardiomyopathy cohort where we've had ECGs. We've been able to test the model and show that it generalizes quite well and works across all the different cohorts, and also in the UK volunteer cohorts. It generalizes across different pathologies, primary and secondary care, and also across different continents. We did a bit of explainability analysis to look at what the morphologies are, and we found things that, reassuringly, we understand: a broad QRS, abnormal T waves, abnormal ST segments, these are all things that relate to risk in our model. And since then, we have expanded the suite of models beyond mortality prediction, predicting a whole range of cardiovascular outcomes, as shown here, but also some non-cardiovascular outcomes like diabetes and CKD, which can be picked up from an ECG with C indices between about 0.7 and 0.8. Just another quick example to show the benefits of global collaboration. We had another model to predict future valvular heart disease, and for this we had nearly three million ECG-echocardiogram pairs from a collaborator in Shanghai who was able to share this wealth of information. So we were able to train a model using a purely Chinese dataset, and then test it in the Boston US-based dataset that I talked about earlier. Without going into too much detail, we've got a model that seems to be able to predict future valvular heart disease relatively well, with C indices again of around 0.7 to 0.8. So from a single ECG, it can tell you whether in the future you will have significant MR, AR, or TR, and it also externally validates relatively well. And again, these are the Kaplan-Meier curves over several years.
If you're in the highest risk prediction quartile from your single ECG, you have a much higher risk of developing these conditions in the future. This was in the derivation cohort from China, and you see very similar curves in external validation from the Beth Israel Deaconess Medical Center dataset from Boston. And as I said about AI models, we really want them to generalize. We want them to work across different cohorts, and that's why you need the collaborations. What we found in our models is that the performance seems to be relatively stable across males and females, but also across all the different ethnic groups. We were concerned at some point that ECG features are different across different ethnic groups, but this seems not to be so much the case, as we found. We did some explainability analysis again here, and interestingly, for MR it seems to be more about the QRS and the P-wave, as you can see here, and for predicting future AR, less the P-wave and more the QRS, as you might expect from where the valve is situated. So I guess that's really just to illustrate the benefit of research networks. We've had several papers in the last year alone, been quite productive, but all of this would not have been possible if we had not built a strong collaborative network of researchers and colleagues from across the world who've been happy to share not just expertise, but also data sets. And we met Andrew Krahn this morning to update him on what we're doing with his work, and I should have put his picture on here; obviously he's shared some of his ECGs from his hero data set of unexplained cardiac arrest.
What we've got is a network across North and South America, across East Asia with their massive, well-labeled data sets of millions and millions of ECGs, and colleagues in Europe, and as a result we've had some success. I think the HRS, with its aspirations for building a research network and a biorepository, could mirror something like this. As Peng says, this by itself is not sustainable: if any one of us stops working, it's gone. But if HRS leads on it and it carries the HRS brand, these things become far more sustainable for the future. So from the data science perspective, I think there's a clear benefit to collaboration. You get much larger data sets, millions and millions of examples, and these models are data-hungry; you can't do it with 100 ECGs. You also get more generalizable models: if you have data sets from across the world, they work everywhere. They work in lower- and middle-income countries, in different disease subgroups, and in different ethnicities, and there's always the worry that if you train, say, on just a UK data set, it won't work anywhere else. It allows you to robustly validate a model to make sure it works elsewhere, and as someone touched on earlier, it also helps with rare diseases. If you want to predict VF in a specific niche cohort, you won't have enough data just by yourself or with your friends; you need a big network to provide all of these ECGs to train the models. We've been fortunate to have lots of good collaborators, and we had a workshop in London that involved all these institutions, bringing us together to discuss expertise and data. I think networks can also share expertise, not just data. The reason I met Ken is that we reached out a couple of years ago to talk about digitization. We have software that can take paper or PDF ECGs and extract the digital signal, and Ken was keen to use that for one of his projects, and the model works pretty well.
Here's the ground truth, and here's what we extract from the paper or the PDF, and you can see they pretty much overlap. We've worked with several groups from around the world, helping them to digitize ECGs. Again, the HRS network could also share expertise and help each other with simple tasks like ECG digitization. I want to finish with a couple of examples from the European perspective of how we've worked together in research networks. The British Heart Foundation is a charity that funds a lot of cardiovascular research in the UK, and they built a data science network a few years ago. This really came together around the time of COVID, when we were able to pull together data from multiple institutions in the country and bring together researchers to understand the impact of COVID on the cardiovascular system, and it has now grown into a much bigger network that allows people to access data and has resulted in multiple publications in just the last few years. Another example from Europe: we can probably learn from EHRA, which is the European version of HRS. They don't do a lot of sharing of data, but what they do a lot of is establishing registry initiatives. If you look at their website, they will show you what papers have come out of these networks, and it seems that every two or three months they have a paper published. They've worked on things like genetic testing, cardioneuroablation, and detection and management of arrhythmia-induced cardiomyopathy, as we discussed earlier, and these are things that potentially the HRS network could work on. So that's my 10 minutes up. Before I finish, I asked ChatGPT to come up with a quote about why it's beneficial to have a research network, and this is what it came up with, which I quite like: no single institution or nation holds all the data or all the answers, and clearly we need open global partnerships for that to happen.
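The ECG digitization step described in this talk, extracting a digital signal from a scanned trace, can be caricatured in a few lines: binarize the image, then read the inked pixels column by column. Real pipelines must handle gridlines, multiple leads, and calibration pulses; this toy version assumes a single clean trace and an invented pixel scale:

```python
def trace_from_pixels(grid, baseline_row, mv_per_px):
    """grid: 2D list of 0/1 pixels (1 = ink). For each column, take the
    mean row of inked pixels and convert the displacement from the
    baseline row into millivolts. Columns with no ink carry the previous
    value forward (a crude gap fill)."""
    signal, last = [], 0.0
    for col in range(len(grid[0])):
        rows = [r for r in range(len(grid)) if grid[r][col]]
        if rows:
            mean_row = sum(rows) / len(rows)
            # rows increase downward, so above-baseline ink is positive voltage
            last = (baseline_row - mean_row) * mv_per_px
        signal.append(last)
    return signal
```

The value of sharing a tool like this across a network is exactly the point made above: one group solves the messy image-processing details once, and everyone else gets usable digital signals out of their paper archives.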
So thanks to the team, who've done a lot of the work, and thanks. Thank you. Thank you for the fantastic talk. Our last presentation will be from Dr. Karandeep Singh, who's local here in San Diego and an expert in informatics, large language models, and natural language processing. Thank you. Thanks so much, and I know we're at the tail end of our session, so I'll keep it quick, but hopefully we'll cover, if we're setting up a collaborative research network, how we might think of the role that large language models could play. In this talk, I'm going to very briefly cover what large language models are, how they work behind the scenes, what some common use cases are, how we have used them, and then close with an idea of how a research network might use them. So, going to what they are. Large language models are basically trained in two stages. The first stage is you collect data from essentially the entire internet. One such data set is termed the Pile, but there are different data sets going around, many of which contain copyrighted information and all kinds of other stuff, and those are used to train a model that can predict the next token, or next word, given all the previous words. Large language models in today's world are a fancy type of neural network known as a transformer, which has two parts to it. They first take the text that you give them and convert it into a series of numbers, and then they take those numbers and convert them back into text. So when you train a language model on internet data, what it can essentially do is autocomplete a sentence, autocomplete a paragraph, autocomplete a chapter of a book. As we know, that may not always be reliable, but the point is that they can do a lot more than the old autocompletes, which could autocomplete a word. The second stage of training a language model is actually what makes it useful, which is called instruction tuning.
This is where you take that model that was trained on the entirety of the internet to predict the next word, and you fine-tune it so that you give it instructions and the output that you want. So as an example, you might say, summarize this text, and you would have a bunch of text, and then you'd have as the answer an example of what that summarized text might look like. You might say, answer this multiple-choice medical question, and then your answer would be the actual answer. And it turns out that if you do enough of this supervised fine-tuning, or instruction tuning, where the instruction might be something like "explain the moon landing to a six-year-old" with an example answer and some context, the model eventually learns how to follow instructions. Most modern language models that we work with, like ChatGPT, are instruction-following language models, which is what makes them useful. It turns out that when you just autocomplete, there's a lot of thinking you have to do to get the thing to behave the way you want, but when it's instruction-tuned, it generally does what we expect as end users. So, to give you an example: if you have a prompt, that's the input text you put in, that says, determine if this patient has a transportation problem based on the following note, answer yes if a problem, no otherwise, do not provide any explanation, and then you copy-paste the actual note below that in your prompt: case management note, this patient called their family member to come pick them up, but their car is currently in the shop. Notice that that's something an old natural language processing approach would probably have failed on.
It doesn't mention transportation, it doesn't mention a problem; the car is currently in the shop, which humans know means it's not available, so there is a transportation problem. It used to be the case that if you were a natural language processing researcher, you would spend days trying to figure out these kinds of edge cases and fix them. Nowadays, when you feed this to a language model, and it converts that text into a bunch of numbers and the numbers back into text, most language models today will just spit out the answer: yes, there's a problem here. So that actually makes it quite useful as a way to extract information from text, in a way that previously required a lot of expertise and now is something most of us can do from the comfort of our phones. We're used to chatting with large language models; I think many people have used them in a colloquial, personal sense, but not in a professional sense. So if you're looking professionally at where they can help us do high-level tasks, here are some of the tasks they can do. They can help us extract information. Think of all the places in research or clinical care where we need to do chart reviews. Those are situations where previously you really had to optimize and do a lot of training to get something to behave the way you want, and now, off the shelf, if you've got information sitting there in a chart and you have the ability to securely run a language model on it in a HIPAA-compliant environment, you can actually speed up a lot of these tasks. These are probably the easiest tasks, and where I put a lot of the emphasis, because the information you're looking for is in the text; there's no prediction happening. Translation: you want to convert materials to another language, or just simplify them to make them more patient-accessible and patient-friendly. That's something most large language models can do off the shelf.
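The transportation-note example above follows a zero-shot extraction pattern: one fixed instruction, the pasted note, and a constrained yes/no answer. A sketch, where `complete` is a stand-in for whatever approved LLM endpoint a site uses (a hypothetical callable, not a real API):

```python
PROMPT_TEMPLATE = (
    "Determine if this patient has a transportation problem based on the "
    "following note. Answer yes if a problem, no otherwise. "
    "Do not provide any explanation.\n\nNote:\n{note}"
)

def classify_note(note, complete):
    """complete: a callable taking a prompt string and returning the
    model's text. Parses the constrained answer into a boolean, and
    refuses to guess on anything else."""
    raw = complete(PROMPT_TEMPLATE.format(note=note))
    answer = raw.strip().lower()
    if answer.startswith("yes"):
        return True
    if answer.startswith("no"):
        return False
    raise ValueError(f"unparseable model output: {raw!r}")
```

Constraining the output and failing loudly on anything unparseable is what makes this usable for chart review at scale, since unexpected outputs get flagged instead of silently miscoded.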
Summarization is an area that's improving. There are still a lot of challenges, because when we're summarizing a lot of text, many of the language models historically haven't been able to take in a ton of text, at least the free ones. The ChatGPTs of the world can, but those are often ones where we can't send protected health information. But this is an area that's improving. In fact, Llama 4, a model announced by Meta, which is out already but will be available soon on a lot of cloud services, can ingest 10 million tokens. That essentially means we'll be able to do whole-chart summarization, probably better than we can today with open tools. Then there's generation, like answering patient messages, and then prediction. Remember, we have that first stage where the text gets turned into numbers. You can take those numbers and feed them directly into a prediction model, and do prediction off of those. So I think that's another way we can use large language models. I won't get into the things on the right, but just to say, most of the use cases for language models that we're going to pursue, the ones I think are the easiest, are the ones where you're really just working with a prompt. You're just entering text, you're copy-pasting in patient-specific context, and trying to get out answers. We refer to that as zero-shot learning, because it just means you haven't given the model a bunch of examples; on your very first try, you're having it tackle a task it may or may not have ever seen before. When you work with language models in that way, how do you structure your instructions? There are prompt frameworks; I often teach RISEN. RISEN is like a SOAP note, or an SBAR, for these things.
So you write what role the AI is playing, you copy-paste the relevant input, you give it the recipe of steps you want it to follow, you tell it what the expected output is, and you tell it what not to do, which narrows the result. Usually, when you give instructions in that way, you get out the best possible response that you can. So it's not just a matter of conversationally chatting, but actually structuring it. A lot of the open-source LLMs are now actually really good. The sixth- and eighth-best language models in the public language model rankings are things that you can, in principle, download today. Many of these may not run on your local computer, but there are ones that will run on your local computer and are in the top 50 of all language models. How have we used them at UC San Diego Health? A couple of things. Quality measurement is really expensive: Hopkins published a study showing that it costs about $5 million to abstract about 150 quality measures. We are now starting to look at whether we can use large language models to read patient charts and abstract, for example, a 60-question sepsis measure, one question at a time, having the model read through the chart. On any individual question, we can get about 97% agreement or more. But on the series of 60 questions overall, do we agree on all of them? We can get 90% agreement. And that's actually roughly similar to the inter-rater agreement between different people trying to abstract that same sepsis measure. So the point here is that a lot of the things we've thought of as the really labor-intensive, time-intensive tasks of research are things we can start to potentially throw language models at, recognizing that we do need to do some validation of these tools. Similarly, we're trying to read our 50,000-plus patient safety reports that get filed per year.
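The RISEN structure described above (Role, Input, Steps, Expectation, Narrowing) can be sketched as a simple prompt builder. The section labels and ordering here are one illustrative rendering, not a canonical template:

```python
def risen_prompt(role, input_text, steps, expectation, narrowing):
    """Assemble a RISEN-style prompt: Role, Input, Steps (numbered),
    Expectation of the output, and Narrowing (what not to do)."""
    lines = [
        f"Role: {role}",
        f"Input:\n{input_text}",
        "Steps:",
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    lines += [
        f"Expectation: {expectation}",
        f"Narrowing: {narrowing}",
    ]
    return "\n".join(lines)

prompt = risen_prompt(
    role="clinical chart abstractor",
    input_text="(paste the case management note here)",
    steps=["read the note", "decide if a transportation problem exists"],
    expectation="a single word: yes or no",
    narrowing="do not provide any explanation",
)
```

Templating the structure this way also makes prompts reproducible across a research network, since every site fills the same five slots instead of chatting ad hoc.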
This is something where we have to assign severity, and we have to look at contributing factors. With an off-the-shelf language model, we can get pretty good at extracting those contributing factors. And with a fine-tuned language model, we can get pretty good at actually assigning the correct level of severity to those events. So again, these are things that we've had teams of 40 people doing, and now they're things we can do not only with a less time-intensive approach, but on a lot more patients. Hospitals are regulatorily required to do 20 sepsis case reviews per month. Now we can review every sepsis case that walks in the door and have that same measure be much more statistically valid than just the 20 randomly selected cases that we're currently required to review. I'll skip this one, but just to say that using language models doesn't always achieve the actual outcome we're looking for. In this case, it didn't really save time, and the whole reason we were trying to use it was to save time. Also, there are now several HIPAA-compliant, ChatGPT-like tools out there. I'm not endorsing either of these two tools, but just to say that at UC San Diego Health, these are two of the approved HIPAA-compliant tools that you can actually directly copy and paste patient information into. And this is something we're now upskilling all of our staff on, recognizing that research administration is one of the biggest places where this sort of thing will actually help. So I'll close with this slide: what are the ways we can use large language models to support collaborative research? I see this as three categories. One is all the overhead of research administration. So think about drafting, modifying, and comparing trial protocols.
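The per-question versus whole-measure agreement comparison described earlier (about 97% on any single question, about 90% on all questions together) is straightforward to compute. A minimal sketch with invented toy data for two "abstractors", say a human reviewer and a language model, across a few charts:

```python
# Sketch: per-question agreement vs. all-questions agreement for a
# multi-question abstraction measure. Toy data: 5 charts x 4 questions.
import numpy as np

rng = np.random.default_rng(0)
human = rng.integers(0, 2, size=(5, 4))  # rows: charts, cols: questions
model = human.copy()
model[0, 1] ^= 1  # introduce a single disagreement on one question

matches = human == model
per_question = matches.mean()               # fraction of matching answers
whole_chart = matches.all(axis=1).mean()    # charts agreeing on ALL questions
print(per_question, whole_chart)
```

One disagreement barely moves the per-question rate but fails an entire chart, which is why whole-measure agreement is always the stricter number, for models and for human inter-rater agreement alike.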
Certainly these things can hallucinate, but with, for example, the Copilot tool I showed, I can upload examples of current protocols we have and have it modify or adapt those protocols. So this is not where I'm telling it to do the science. I'm saying, here are some example protocols I have; I want you to adapt these and do the following thing differently. That normally requires a lot of human time, and oftentimes we can get a good starting point this way. Institution-specific language: if we have multi-center trials, there's a lot of tweaking you have to do. Why do the tweaking yourself? Have the AI draft the tweaking, and then you can review it, and that's a lot faster. Even moving information from spreadsheets to documents and vice versa is something that, for example, Copilot can do: you give it an Excel sheet and say, extract this information, put it in a Word document, and create the Word document for me, and then it creates it, and you can click and download the Word document. Just mundane, boring stuff that language models can do. Then I think there's conducting research using LLMs, where we're using them to help us with systematic reviews, for which there are tools out there specifically; extracting information from charts; or deploying chatbots to answer patient questions before they get escalated to a clinical research assistant. Those are things where I think they can help us do the research. And then obviously, one area that I, as an AI researcher, find interesting is that they are actually an interesting object of research themselves. We can benchmark them on their ability to do various useful things for us, to deliver that better patient care and to have that better policy story to tell. And we can also fine-tune them to do specific tasks. More and more recently, I would say the multimodal ECG AI work that you're talking about is also blending unstructured text as part of that overall task.
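The "spreadsheet to document" chore mentioned above is mechanical enough to show in a few lines. This sketch uses the standard-library `csv` module and plain text as stand-ins for Excel and Word; the site names and enrollment figures are invented, and a tool like Copilot would handle the real file formats.

```python
# Sketch: extract rows from a "spreadsheet" and render a "document".
# CSV and plain text stand in for Excel and Word in this toy example.
import csv
import io

spreadsheet = io.StringIO("site,enrolled\nUCSD,42\nHopkins,37\n")
rows = list(csv.DictReader(spreadsheet))

document = "Enrollment summary\n\n" + "\n".join(
    f"- {r['site']}: {r['enrolled']} patients enrolled" for r in rows
)
print(document)
```

The point of the talk stands either way: this is mundane, deterministic transformation work, exactly the kind of thing worth delegating so human time goes to review rather than transcription.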
And so just to say that's a narrow slice of maybe what we can do, but there's a broader way in which we can use them as part of the infrastructure to support a collaborative research network. Thank you so much. Thank you. Thank you for all the wonderful presentations. We'll close the session now. Please come to the 12:45 panelists' round-table discussion.
Video Summary
The session outlined a strategic vision for the Heart Rhythm Society (HRS) to elevate its research network and enhance collaboration across various projects in the field of electrophysiology (EP). Acknowledging past challenges, leaders highlighted the society's commitment to advancing research, education, and advocacy as key pillars. Emphasis was placed on forming robust networks among centers and investigators for clinical, basic, and translational research, leveraging modern tools such as artificial intelligence and comprehensive data repositories.

Key initiatives include the HRS Research Network, aimed at fostering collaborative, multi-center trials to accelerate advancements in arrhythmia research. Proposed focus areas range from sudden cardiac arrest studies to innovative technologies like conduction system pacing (CSP) and AI-driven ECG analysis. These initiatives aim to answer critical questions regarding the efficacy of new technologies, promote translational science, and ultimately improve patient care outcomes.

The session also highlighted examples of successful models, like the Canadian Heart Rhythm Society Research Network and the HFSA Research Network, that HRS could emulate. Additionally, the potential of large language models (LLMs) to enhance research efforts was discussed, particularly in streamlining document management, conducting systematic reviews, and integrating into multimodal data analysis.

To support this, philanthropy and industry partnerships are viewed as crucial for funding, alongside building a sustainable endowment to back fellowships and long-term research projects. By integrating these elements, HRS aims to overcome existing barriers, promote early career development in EP research, and support diverse leadership in clinical trials.
Keywords
Heart Rhythm Society
electrophysiology
research network
collaboration
artificial intelligence
arrhythmia research
conduction system pacing
AI-driven ECG analysis
large language models
philanthropy
industry partnerships