Harness the Power of Data to Advance the Science of EP
Video Transcription
Hello, welcome, and thank you all for taking the time to join us today for this presentation, titled Harnessing the Power of Data to Advance the Science of EP. It's my privilege to introduce our presenter, Dr. Paul Tsai. He is the Director of the Comprehensive AFib Program at Brigham and Women's Hospital and Associate Professor of Medicine at Harvard Medical School. As Director of the Comprehensive AFib Program, Dr. Tsai and his colleagues take a multidisciplinary approach to the treatment and management of the disease, and one area of particular interest to him has been finding ways to more effectively leverage the overwhelming amounts of data tied to the science of EP. He has been a valuable and outspoken member of various advisory boards and steering committees focused on this very topic, and we look forward to learning from him today. At the conclusion of the presentation we'll have about five minutes for Q&A, so please stay tuned for that. With that, I'll turn it over to you, Dr. Tsai.

Thank you so much, Chris. And thank you to the Heart Rhythm Society for the opportunity to present on what I think is an exciting topic that will affect all of us in electrophysiology. I'd also like to thank Biosense Webster for the ability to participate in this exciting area of research and development in our field. Without further ado, here are my disclosures, and here is an outline of what I'd like to present today. First, I want to go over the issues here and now with data management in electrophysiology: What are the problems? What are the roadblocks? What are the concerns? Then I will move on to the solutions available today to address them, and beyond that, to how we can leverage these newer ways and tools of managing data, such as artificial intelligence and machine learning, to advance our field of electrophysiology.

Here is a slide that demonstrates the issue at hand: a vast quantity of data is produced in the EP lab every day. There are well over 10,000 mapping systems installed worldwide, and on those systems more than a million electrophysiology mapping and ablation cases are performed annually. Together those cases generate over two petabytes of data, an almost unimaginable amount, on the order of two gigabytes per case. And volume is not the only issue; the data is also heterogeneous. As you see on the right side of the screen, it includes electroanatomic mapping data, the electrograms we record, the movements of our catheters during mapping and ablation, the locations of ablation lesions, and the data from the imaging modalities we integrate into our maps, including CT scans, the ultrasound devices we use, and fluoroscopy. So what do we do with all of that data today? Unfortunately, the current situation is not ideal: data storage and retrieval are fragmented, disjointed, and difficult. You're probably familiar with this.
If you work in the EP lab, you know that much of the time we're storing data on what is by now somewhat arcane technology: disks, hard drives, and the like. It's stored locally and is hard to access, so performing data analytics, reviewing cases, or compiling case statistics means laboriously collating that data. And going beyond that, when we try to share data on a larger scale, whether for academic research, clinical applications, or research and development, that fragmentation poses a significant barrier.

Here is a slide that introduces the new solution that's available right now. The CARTONET architecture is shown here in schematic form. As the arrows indicate, data transfer and access occur through a series of way stations and platforms, and at the core, all of the data is stored on a secure cloud server. Biosense Webster has partnered with Microsoft to host that information securely on the Azure cloud platform, which is used throughout the medical field and has demonstrated, over and over again, both the experience and the security required for critical information. Data is moved into the cloud through a partnership with Siemens via the Teamplay Gateway, a platform that is also widely used throughout medicine, especially in imaging; it is installed in many hospitals in the U.S. and around the world, and it likewise provides security and privacy for patient information. The idea, as this part of the schematic shows, is that when we acquire data during a case, it lands on the local hospital network, where it can be stored per hospital protocol. When you use CARTONET, that information is additionally sent through the secure Siemens Teamplay Gateway into the cloud. Once there, those with permitted access can reach the data, either in anonymized form or, as needed clinically, in identified form, from a standard computer, tablet, or whatever device we choose.

Giving a talk over Zoom in this day and age, it goes without saying that we are all keenly aware of the growing importance of remote access, not only to conversation and clinical work but also to data. In our brave new world of COVID-19, the CARTONET platform provides, in many ways, an ideal means of securely accessing that data, whether for clinical use, research, or other purposes.
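To make that flow concrete, here is a minimal sketch in Python of the path the talk describes: a case bundle leaves the local hospital network through a gateway, lands in cloud storage, and is fetched back out at different access levels. Every name in it (CaseBundle, TeamplayGateway, AzureStore, AccessLevel) is a hypothetical stand-in for illustration only, not the actual CARTONET, Teamplay, or Azure API.

```python
from dataclasses import dataclass
from enum import Enum


class AccessLevel(Enum):
    CLINICAL = "clinical"        # identified record, e.g. pulled up for a redo procedure
    ANONYMIZED = "anonymized"    # de-identified record for research and analytics


@dataclass
class CaseBundle:
    """One procedure's worth of mapping-system output."""
    case_id: str
    maps: bytes      # electroanatomic mapping data
    egms: bytes      # recorded electrograms
    imaging: bytes   # integrated CT / ultrasound / fluoroscopy


class AzureStore:
    """Stand-in for the secure cloud storage tier."""
    def __init__(self) -> None:
        self._cases: dict[str, CaseBundle] = {}

    def store(self, bundle: CaseBundle) -> None:
        self._cases[bundle.case_id] = bundle

    def fetch(self, case_id: str, level: AccessLevel) -> CaseBundle:
        bundle = self._cases[case_id]
        if level is AccessLevel.ANONYMIZED:
            # strip the identifier before handing the record out
            return CaseBundle("anonymized", bundle.maps, bundle.egms, bundle.imaging)
        return bundle


class TeamplayGateway:
    """Stand-in for the gateway brokering hospital-network-to-cloud transfer."""
    def __init__(self, cloud: AzureStore) -> None:
        self.cloud = cloud

    def push(self, bundle: CaseBundle) -> None:
        self.cloud.store(bundle)  # in reality: an authenticated, encrypted transfer


# Usage: a case leaves the local network via the gateway, then is fetched remotely.
cloud = AzureStore()
TeamplayGateway(cloud).push(CaseBundle("af-0001", b"", b"", b""))
research_copy = cloud.fetch("af-0001", AccessLevel.ANONYMIZED)
```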
Moving into a little more granularity: how does the CARTONET platform work? You perform your cases, the data is uploaded automatically into the CARTONET system, and it is stored so that it can be retrieved in a very user-friendly way, through a dashboard where you can review your individual case data, including remotely. If needed, you can attach key patient information, such as demographics and clinical characteristics, that may help in how you manage the patient based on what was done in the case.

The parameters currently captured are listed here, and I'll go over them quickly: the hospital and lab where the procedure was performed, the procedure date and type, procedure time, the catheters used, the software modules used, and metrics such as mapping time, ablation time, number of lesions, ablation statistics, number of points taken, average contact force, and average VISITAG SURPOINT tag index. All of these are available today. And I'd say the larger-picture feature is the platform's flexibility: if we as a field of electrophysiology determine that other parameters are important to track, they can very simply be added to the dashboard later on. (A sketch of such a case record follows this section.)

Once the data is in the system, so to speak, how do access and sharing work? This is another schematic that gives you an idea. With the information from the mapping system stored in the cloud after a case, the data can be accessed by various interested parties at varying degrees of security and anonymity. Non-anonymized data can, of course, be accessed for clinical purposes, such as a redo procedure. And just imagine how useful it will be when, during a procedure, we can quickly pull up not only the previous mapping data but also where we performed ablation, to help predict where electrical reconnections and recurrences may occur, and to go beyond that; or a redo from another institution, where that data comes into your lab right as you are doing the procedure. The same holds true for research purposes. The data can be provided at a more anonymized level for institutional and research access; it can be shared, in ways we deem appropriate, with both referring health care providers and patients themselves; and it can be exported in various formats for research and quality purposes. And, of course, Biosense Webster can access it as well, to help improve the system in a positive feedback loop.

Here is another way of displaying that data access and the various players who may be interested, which I'll use to transition to some of the uses of the data. On this graph, the y-axis represents data ownership, with higher levels of ownership toward the top, and the x-axis represents AI and machine learning capability, with higher levels toward the right. Private, non-academic programs more likely than not have less ability to leverage these newer machine learning technologies but, depending on their case volume and system size, a high level of data ownership, while large academic medical research centers, as well as industry players and researchers, potentially have a great deal of access and capability when it comes to artificial intelligence and machine learning. The idea is that all of these entities want to be, or should be, able to communicate with one another to leverage each other's strengths.
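As referenced above, here is a minimal sketch of what one such case record might look like, assuming hypothetical field names that paraphrase the parameters named in the talk; the platform's real schema may differ.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean


@dataclass
class CaseRecord:
    """One dashboard row; field names paraphrase the parameters listed in the talk."""
    hospital: str
    lab: str
    procedure_date: date
    procedure_type: str            # e.g. "AF ablation"
    procedure_minutes: float
    catheters: list[str]
    software_modules: list[str]
    mapping_minutes: float
    ablation_minutes: float
    lesion_count: int
    points_taken: int
    avg_contact_force_g: float     # average contact force, in grams
    avg_surpoint_tag_index: float  # average VISITAG SURPOINT tag index


def mean_ablation_time(cases: list[CaseRecord], lab: str) -> float:
    """An example of the aggregate, per-lab analytics a dashboard could expose."""
    return mean(c.ablation_minutes for c in cases if c.lab == lab)
```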
For example, high-volume centers may hold a great deal of data, in some cases producing more than many academic centers, but right now that data simply goes into storage; it is very hard to access and, in fact, likely underutilized. We can lower the energy barrier, so to speak, between that data and the entities that can actually help us improve EP through these newer tools.

In that regard, turning to the specifics of artificial intelligence and machine learning in EP, this slide gives you a flavor of what some of us are already thinking about in terms of applications of these AI and machine learning tools across various subtopics of electrophysiology. I've listed some that are quite intriguing and exciting. Starting at the top left: ECG signal analysis has been an area of interest for decades, and it is one area where machine learning can have a great deal of impact on surface, noninvasive ECGs. In invasive studies, signal quality, and what we can extract from the intracardiac data, could potentially be improved through machine learning algorithms. Likewise in medical imaging, with all the data present in each imaging study, tasks such as segmentation and automatic contouring are places where machine learning may have interesting applications. Along the bottom row are some really interesting longer-term potential applications. Disease management here means not just the procedure itself but the pre- and post-ablation phases: how we make decisions about clinical management, stroke prevention, anticoagulation management, and beyond. AI and machine learning are already being investigated in those areas. Then there is optimizing personalized treatment, not only in a general sense but even during an EP procedure: can these tools help us determine more reliably where to ablate in a given case? And finally, risk prediction: how can AI and machine learning help predict the risk of cardiovascular disease and its complications?

To summarize, I hope I've covered what I set out to do. I first discussed the problem at hand: the vast and ever-growing amount of data we produce as electrophysiologists, in our procedures and beyond, which today is stored locally with little to no clinical or scientific benefit, lost in the dust, so to speak. And then there is the data aggregation issue: taking larger and larger amounts of data and interpreting and sharing it is very challenging today. CARTONET, a solution that is currently available, helps address these issues. We gain the ability to remotely access our cases and remotely access analytics of aggregate cases, with confidence in a secure cloud server through Microsoft Azure and secure access in and out of that storage through the Siemens Teamplay platform. And the very exciting, forward-looking idea is that once you have these massive amounts of data, this is a natural place where machine learning and artificial intelligence can really be leveraged to find new ways to approach our EP patients, whether in the lab or in the clinic.
And that is in part because, beyond collecting and leveraging this data, the platform helps facilitate the interactions among the various players in our field, clinically, in research, and in industry. I'd like to end here, and we will now transition to questions and answers. Thank you very much.

Thank you, Paul, for the excellent presentation. I'd also like to add that Dr. Liat Tsoref is on the call with us, and we'll direct a question to her as well. She is the Director of R&D, Collaboration and Acceleration at Biosense Webster, headquartered in Israel. Paul, let me ask you a question. You mentioned the potential research opportunities that cloud storage of this data will afford us, which I find very intriguing, including the possibility of examining lesions for durability and the various parameters that might go into creating durable ablations. Could you give us a little more detail on your thoughts about collaboration for future clinical research projects?

Yes, thank you, Eli. Like many other institutions, our hospital utilizes an EMR, and we leverage it for many purposes, including physician-to-physician communication and back-and-forth communication between physicians and patients. I envision the data you can gather from CARTONET adding an important dimension to how physicians and patients communicate. Some of it will be available directly through the CARTONET platform for those who have it, which will likely primarily be electrophysiologists. But for referring physicians, whether general cardiologists or internists, and certainly for patients, we have to be able to get this information through the EMR into their usual channels of communication.

Dr. Tsoref, you also mentioned risk prediction as a possible outcome of data storage and offline analysis in the future. Could you please elaborate a little and let us know your thoughts on risk prediction?

Sure, I'd be happy to. Maybe I'll start by explaining, in a very simplified way, what machine learning is. It was Arthur Samuel, one of the fathers of machine learning, who said that machine learning gives computers the ability to learn without being explicitly programmed. Until now, in the classical approach to risk prediction, we took data and results and created algorithms that provide a step-by-step definition for assessing the risk. In the new era of machine learning, we still need to collect a lot of data, but we don't even need to know in advance what type of data to collect or what it should include. As long as we collect enough data and we have the outcome, which serves as the ground truth, and we feed it to a machine learning algorithm, we can build a new type of algorithm that can later predict quite accurately what a patient's risk would be in different situations. The nice thing about it is that you don't have to know in advance the exact features or the exact parameters you need to collect; you just have to collect as much data as you can.
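As a purely illustrative sketch of the workflow Dr. Tsoref describes, the toy example below hands raw traces plus observed outcomes to a generic learner, with no hand-picked features. The data is synthetic and the model deliberately simple (a real ECG model would typically be a far richer one, such as a deep network); nothing here reflects any CARTONET functionality.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-ins: 500 "ECGs" of 1,000 samples each, plus an observed outcome
# per patient (the ground truth Dr. Tsoref refers to).
rng = np.random.default_rng(0)
ecgs = rng.normal(size=(500, 1000))      # raw traces, no hand-engineered features
outcomes = rng.integers(0, 2, size=500)  # e.g. arrhythmic event observed: yes/no

X_train, X_test, y_train, y_test = train_test_split(
    ecgs, outcomes, test_size=0.2, random_state=0
)

# The learner is handed the whole trace; it is agnostic to our prior
# assumptions about which ECG characteristics matter.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```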
Yes, I'll add to that. Take, in our field, the risk of malignant arrhythmia or sudden death: for many years we have struggled to find good risk predictors, and perhaps our minds are stuck in a rut. We've been thinking along the lines of the signal-averaged ECG, LV ejection fraction, family history, and the like, but perhaps there is something we simply haven't thought of as a field because it lies beyond our field of view. The wonderful potential of machine learning, I think, is that it is agnostic to the presumptions and assumptions we bring to it; it will, in theory, surface information that is independent of any of those pre-made assumptions.

So instead of taking specific features or characteristics of the ECG, we could just feed the whole ECG into the machine learning algorithm, and from that it can later learn by itself what is risky, what is not risky, and so on.

And finally, is the product we've heard about today commercially available in the United States?

Yes, the product is commercial. It is commercial in Europe, the United States, and Australia, and if you're interested, you can simply contact our representatives, who will be happy to help.

I'd like to thank Drs. Paul Tsai and Liat Tsoref for this excellent presentation. I've learned a lot, and I hope you all have as well. Thank you.
Video Summary
In this presentation, Dr. Paul Tsai discusses the challenges of managing and leveraging the vast amount of data produced in electrophysiology (EP) procedures. He highlights the shortcomings of current data storage and retrieval methods, which are often fragmented and difficult to use. Dr. Tsai introduces CARTONET, a platform that aims to address these challenges by securely storing EP data on a cloud server and allowing remote access and analytics. He explains how the Microsoft Azure cloud platform and the Siemens Teamplay Gateway are used in this system to ensure security and privacy. Furthermore, Dr. Tsai discusses the potential of leveraging artificial intelligence (AI) and machine learning in EP through the analysis of large amounts of data, giving examples of potential applications such as improving signal analysis, medical imaging, disease management, personalized treatment optimization, and risk prediction. Overall, CARTONET provides a solution to the problem of data management in EP and opens up possibilities for further advancement through AI and machine learning.
Keywords
electrophysiology
data management
CARTONET
cloud server
artificial intelligence
machine learning