What's New in Digital Health?
Video Transcription
Well, thank you everyone for joining today, we're really excited for this talk. My name is Chris Chung, I'm from the University of Toronto, Sunnybrook Health Sciences Centre, and my co-chair, Dr. Sanjeev Narayan, from Stanford University, is probably just running a bit late, so he'll be joining us shortly, but I think we'll get started. We've got an exciting set of talks today on what's new in digital health, so we're very excited for all our speakers, and we anticipate lots of discussion. So without further ado, we'll get started. Our first speaker is Dr. Henry Grewitz from the University Hospital Leuven, speaking to us on advancements in digital devices for AFib detection and management and the emerging role of PPG technologies. One thing I'll say is we'll have a Q&A at the end, so we'll go through the four presentations first and then save all our questions for the end. If you have any questions, though, please feel free to log onto the app and send them through the Q&A and we'll make sure to get to them at the end. Just waiting to get my slides up here, yeah. So good afternoon, everyone, and thank you for attending my presentation on the emerging role of PPG. Now, this session is on what's new in digital health; however, PPG is not new. Let's see if this works to go to the next slide. Yeah, PPG is not new. It was first described in 1938 as a technology that measures changes in blood volume, and then it disappeared from the clinical scene for a while, and when it came back, we had lost the habit of looking at the tracing. This was only about 30 years after the invention of the EKG, whose tracings we all interpret, and still today, if an algorithm detects AF on a digital EKG device, we don't feel comfortable changing therapy without looking at the ECG traces ourselves. The same should be true for the PPG tracing, and this is something I want to express in this presentation.
If we want to use PPG for AF management, we should learn to interpret PPG recordings. So what does a PPG tracing represent? Think of PPG as the digital pulse. In smartphone PPG, a fingertip is placed over the smartphone camera with the flashlight turned on, and the camera detects cyclic color changes, caused by the pulsatile expansion of capillaries, reflecting the heartbeat and generating a waveform that is regular in sinus rhythm and irregular in AF. The same principle of a light source and a photodetector underlies all PPG devices, but I want to stress the difference between smartwatch PPG and smartphone PPG. Most smartwatches and other wearable PPG devices are primarily designed for and marketed directly to consumers. These watches perform rhythm checks in the background, and for this they use short PPG recordings that are labeled by an algorithm as sinus rhythm or irregular rhythm. Only when a certain threshold of irregular recordings has been exceeded is a notification triggered. And this notification is the only feedback the user or the physician receives; we have little or no data on the labeling of the individual recordings. So in the interest of time, I will only discuss this for the Apple Watch. Here we know that the positive predictive value of an individual tachogram is only 66%, and by requiring five out of six irregular tachograms before a notification is triggered, the positive predictive value increases to 95%. But the sensitivity remains at 60%, which is fine for a screening tool, but is insufficient if we want to use it to guide AF management. And according to this paper published just two weeks ago in Heart Rhythm, this still might be an overestimation of the true performance. So in contrast to most wearable devices, the leading smartphone PPG applications are designed to operate like a clinical device.
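As described above, the watch fires a notification only once five of the most recent six tachograms are labeled irregular. A minimal sketch of that trigger logic, as an illustrative reconstruction of the described behavior rather than vendor code:

```python
# Sliding-window notification logic: individual tachograms are labeled
# irregular/regular by the algorithm, and a notification fires only once
# 5 of the most recent 6 tachograms are irregular. The 5-of-6 threshold
# mirrors the talk; the function itself is a hypothetical reconstruction.

from collections import deque

def notify(tachogram_labels, window=6, threshold=5):
    """Return the index at which a notification would fire, or None.

    tachogram_labels: iterable of booleans, True = labeled irregular.
    """
    recent = deque(maxlen=window)
    for i, irregular in enumerate(tachogram_labels):
        recent.append(irregular)
        if len(recent) == window and sum(recent) >= threshold:
            return i
    return None
```

Requiring near-unanimity across the window is what pushes the notification-level positive predictive value up from the per-tachogram value, at the cost of sensitivity; the smartphone apps discussed next take a different route and expose the full tracing alongside the algorithm's label.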
Alongside algorithm detection, they provide an interpretable PPG tracing, similar to how we use digital EKG devices, and necessary to guide AF management. Now, these depicted applications are some of the highest performing PPG apps, of which FibriCheck is the only one that is FDA approved for AF detection, but there are many more. Every gray dot here represents another PPG app, and as you can see, they all reported excellent sensitivity and specificity. Now, with such high numbers, you're right to be vigilant, because many of these were tested in selected populations and performed under supervised clinical conditions. So before we implemented a PPG app for AF management, we validated the accuracy of the algorithm against single-lead EKG in the clinical setting in which we wanted to use the application. For us, that was at home, before and after PVI. And in our 3,400 study measurements, the prevalence of AF was 21%, and we validated the PPG algorithm to detect AF with a sensitivity and specificity above 98 and 99%, and predictive values above 99%. After the algorithm classification, we can select and interpret the PPG tracings that might impact AF management. So we should get familiar with these tracings, and the first step of interpretation is to check the quality. Poor quality tracings should be marked by the app and should not be interpreted. The tracing is accompanied by a tachogram reflecting the heartbeat intervals, and a Lorentz plot representing the relation between each beat-to-beat interval and the next. In sinus rhythm, this will be a straight line and a small dot. In AF, the tachogram will show a random variation of heartbeat intervals and a big cloud on the Lorentz plot.
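The tachogram and Lorentz-plot representations just described can be computed directly from the detected beat times. A minimal sketch, using the normalised RMSSD (a standard heart-rate-variability statistic) as a one-number summary of how dispersed the Lorentz cloud is; the cut-off value is illustrative, not a validated threshold:

```python
# The tachogram is the series of beat-to-beat (RR) intervals; the Lorentz
# (Poincare) plot pairs each interval with the next. The spread of that
# plot, summarised here by the RMSSD normalised to the mean RR interval,
# separates the tight sinus-rhythm dot from the diffuse AF cloud.

import math

def rr_intervals(beat_times_s):
    """Tachogram: successive beat-to-beat intervals from beat timestamps."""
    return [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]

def lorenz_points(rr):
    """Lorentz plot: (RR[n], RR[n+1]) pairs."""
    return list(zip(rr, rr[1:]))

def normalised_rmssd(rr):
    """Root mean square of successive differences, divided by mean RR."""
    diffs = [b - a for a, b in lorenz_points(rr)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return rmssd / (sum(rr) / len(rr))

def looks_af_like(rr, threshold=0.10):  # illustrative cut-off, not validated
    return normalised_rmssd(rr) > threshold
```

A perfectly regular rhythm gives a normalised RMSSD of zero (the small dot), while the random beat-to-beat variation of AF gives a large value (the big cloud); real apps combine such dispersion features with signal-quality checks.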
And this way of representing a PPG recording helps to differentiate AF from other irregular rhythms, like sinus rhythm with a high heart rate variability, which is reflected by an oscillating tachogram and an ellipse-shaped Lorentz plot, or extrasystoles, reflected as extra clouds on the Lorentz plot. Using this representation, we tested the accuracy of 76 cardiologists in differentiating sinus rhythm from AF in PPG waveforms, which they did with a sensitivity and specificity of 95 and 92 percent, comparable with the diagnostic accuracy of these physicians using single-lead EKG. And this illustrates the feasibility of manually reading PPG measurements and potentially even diagnosing AF in the future. The first step towards implementation of smartphone PPG for AF management was made during the COVID pandemic by the TeleCheck-AF project. Here, it was used to support teleconsultations, and half of the patients included in this project were post-PVI patients. So we conducted a study comparing the conventional follow-up approach to detect AF recurrence after PVI, with Holter monitoring and EKGs, against a digital follow-up approach consisting of twice-daily PPG measurements. After 12 months, the conventional follow-up approach detected AF recurrence in 18% of patients, whereas the digital follow-up strategy detected AF recurrence in twice as many patients. And I'll show you an example of why we think this is useful. This is one of the patients who underwent AF ablation but continued to suffer from palpitations after the procedure, which was frustrating because the EKG-based follow-up detected no AF recurrence, whereas the digital follow-up revealed that his symptoms coincided with episodes of AF. And so this patient received a second procedure based on what we learned from the digital follow-up.
Now, this is an example of PPG monitoring that impacted AF management, but the effect could not be measured, as there was no randomization in this study. So we were able to evaluate the effect on AF management in a trial that randomized patients after cardiac surgery to either 45 days of smartphone rhythm monitoring at home versus usual care. And with usual care, late postoperative AF was detected in 2%. With PPG-based monitoring, AF detection was increased to 18%. And this led to a five-fold increase in AF management interventions, which here consisted of anticoagulation therapy and rhythm control therapy. This study demonstrated the feasibility of impacting AF management with smartphone rhythm monitoring, and I really look forward to future trials that will study the effect on clinical outcomes. To summarize in four key points: one, we are evolving from using PPG only as an AF screening tool to using PPG as a technology that can support AF management. Two, smartphone PPG is an alternative to what we know as smartwatch PPG with single-lead EKG confirmation. Three, use PPG algorithms that are validated in the setting you want to use them in, meaning in a real-world setting. And finally, when in doubt, look at the PPG tracing. Thank you. Great. Thank you very much for that presentation. I think I'll go check my PPG recording right now. So without further ado, I'd like to invite our second speaker to the stage, Dr. Chad Bonhomme. Dr. Chad Bonhomme is presenting from Grand Heart and will be speaking to us on the topic of real-time monitoring of BNP levels, aptamer technology in cardiology. Thank you, Dr. Bonhomme. Good afternoon. Thanks for being here. I'm happy to provide an update, in a sense, on aptamer biosensors in cardiology, and this is really an update from HRX 2023 in Seattle, where we had a talk on this.
But first, let me figure this out as well, my disclosures, and give a sense of what an aptamer is, if you're not familiar. Essentially, they are sequences of nucleic acids that are designed to bind onto a target molecule, and the way the process works is that there's typically a very large library of aptamers, about 10 to the 12th to 10 to the 15th, and they are incubated with a target molecule and then filtered; the aptamers which do not bind are then washed away, and then there's a removal of the target, and then a rinse-and-repeat cycle, and this is done until there are several that are chosen to investigate further. Essentially, the way they work, you can imagine that there is a gold strip with the aptamer attached onto it, sort of like a plate, and then an arm going up, and then a nucleic acid sequence, and when the target molecule comes along, it grasps onto it and undergoes a conformational change. When that change occurs, there's a change in position towards that gold strip, and between the two, there is an electrochemical current that is created, and there's a cadence of the grasping and the release, the up and down, and the measurement of this cadence can essentially be extrapolated to the level of any targeted biomolecule. The problem with this technology has been durability, through a process that is called fouling. Essentially, there's a degradation of what is placed upon the gold electrode, such that for many years, the durability was only a few hours, maybe in some cases up to 48 hours, but nothing that was clinically applicable, and this is really the major problem that has been solved over the last couple of years. If you follow this technology, there was a release about three weeks ago where one particular company has now solved this to the point where their durability is up to 14 days.
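In electrochemical aptamer sensors, the grasp-and-release cadence described above is commonly reduced to a fraction of aptamers bound at any instant, which in the simplest model follows a Langmuir binding isotherm. A generic sketch of turning a calibrated signal back into a concentration; the dissociation constant and signal bounds here are hypothetical calibration values, not any vendor's:

```python
# Generic signal-to-concentration model for a binding-based sensor:
# the fraction of aptamers occupied follows a Langmuir isotherm, and the
# measured signal is assumed linear between the all-free and all-bound
# states. All constants are hypothetical calibration values.

def fraction_bound(conc, kd):
    """Langmuir isotherm: fraction of aptamers occupied at concentration conc."""
    return conc / (conc + kd)

def concentration_from_signal(signal, sig_free, sig_bound, kd):
    """Invert the linear signal model: signal = sig_free + (sig_bound - sig_free) * f."""
    f = (signal - sig_free) / (sig_bound - sig_free)
    f = min(max(f, 0.0), 0.999)  # clamp away from the f = 1 singularity
    return kd * f / (1.0 - f)
```

The round trip is consistent: a concentration equal to the dissociation constant gives half occupancy, and inverting the resulting signal recovers the same concentration. Real devices layer drift correction and fouling compensation on top of this.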
The molecules that can be targeted are typically up to around 60 kilodaltons, and number one here on this list is phenylalanine, and this is an interesting scenario; actually my last slide is on it, but I'll go ahead and talk about it now. A lot of the effort, for a few companies, has been on NT-proBNP, to monitor in real time for heart failure, and with the aptamer technology, a level can be obtained around every 15 minutes, and there is an intraday variation that occurs. But recently there was an approach by the PKU Association to monitor phenylalanine for kids, and I think that's probably going to end up being the first one on the market whereby patients are managed by their levels. The rivaroxaban molecule was one of the very first to be tested. It was sort of a proof of concept, but really doesn't have a whole lot of clinical applicability. Antibiotic levels, hormone levels, and particularly inflammatory markers are of interest, but there are several companies now that are going after NT-proBNP, as that seems to be the most commercially viable target. What is next? Multiplexing, and this is where you have a microstrip with several aptamers on it, looking for a cocktail to manage, in this example, heart failure, where the interest would be NT-proBNP and perhaps a combination of potassium, magnesium, creatinine, or cystatin C. I would imagine that the human data will start coming out in 2026, and it will be for phenylalanine, NT-proBNP, and likely interleukin-6 and CRP. The form factor is essentially the same as continuous glucose monitoring, which is not aptamer-based but enzyme-based. That form factor, essentially the manufacturing involved in it, is a chip replacement. It has a different power unit. The connectivity is much the same, but because of this, it can easily be attached to a strip that could combine ECG data. So that product is also being worked on.
As well, there is interest at this point in going beyond continuous glucose monitoring, in a sense of skating to where the puck is going to be, as glucose monitoring is now coming to parity. The companies are interested in insulin, glucagon, somatostatin, and cortisol, to use predictive analytics to better manage the amplitude of the glucose levels from an insulin pump. And again, this is my final slide, which is an interesting disease state, PKU, where you really need to be in a fairly narrow target zone, and it is difficult in kids. On the upper edge, you have cognitive damage. On the lower end, you end up in a malnourished zone. So I would anticipate, again, this will be exciting. I think you're going to see this come out in 2026. Thank you for attending. And if you love digital health, please consider going to HRX. It will be in its fourth year now. I know Dr. Slotwiner is involved in this. It's been a true joy being part of it over the last four years. That's my plug for it. It's at the Signia Hotel in Atlanta. It's just a nice place and a fantastic environment to hang out and talk about digital health. It's like a big living room full of joyful nerds. So please come. Thanks, Chad. I remember seeing you at your pitch presentation a couple of years ago. It was great. So our next speaker is David Slotwiner, who's going to be speaking about updates for ambulatory monitoring data integration and interoperability. Thank you. Let me just find my slides. Okay. And may I just remind you, if the audience has any questions, please use the audience response system. And, of course, at the end, you can come up to the mics. Okay. There's a wireless. I think I'll stick to the wired, no new technology for me. Thank you very much for giving me the opportunity to provide some updates on ambulatory cardiac monitoring, data interoperability, and integration.
So, today, what I'd like to cover is briefly some of the clinical workflows we encounter, the range of devices we're discussing, some of the present interoperability challenges and the opportunities that are coming online thanks to new standards, the potential clinical impact of better integration of data, and a roadmap for how I think we can get there if there's enough will. So, the device landscape, everyone in this room is very familiar with. Two adjectives, I think, describe it. It's very rich, anywhere from Holter to different medical-grade monitors, MCOTs, event monitors, and then the smartwatches, KardiaMobile. But it's also the Wild West and a bit overwhelming. And the data, of course, we know comes in mostly proprietary or locked PDF formats. The use cases really fall into these groups, I think: palpitations, syncope, cryptogenic stroke, AFib management, and then post-intervention management of arrhythmias, post-ablation or on antiarrhythmic drug therapy. The workflows are challenging. If you're lucky and you have an IT department that will work with you, you may have a one-off integration with one of the medical-grade monitoring companies you work with. Maybe you're able to order the monitor directly through there, as opposed to going to the vendor's portal. Then you'll start getting the data in PDFs, or you may be able to log on to the vendor and see electrograms in their proprietary format. And ultimately, what you're left with in your EHR is a flat PDF that can't be searched or used to trigger any other actions. The data, as we all know, is very rich but difficult to manage. We have arrhythmia episodes locked in PDFs, we have our ECGs locked in PDFs, and trends locked there. And this creates a lot of paperwork, a big burden, and overloads our staff and limits the utility of this information.
This is what ChatGPT gave me when I asked it to illustrate a frustrated electrophysiologist trying to get data into his electronic health record, on the left. And then data flowing seamlessly from our devices out in the wild as discrete data into our electronic health records. And I think now is really a very exciting time for us to have this discussion, thanks to a relatively new technology and interoperability data standard that many of you may be familiar with called FHIR, HL7 Fast Healthcare Interoperability Resources. And I'm not going to make this a technical talk, I think the audience is mixed, but I do want to just explain some of the building blocks that we need to consider if we want this data to be granular, discrete, and manipulable. And there are really just two simple elements we need for interoperability. We need a common dictionary, so we all agree on the definitions of what we're talking about, and we need grammar, a structure in which to put those words. In healthcare, our definitions are managed by standards development organizations such as LOINC, or SNOMED CT, one of our largest nomenclatures. And for CIEDs, we have created a very extensive nomenclature that's managed by IEEE. But the grammar is a different story. Most of what healthcare has been using is HL7, one of the earlier versions, 2.5 or 3.0. But this is really state of the art from 20 to 30 years ago. FHIR really now brings us finally into the internet and web age. And for those of us who are not technically minded, just to explain why: FHIR is based on application programming interfaces, or APIs, which is how the internet and all computers now communicate. And we can consider these APIs sort of connections between servers and apps. For example, on your smartphone, your weather app uses an API to get the most recent weather forecast. You don't download a new app every time you want the forecast. Your bank, et cetera, everything is through APIs.
And that's where FHIR's strength is. And those little pieces of data, like that weather forecast, are each called a resource. So that's Fast Healthcare Interoperability Resources. There's no reason why this technology can't be used in healthcare. It just hasn't been until recently. But we could, for example, get the battery voltage of a pacemaker using a FHIR API, rather than downloading the entire interrogation with 2,000-plus data elements into a PDF. And we could do the same for wearable devices. We could agree on a definition for an AFib episode and create a FHIR resource to describe it. And that would be how we would make this data truly interoperable. And I'm excited to share with you a preview of something that's going into testing next month, and I hope will be available to the world by next year at this time. It's not a video, I'm sorry, but it's coming to an EHR near you, and it takes me over to our implantable devices, our CIED world. And I know this is about wearables, but I think it's relevant. I tried to figure out how to use the audience response thing, but I couldn't figure it out. But if you could ask your device clinic what single problem they would like you to solve with better interoperability solutions, what do you think it would be? I hope it's what I have on the next slide. If not, hopefully you'll agree. We asked this question in a manuscript we published a year ago, where we went through the stages of the life cycle of a CIED, a patient with a CIED, and we described each stage where data is communicated into or out of the device and managed. And we came up with about 19 different steps. I won't bore you with all of them. But the one we thought would really be important to start with is remote monitor connectivity status. This is something that really plagues, I think, all groups that monitor these devices. We have a few disconnected monitors that swim in a sea of connected monitors.
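To make the battery-voltage example concrete, here is a minimal sketch of what such a discrete FHIR R4 Observation could look like, hand-built with a text-only code. A real implementation would carry the agreed IEEE/LOINC coding and be retrieved from a FHIR server's REST API; the patient reference and values here are hypothetical:

```python
# A single discrete data element as a FHIR R4 Observation, instead of one
# value buried in a 2,000-element interrogation PDF. Text-only code and
# example values are hypothetical stand-ins for the agreed coded nomenclature.

battery_obs = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Implantable device battery voltage"},  # would be coded in practice
    "subject": {"reference": "Patient/example"},             # hypothetical id
    "effectiveDateTime": "2025-06-01T08:00:00Z",
    "valueQuantity": {"value": 2.79, "unit": "V",
                      "system": "http://unitsofmeasure.org", "code": "V"},
}

def battery_voltage(observation):
    """Pull the single discrete value and unit out of the resource."""
    assert observation["resourceType"] == "Observation"
    return (observation["valueQuantity"]["value"],
            observation["valueQuantity"]["unit"])
```

The point is the shape of the data, not this particular value: once the element is discrete and coded, it can be queried, trended, and used to trigger actions, and the same pattern applies to a connectivity-status resource for those disconnected monitors.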
And identifying those across all our different vendors has really been a challenge. So working with our CIED manufacturers over the past 15 years, we have actually created a single nomenclature. We've got all the implantable device manufacturers to agree on a single nomenclature for, I think, about 1,500 to 2,000 data elements, so that battery voltage from Medtronic means the same thing as from Boston Scientific, et cetera. We've published this in the IEEE nomenclature, and now what we're doing is putting it into a FHIR standard so that those data elements can become interoperable. And the use case we thought would be most helpful to start with is remote transceiver connectivity status. So we brought together the international heart rhythm societies through the World Forum, we brought the CIED manufacturers, we brought all the middleware vendors, and we worked with Cardex, which I'll explain in a moment, and MITRE, to develop a single FHIR resource. And so starting next month, this will go into clinical trial. We will be able to have the connectivity status of our CIEDs, whether we're looking at Latitude for Boston Scientific, or whether we're looking at a middleware vendor, Paceart, Merge, whatever you use; you'll see that same data live there. Or in your electronic health record, if you use Epic or whatever you use, you'll be able to open that chart and see whether the CIED is communicating, whether it's online, when it last communicated, and if it stopped communicating, why and where, so that you can then act on it. So it'll show up in your EHR, and you'll also be able to pull up a list of patients if you choose. You can pull up all your patients for your clinic who are not communicating, regardless of vendor, and then you can get to work. And so we're really excited about this. I think it's going to make a big difference for patients and efficiency, and I think it's a great example of what true interoperability can yield. How did we do this?
Simple collaboration with medical device manufacturers, the middleware vendors, EHRs, and clinical societies, and what's called a FHIR accelerator. FHIR accelerators, and there are so many good puns around FHIR, are organizations generally sponsored by HL7 that bring together the proper groups to develop FHIR resources. They consist of clinicians to develop the use cases, the vendors in that area that support the particular technology, electronic health records, and regulators as well. So Codex is the FHIR accelerator we used, and we have a subgroup for cardiology called Cardex. We are the second use case; the first was for hypertension. So why is this important? Well, with true interoperability, you can automatically ingest and display AF episodes, start and end times, duration, and burden from remote monitoring. You can pull rhythm data from multiple vendors into one longitudinal view. And you can set triggers for alerts on AF burden, you can create flow sheets, and clearly it will reduce staff time and make everyone more efficient, but I think most of all it's going to improve the quality of care and what we can do with the data. And I won't go through this slide in detail, but there is federal policy really pushing for API-based standards like FHIR, so we have a lot of wind in our sails, but it still takes hard work. The next step for wearable devices is really going to be up to us, the people in this room, and the people listed here. We need to engage with the vendors in this space. This space is very different from implantables. It's a much broader group that is not necessarily used to working with us. We need to engage with them through the Heart Rhythm Society, and HRX I think is a really important way we can do this. And then once we engage with them, we bring it to Codex, the FHIR accelerator, and work from there. It's by no means a foregone conclusion that this will happen.
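One payoff mentioned above, alert triggers on AF burden, reduces to a small calculation once episodes arrive as structured start/end times rather than PDF pages. A sketch with an illustrative alert threshold and window:

```python
# AF burden from discrete episode data: the fraction of a monitoring window
# spent in AF, clipped to the window, plus a simple threshold alert.
# The 5% threshold is illustrative, not a clinical recommendation.

from datetime import datetime

def af_burden(episodes, window_start, window_end):
    """Fraction of the window spent in AF; episodes are (start, end) pairs."""
    window_s = (window_end - window_start).total_seconds()
    in_af_s = sum(
        max(0.0, (min(end, window_end) - max(start, window_start)).total_seconds())
        for start, end in episodes)
    return in_af_s / window_s

def burden_alert(episodes, window_start, window_end, threshold=0.05):
    """Fire when the AF burden over the window meets the threshold."""
    return af_burden(episodes, window_start, window_end) >= threshold
```

With episodes locked in flat PDFs, none of this is possible; with discrete, coded data it is a filter and a sum, which is exactly the kind of higher-level functionality vendors could compete on.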
I think it's very much open-ended, and it really depends on how much clinicians push for it, and of course how willing the vendors are to do this. They have to recognize what's in it for them, and what's in it for them is that they can then focus on higher-level functionality; they don't have to build one-off integrations with each healthcare system. So that's my pitch for interoperability, and I look forward to questions later. Thank you. Thanks very much, David. And our next speaker is Greg Marcus from UC San Francisco, on best practices for wearable technology. Thank you. I really appreciate the invitation. It's a pleasure to speak to you all. So we're all bombarded by these incredible gadgets that I think have tremendous promise to help our patients, to help the lay public, to help us engage and understand what's going on. But a lot of the challenge has to do with culling so much of the rich data that one might receive, to separate the wheat from the chaff, and to work on data interpretation and management. So around 2014, we launched a study called the Health eHeart Study that enrolled more than 100,000 participants, and around the same time, Apple came out with the Apple Watch, not yet with an ECG feature, that enabled users to look at their heart rate, and they could put it in workout mode and look at their heart rate more frequently. And we realized, huh, I wonder if some irregularity in those rates might be useful to help infer the presence of atrial fibrillation. One of these co-authors, Brandon Ballinger, was a data scientist. He had worked for Google, and he's what we call in the Bay Area post-economic, so he had time on his hands and just decided to hang out with us for a year, to learn from us and us to learn from him. And he said, you know, I could apply this. There's this thing that people are using more and more called machine learning, and we could maybe use that to see if we might determine the presence of atrial fibrillation from an Apple Watch.
And so we did that. We trained a machine learning algorithm in a very structured setting in patients undergoing cardioversion, so pre and post, and then we validated it in an ambulatory population and demonstrated that, yes, indeed, a smartwatch could detect atrial fibrillation. And apparently, the companies agreed that was a good idea. So then Apple developed its own algorithm and famously published that in the New England Journal. Then Fitbit followed suit. We worked with Samsung to do the same. Now, this has been a very interesting phenomenon in that these companies kind of got ahead of us, in a sense, as investigators and clinicians, because there is really no consensus yet that one should screen for atrial fibrillation. So clearly, one can find atrial fibrillation, and there is an unmet need here because, as hopefully everyone in the room knows, AFib can be asymptomatic. It's obviously an important risk factor for stroke. The hope is we could avoid a stroke in someone whose atrial fibrillation is so identified. The problem is the context of bleeding and cost and false positives, and so thus far, the consensus of experts has been that there's insufficient evidence to recommend general screening in clinical care. So I was the reviewer for one of those studies, and I was not pleased with the way the authors decided to interpret some of the data. I got a phone call from the then-associate editor of Circulation, now the editor-in-chief of Heart Rhythm, Sami Viskin, who said, Greg, I have good news and I have bad news. The bad news is we're going to accept the paper. The good news is we want you to write a commentary. So I wrote this commentary on the idea of focusing on the positive predictive value, which I will briefly review here. So this is something that we kind of gloss over very quickly. This is something we all learn about in medical school. Hopefully it's emphasized again and again.
But I think this is a great application of this important concept, and that is the dependence of the positive predictive value on the prevalence of the disease. So sensitivity, as you probably all remember, is the proportion of those with the disease who test positive, and specificity the proportion of those without the disease who test negative. But in clinical practice, when we're seeing a patient, we don't know the truth, right? That's the whole reason we get a test. So the clinically relevant question is: given a positive test, what's the likelihood that the disease is there? That is the positive predictive value, which again is highly, highly sensitive to the likelihood that the disease is present, to your pretest probability or to the prevalence. So to give a specific example here, a mathematical example. Here's a test that performs great, 95% sensitive and specific. Most of our medical tests do not achieve that level of accuracy. But if your prevalence of atrial fibrillation is 1%, which is not unrealistic if you consider especially a young, healthy population wearing smartwatches, given those test characteristics, the positive predictive value will be 16%, meaning 84% of all positives will be false positives. On the other hand, if in your study you decide, oh, let's test this out in a group we already know has atrial fibrillation, with the same test characteristics your positive predictive value can be reported as high as 99%. So please be aware of this really important issue, especially when we extrapolate studies to the general population. Now, clearly those studies focused on the PPG, which the first speaker did a great job describing and giving us some new information about. But accuracy is almost certainly going to only be enhanced with the addition of an ECG. But even the ECG is not perfect, and especially, at least currently, the automated algorithms are imperfect. So this was a case where I asked one of my fantastic research coordinators to record their own ECG and to take some deep breaths.
So healthy individuals are going to have the greatest heart rate variability, especially when they're breathing deeply. They're also going to probably have the smallest P waves, and in this case the automated algorithm falsely called atrial fibrillation. Now, you may say, well, yeah, give me an implantable loop recorder. That's the gold standard. Even those can be wrong if you rely on the automated algorithm. The top tracing here shows intermittent T-wave over-sensing, and the bottom tracing shows PACs that faked out the algorithm. So in terms of actual clinical practice, how should we think about these things? Automated algorithms are certainly far from perfect, but they are likely quite accurate if they indicate a normal rhythm. They're much less likely to be faked out in that circumstance. And of course, there's the value that the strips can be saved and sent to healthcare professionals who can then over-read the strip. So what do the guidelines say? I was privileged to be part of this writing group. 2023 seems kind of ancient now, but really it came out at the end of 2023, so it's a little bit more 2024. And we had very thoughtful, careful, vigorous discussions about this topic and thought about how we can provide some clear guidance while acknowledging the need for a lot more research. So the way we ultimately worded it was: among individuals without a known history of atrial fibrillation, it is recommended that an initial AF diagnosis be made by a clinician using visual interpretation of the electrocardiographic signals, regardless of the type of rhythm or monitoring device. So in summary, at least for now, and with the caveats of the first speaker's really interesting data, events by PPG don't cut it for making an AFib diagnosis. Of course, they might enhance your suspicion, but there is this concern about false positives.
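The positive-predictive-value arithmetic described a few moments earlier can be made concrete in a few lines, reproducing the numbers from the talk:

```python
# Positive predictive value from sensitivity, specificity, and prevalence:
# PPV = TP / (TP + FP), with TP = sens * prev and FP = (1 - spec) * (1 - prev).

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# ppv(0.95, 0.95, 0.01) ~= 0.16   -> 84% of positives are false positives
# ppv(0.95, 0.95, 0.50) ~= 0.95   -> same test, enriched population
```

The same 95%-sensitive, 95%-specific test yields a 16% PPV at 1% prevalence and a 95% PPV at 50% prevalence, which is exactly why results from enriched study populations cannot be extrapolated to a young, healthy smartwatch-wearing public.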
Automated algorithms applied to an ECG, a consumer-based ECG, also don't cut it; we should not make an atrial fibrillation diagnosis based on what an automated algorithm says, at least for now. But an ECG is an ECG is an ECG. So if an ECG from an AliveCor KardiaMobile or an Apple Watch is overread by a qualified professional who can then make their own judgment, yep, this is sufficiently free of noise or not, it is kosher to make a diagnosis of atrial fibrillation and to act on that. Meaning you can make a decision to start Eliquis. You could even make a decision to pursue an ablation or to start flecainide based on a consumer-obtained ECG. Again, it has to be overread by a human at this point. So how might these devices otherwise be useful clinically right now, without waiting for more research? Clearly, if the prevalence is high, there's a high pretest probability. Therefore, for people who already have a diagnosis of atrial fibrillation, I think these devices are especially useful. Also in those circumstances, it's not like you're going to change their life based on, oh, here's an episode of atrial fibrillation, where suddenly they're confronted with, oh, should this person be started on an anticoagulant or not? So one specific example is a patient monitoring for pill-in-the-pocket therapy. Sometimes we're a little too binary in talking about either symptomatic AFib or asymptomatic AFib. But in reality, some people with intermittent AFib just feel a little off sometimes. They're not feeling well, and it's not clear: is this due to the atrial fibrillation, or are they anxious about something, or do they have an upset stomach? They can check and see if taking a PRN flecainide might help. Similarly, monitoring for n-of-1 trigger testing. This has been a great interest of mine and of many patients: to understand what's causing my atrial fibrillation. What did I do? What's the behavior? What's the exposure that might be important?
They can use these devices to help test that out. And then when it comes more to the PPG, I personally find it useful when, for example, I have a patient whose CHA2DS2-VASc score is 1 or 2, we've done an ablation, it looks like they're AFib-free, and I do a monitor to make sure there's no asymptomatic AFib. We have a careful discussion where the patient's fully informed and the patient is motivated to get off anticoagulants. I usually wait a year and I'll say, okay, yep, fine to stop the blood thinner, but that is a situation where I will suggest they might purchase one of these consumer wearable devices to keep an eye on things in case the atrial fibrillation comes back. Now looking ahead and thinking about, okay, how can we do better in the general population? Is there a way to boost the pretest probability among those who don't yet have the disease? So there are these prediction algorithms. This is an example from Renate Schnabel and Emelia Benjamin demonstrating that with some fairly complex data, data that is unfortunately not immediately available to the patient, such as from an ECG or from the echo, you can predict atrial fibrillation. So Peter Kistler, who I'm pleased to see is in the audience, published this great paper where they essentially used data that the patient would generally know off the top of their head. They called it the HARMS2-AF risk score and demonstrated that you actually can predict AFib based on that. So one could imagine an algorithm or an app where someone plugs these data in and then they're told, you know, you're so low risk for atrial fibrillation, it's not worth monitoring you, the chance of a false positive is too high; versus, oh yeah, you're a good candidate for, quote, screening for atrial fibrillation. Other predictors to think about: we previously showed that the PAC count, and this is from the general population, these were people randomly assigned to Holter monitoring, not people with symptoms.
The PAC count alone was the single most predictive factor for future atrial fibrillation, more potent than even the Framingham risk score. Another thing that could be done, and again this kind of harkens back to the first talk, is applying machine learning to the PPG waveform itself. This was a pilot study where we demonstrated that using the actual waveform and applying a machine learning algorithm was superior to simply relying on the irregularity of the pulse. And in fact, machine learning is already being used in our clinical practice now. We think of this as experimental, but iRhythm, with its Zio patches, is using machine learning now, and it's been demonstrated in this Nature Medicine paper that the machine learning algorithm they used was superior to the average cardiologist, or to a single cardiologist. Now a common question there is, what's your reference standard in that case? And the reference standard in this case was the consensus of a group of cardiologists. And then finally, I just want to highlight what I think is a really interesting, paradigm-shifting disruption that we are now all a part of. This I was asked to write in response to the Apple Watch study. So on the left is the normal flow of things. We usually do research, that undergoes peer review, it's then disseminated, and that then informs consensus statements and guidelines. That makes its way to clinicians, and the clinicians then use all of that process to inform patients and the lay public. What's happening now is, starting from the top right, private industry just kind of went ahead and started marketing these devices that are essentially screening for atrial fibrillation. And they have developed this relationship directly with the consumer, bypassing the clinician and the usual scientific process.
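As a concrete reference for the "irregularity of the pulse" baseline that the machine-learning approach above was compared against, rule-based AF detectors typically threshold simple RR-interval statistics. Here is a minimal sketch: the normalized-RMSSD metric is a standard heart-rate-variability measure, but the 0.1 cutoff and the interval values are purely illustrative, not from any validated algorithm:

```python
import math

def rr_irregularity(rr_intervals):
    """Normalized RMSSD of successive RR intervals (in seconds)."""
    diffs = [b - a for a, b in zip(rr_intervals, rr_intervals[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    mean_rr = sum(rr_intervals) / len(rr_intervals)
    return rmssd / mean_rr  # normalize so the metric is rate-independent

regular = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80]    # sinus-like intervals
irregular = [0.62, 0.95, 0.70, 1.10, 0.55, 0.88]  # AF-like intervals
print(rr_irregularity(regular) < 0.1)    # low irregularity -> call sinus
print(rr_irregularity(irregular) > 0.1)  # high irregularity -> flag possible AF
```

A metric like this also explains the earlier false positive: a healthy person breathing deeply has genuinely high RR variability, which is exactly why waveform-based or P-wave-aware methods are attractive over pulse irregularity alone.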
So we, as a scientific and clinical community, need to work especially hard, starting from the bottom up, to perform the research to inform the guidelines and try to catch up and provide our patients and the public with some useful information as to how to digest and utilize that information. So in conclusion, wearable sensors offer incredible opportunities to improve the health of our patients and the lay public. There's still not sufficient evidence to recommend broad screening for AFib, even if this is already done by private companies, and be wary of population prevalence. Think about pretest probability when applying test characteristics. Even when remotely obtained by a consumer device, ECG strips currently still require an expert interpreter to overread. But those can be used to make a diagnosis of atrial fibrillation. PPG-based heart rate and irregular heart rate notifications are likely currently most useful for those with an existing AFib diagnosis. And investigator-initiated research is needed to optimize the use of these devices. Thanks very much. Thank you for that great presentation, and thank you to all our speakers. I very much enjoyed your point there, Greg, about the importance of investigator-initiated research to really build this from the ground up, because otherwise the private companies are just creating the device, but we need to be able to validate them and evaluate them. So if you have any questions, please feel free to come up or to enter them into the app. I have a question just because, I mean, it's quite a bit of a different perspective. So Henry and Greg, you talked about the benefits of the PPG and picking up AFib, and Greg, you talked about the importance of the ECG. So how do we reconcile that? So are we going to get to a place where the PPG will be enough, like, or maybe repeated PPG recordings? 
Can we eventually treat based on multiple PPG recordings that have a high likelihood of AFib, or do we always need an ECG confirmation, knowing that we may not get that if someone is paroxysmal? How do we reconcile those two differences? So I'd be curious to hear what you think. I'm curious if, in your research, you looked specifically at patients with, for example, frequent PACs as a tough kind of case. It does seem like this is a great application of machine learning, right? It might be hard for us, especially since we haven't been trained since medical school in precisely how to read them, but it's something a machine learning algorithm could be trained on. The challenge with, well, there are many challenges with machine learning, but one of them is it tends to be what they call data hungry. So you need lots and lots of samples, generally, and you need to have a really good reference standard, but it's certainly feasible. So yeah, I would be curious what you think. Yes. So in my research we focused on how we can implement PPG in clinical practice. As for now, we mostly use it in a screening setting, which is very different from the AF management setting, and as you showed, the sensitivity and specificity might be the same, but the predictive values are very different because there's a different pretest probability. And then about sorting out true AF from sinus rhythm with a lot of PACs: I think that's something for the companies to work out, and for the companies to get enough data to train their deep neural networks, which might be able to use the waveform to make a differentiation between sinus rhythm with PACs and AFib. And for the device I used, FibriCheck, they worked it out pretty well, and that's what we found in our validation studies. But that still doesn't convince us to implement it easily.
And I think part of this step towards implementation is trust; we need to trust the algorithms. And I think the first way to start trusting them is by looking at the traces ourselves and by learning to analyze the traces in the way we presented now, with the tachogram and the Poincaré plot. And if that would give us more trust in PPG, we can start using the algorithm results, but then we really need to sort out which ones are well-validated and which ones are not. Can I just pick up on that? This is, I think, a great example of where, although PPG itself may be, per the guidelines, by the way, great guidelines, Greg, Joglar et al., inferior to ECG, it's complementary, because it's not easy to pick up a P wave from an ECG, and it's usually done by RR regularity or irregularity. But of course, if you had a fairly fixed coupling interval of your PACs, you could imagine that the pulse waveform integral, some measure, could give you a consistency you wouldn't see in AF. And is that part of what they did, Henri, do you know? Being able to distinguish the PACs from AFib, do you think it's because they were using that waveform characteristic? So the algorithm that I used from FibriCheck, they do look at the waveform characteristic. But the information of the waveform gets lost in the tachogram, the Poincaré plot and the Lorenz plot, obviously, because those only look at the RR intervals. So that is something that we really need to get from the algorithm, that we cannot see, or at least I cannot see, from the PPG waveform. So yes. Great discussion. We'll jump to the question here. Hello, Pat Novke. I come from, previously, an ECG company, so your comment about ECG is true if it's a standardized ECG, but these patches are all over the chest and their morphologies are so different.
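The tachogram and Poincaré (Lorenz) plot discussed above are simple transforms of the RR-interval series, which is also why the waveform information is lost in them. A minimal sketch of how the two sets of points are derived (the interval values are illustrative):

```python
def tachogram_and_poincare(rr_intervals):
    """Return tachogram points (beat index, RR) and
    Poincare points (RR[n], RR[n+1]) from an RR-interval series."""
    tachogram = list(enumerate(rr_intervals))
    poincare = list(zip(rr_intervals, rr_intervals[1:]))
    return tachogram, poincare

rr = [0.62, 0.95, 0.70, 1.10, 0.55, 0.88]  # illustrative AF-like intervals (s)
tach, poin = tachogram_and_poincare(rr)
print(poin[:2])  # [(0.62, 0.95), (0.95, 0.7)]
```

In sinus rhythm the Poincaré points cluster tightly along the identity line; in AF they scatter widely, which is what makes the plot readable at a glance once you learn to look at it.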
I mean, I've seen some with pointy P waves that almost look like QRSs, because they're over the atria as much as the ventricles. So there are safety standards for standardized ECGs, but it's really a wild west for wearables and patches. So it would be great if we could get a whole bunch of public data on these, so you could study them and say, yeah, if it's this orientation, it should look like that. The companies don't always release their ECGs, but that'd be helpful, I think. And to that comment, I think that's what we saw in our studies as well: while we accept a diagnosis from a single-lead EKG if we look at the tracing ourselves, we don't accept a diagnosis from PPG; we're not there yet at all. But the sensitivity and specificity, or the accuracy, of physicians evaluating the single-lead EKG is not perfect either, and that is something that I think we need to keep in mind; it appeared to be very similar to what we can achieve with PPG. Maybe a question for Henri, and then maybe a comment from Greg. So my take from your PPG talk was that the only reliable approach at the moment is to use the finger on the camera lens rather than the back of the watch. Is that correct? I think both are technically possible to rely on. Both can achieve very good waveforms. It's just that the commercial PPG watches have not focused their dashboards on giving this data to the clinician, whereas the PPG smartphone apps really try to position themselves as a clinical device and try to present this data in a way that clinicians can use for AF management and to manage patients. But technically, it could be feasible with a watch as well.
The watch obviously has the advantage that it takes measurements without needing any action from the patient, so they can get many more measurements during the day. However, we should not overestimate that advantage. For a watch, we also need wear time, and then we also need the patient to stop moving. I think, in that way, the reason why I used a smartphone is just the sheer availability. Even here in this audience, if someone had atrial fibrillation and didn't feel good right now, is that person wearing a watch that can record PPG? Many people here are, but not everyone. Everyone, though, is carrying a smartphone. So it's much easier to start implementing PPG on a phone, because all patients already have the required hardware. Yeah. I suppose it's like a lot of us who are involved in clinical trials, and you've already seen the shift in the field that we look at burden, we don't look at time to recurrence. And then we always have the financial conundrum of wanting to put implantable loop recorders in everyone, but we can't. So more and more, I think a number of you on the panel, like myself, are tending to use smartwatches. But I suppose the message I had from you was that the PPG detection on smartwatches is nowhere near reliable enough. I mean, we also combine them, obviously, with getting patients to record ECGs. But I suppose I would be hopeful that the PPG detection algorithms on the watches, Greg, will catch up and become reliable. Potentially. I mean, I think you're pointing out an important distinction and potential limitation, which is that with your data, by definition, the patient is deliberately putting their finger on the camera, which is a different circumstance than passively, continuously monitoring something that's moving around. Presumably that noise will reduce sensitivity and shouldn't cause false positives, ideally. And again, it depends on the algorithm.
But I do think that's an important point to address, and to recognize that how clean the PPG signal is almost certainly matters, especially if we're going to start relying more heavily on it, and it's going to be more of a challenge with a watch. We're going to do one more question. These came in a couple of minutes ago, and they get at the notion, Greg, your point's great, that ideally research is PI-initiated, very nice, rigorous, but we are where we are. These questions speak to that. One is, could you comment on interoperability, given the varying pretest probabilities of people who are using devices? And this is open to the whole panel. And then specifically for David, how do we motivate vendors to adhere to FHIR, particularly when it comes to wearables? So the whole panel, feel free, jump in. What was the first question? Can you repeat the first question? Yeah, so number one, interoperability: how do we establish it, given that the pretest probability is going to be so variable? And number two is, how do we induce vendors, such as wearable makers, to ever come to the table, since patients actually own the devices? Right, right, sure. Well, I'll start with the first question, in terms of the quality of the data and interoperability. You have to separate those. You first have to come up with a definition of what you're going to accept, and then you put the interoperability component around it. So you're going to have to come up with a way to describe poor quality, or noise, or something that you can't make decisions based on. And you have to get industry consensus to do that, which is challenging. That leads me into the second question, which is, I think, how to get industry consensus. And that's a challenge. You don't need the whole industry, generally, just the market leaders. And this is true for standards development anywhere.
The light bulb was the same way, you know, GE was the last one to come to the table. And the history with CIEDs has been exactly like that. When we started in 2005, Medtronic wanted nothing to do with us. But now you see they sold Paceart, and they're leading the FHIR effort, because they've realized that managing data is not what they do, they're not good at it, and they're tired of developing one-off integrations with each health system. So the industry has to reach a maturity level, first of all, where they recognize that there isn't power in owning that part of managing the data, and that it's in their best interest to work with the community to develop those interoperability standards. Great. Well, with that, thank you so much. What a great session. We're two minutes over, but the discussion was so good, couldn't stop it. Thanks very much, everybody. Have a great rest of the meeting.
Video Summary
The video presents a series of discussions from a digital health conference focusing on advancements in atrial fibrillation (AFib) detection and management technologies. Moderated by Chris Chung and including an assortment of medical experts, the session delved into various innovative health solutions.<br /><br />Dr. Henri Gruwez discusses the role of photoplethysmography (PPG) in detecting AFib and highlights the differences between smartwatch and smartphone PPG devices. He advocates for the healthcare community to strengthen its competency in interpreting PPG tracings and stresses the importance of validating PPG algorithms for improved AFib management accuracy.<br /><br />Dr. Chad Bonhomme elaborates on aptamer-based biosensors and emphasizes their emerging role in real-time biomolecule monitoring. He points to the advancements that allow for longer durability of these sensors, making them viable for medical applications such as monitoring NT-proBNP levels in heart failure patients.<br /><br />Further discussions led by Dr. David Slotwiner cover challenges and propositions concerning the integration of ambulatory monitoring data with healthcare systems. He underscores the potential of Fast Healthcare Interoperability Resources (FHIR) to improve data handling across various devices, highlighting an initiative with cardiac implantable electronic devices (CIEDs) that standardizes remote monitor connectivity status reporting.<br /><br />Finally, Dr. 
Greg Marcus discusses the benefits and current skepticism surrounding wearable technologies for AFib diagnosis, particularly focusing on the need for physician-overread of automated ECG analyses to confirm diagnoses and potential future capabilities of PPG and ECG in consumer wearables.<br /><br />This session collectively emphasizes that while there is tremendous promise in digital health for cardiac monitoring, current technologies require further validation and integration with clinical standards to enhance reliability and accuracy of diagnosis and treatment.
Keywords
digital health
atrial fibrillation
AFib detection
photoplethysmography
PPG devices
aptamer-based biosensors
Fast Healthcare Interoperability Resources
wearable technologies
ECG analyses
cardiac monitoring
Heart Rhythm Society