Perfect is the Enemy of Good Enough: Improving CIED Quality
Video Transcription
We're here for a bunch of reasons. Some of it is, if you like, information transfer, but quite frankly, you can get information almost anywhere these days. So some of it is the experience of understanding what's going on. We've done our best, I think, to try to convey things in a manner that you can digest. But at the same time, the breaks, the question and answer, those are key things because we do wanna know what your needs and questions and concerns are. So talk to us. For instance, my email address is on a bunch of my slides. Just email me if you have a question. I give away my slides all the time. So please try to think of this as a chance for an exchange, not just a chance to listen. If you think the weather sucks, you can blame me. It's actually good for meeting attendance when the weather sucks. But yesterday, when I got here yesterday, because from Vancouver, you can't get an early enough morning flight with the time change. And when I landed, it was about 50 degrees, and by the time I went to bed, it was snowing. But you might not know that actually, if you're from Vancouver, it doesn't snow in Vancouver because it's coastal. So you can't totally blame me. Okay, so I thought we might have audience response. So we're gonna have to go with a very practical audience response. And so I was gonna test the audience response with this five-point question. And I'll answer this a little bit later in the talk. But now we're gonna have to use the old-fashioned analog version, which is raise your hand. Okay, so October 15th, 2007 was a very important day for me. And I'm gonna get you to try to guess why that was the case. So at the top right there is me 33 years ago getting married to the best thing in my life. And did I forget her birthday on that day? Did I win an HRS award for my research accomplishments? Was my brother in a car accident, and he's never been the same since? Did I see the Red Hot Chili Peppers concert? Trust me, you wouldn't forget. 
Or was it something else? So let's vote, hands. And by the way, if you don't vote, I have an eidetic memory, I'll remember, and I'll pick you out. Okay, did I forget my wife's birthday? I noticed mostly men raise their hands. Okay, did I win a research award? Okay, so somebody's trying to flatter me. Was my brother in a car accident? No. Yes, that did happen, but not on that day. Did I see the Red Hot Chili Peppers? Yes, that did happen, but not on that day. Okay, and so the answer will be other, and I'll answer that a bit later, because it has to do with the topic of conversation. Okay, so now we have proven we don't have an audience response system, but that you're all capable of raising your hand. So we did a survey in advance of this meeting, and we asked, and got many responses, though not all of you, about who you are. And part of this was to try to gauge what you want to learn, and what you need to know, and what your background is. So I think it's fair to say we have a diverse audience in the sense that we have people who are from, if you like, the business end, also from the ablation end, and also from the device end, and so on. So we have varying needs. So we have people who are very expert in some areas, and non-experts in others. So there's a bit of patronizing basic information in my talks to try to sort of build it from the bottom, with a bit of refinement at the other end. So it'll be beneath a few people, and it'll be above a few people, but I'm trying to kind of make a bit of a journey. So we'll start with one slide of the basics, okay? So first of all, we used to talk about pacemakers, and defibrillators, and so on, and now the buzz term, or typical language, is CIED, or cardiac implantable electronic devices. These are devices that are basically constituted of a can, or a generator, and leads. And so you see that in the diagram here, and what you see is the generator is composed of a header, which is where you plug the leads in, and the can. 
You can see a can that's been opened up here, this image in the middle, and that metal device is about 80% battery, and about 20% circuitry. And then the leads are plugged into that, and then that creates a circuit, and if you remember anything about your basic high school physics, a circuit needs to involve current going from one place to the other. So usually that current is going from one place to the other within the heart, within the tip of that wire. So if you see those three wires that are in the heart, for people in the CIED business, this is a biventricular system. Current comes out of the tip of those leads, and goes to another part, another thing called an electrode in those leads. So wires and batteries are the basic constituents of this. And so when we talk about generator failure, or generator replacement, or lead failure, or lead performance, or quality, we're measuring how reliable these devices are in serving the needs of our patients. So I'll present a case, and some of this is a little bit of a walk down memory lane, or the history of how we got here. So I remember this guy, because I consented him. I didn't do his procedure, but I consented him. And he was a nice old guy. He came from another city, and he had ischemic cardiomyopathy, poor heart function, was a typical patient who got a primary prevention ICD, had a little bit of heart failure, and he got a Medtronic Marquis device, and this device is an ICD generator. And there was an advisory in 2005, and that advisory said there's a one in 10,000 risk that the generator will fail. Now does that strike you as a high risk or a low risk? So I guess if you have the device, it might feel like high risk. If you are a math person, that's very low risk. But that perception is what I'm getting at with this situation. So with that knowledge, clinicians and patients got together and said, how does this affect me? Can I rely on my device? Will it be perfect? What do I do?
And so he then got a generator replacement. Somebody said, give me the latest and greatest, because it must be more reliable than my one in 10,000 failure device. And in so doing, he developed an infection, and then when you get infected, the only thing you can do is remove the whole system. So he comes for a lead extraction or a system removal. So he had to remove the whole system, he's infected. We go through the process, he has a procedure, and during the procedure, a complication happens, so he has a perforation, that lead pops a little hole in the heart. And when that happens, actually at the time he's okay, but about 10 minutes later, he develops low blood pressure. You have a quick look with an echocardiogram, you drain the effusion, he has low blood pressure for about six minutes while all this stuff is going on. And then because he had low blood pressure, they say instead of going to the usual recovery, he goes to the ICU, and in the ICU, he then develops pneumonia, then he has a stroke, then he develops renal failure, and he dies. So 10 days later, this guy who came in with a one in 10,000 risk, it's death by doctor. Okay, so we have created this problem by choosing to replace a device because it has a one in 10,000 failure rate. Okay, and does that seem logical to you? Okay, so I'll tell you how logical it was. So should these devices be perfect? Is one in 10,000 unacceptable? It was felt to be unacceptable at the time when we were notified of this. If these devices are not 100% reliable, what do we do? What's the threshold by which we take action? And the other thing is, as you know, because you probably have a fair bit of exposure to physicians, we're in the business of fixing things. We like to do stuff. EPs are particularly keen on doing stuff. So in that instance, then, what do we do? Well, I thought, wow, this is quite something. So I asked the question, well, what's everyone else doing? 
And back when this happened in 2006, in fact, what everyone else was doing was completely unknown. So there was no consensus process about how to respond in these kind of situations. Usually the information was literally sort of laid out there and then people did as they saw fit as they discussed it amongst their peers and also with patients. So we looked at this across Canada. So I, you know, Canadians are nice, right? So we're nice, so what we do is we work together. So what we did is we actually found out very quickly what was going on across Canada in terms of the centers. There's about 25 ICD centers at the time. And the question was, what are you doing with this information if you have these kind of patients? And here's the answer. So on the left and the y-axis here is the proportion of patients or the percent of patients where you put in a new generator. You took it out for the one in 10,000 risk, okay? And then on the bottom are all the different sites. So you can see here, some sites didn't take any out. They just said, that risk is trivial, doesn't matter to us. Some sites took half of them out because they said that risk is high. We better take it out and put a new one in. And the average was 18%. So that's what you call, it's all over the map. Nobody's quite sure what to do. And the communication about whether we should generally approach this one way or the other didn't exist at that time. So where have we come from there? So the history is that was an era, those five years or three to five years in that era where we had, what we realized was, there weren't that many models of devices. And when there was an issue with the quality of an individual model of device or lead, it had a huge effect on our patient population. And so we realized then that things like waiting to hear about problems, surveillance, the manufacturing and surveillance processes for the production were not enough. You couldn't just watch. You actually had to raise your game. 
So what happened after that, we knew we had a problem is we had about five years where the major vendors that make devices and leads went through a whole, they really upped their game when it came to quality. So I'll give you an example from an insight perspective is if you have a production line where you're making something, right? You have components that come from all over the place, you put them together in a plant, you have oversight, you test them to make sure they're working and so on. But if you find minor flaws that don't seem to affect performance, what do you do with them? Do you throw them all out? Do you put them back in the line and fix them? Or do you just turn a blind eye because no one's reported that this is a problem? So that kind of process has then now led to a zero tolerance kind of manufacturing process, a redundant testing of all of the components, a high degree of, if you like, invading the manufacturing processes of the components that you get to put them together. And all those things have raised the bar on the quality of the products. And so that danger sign of my guy who died, for example, with that one in 10,000 risk, triggered changes in manufacturing quality processes that have made a big difference to the quality of what we're dealing with. So despite the fact we think that there's a big issue, in fact, I'll show you that things are better than they used to be. So then what happened is that then created caution of what was going on. So people said, hang on a second here. You know, we can't just keep making products and thinking that they're probably pretty good and go out there and then find out there's a problem afterwards. So in fact, things like the basic leads and generators, people were much less receptive to the idea of making changes. So we went through five, seven years where nobody really wanted to do something new because there was too much risk. Couldn't be reliable. 
So I remember being part of a group of people trying to give some advice about developing a new ICD lead because one of the key things that's happened with ICD leads is when they got smaller, we weren't sure they were quite as reliable. And when that process played out, what happened was they said, listen, if the FDA says, listen, if you have seven failures, your $150 million project is doomed. And you kind of think, think of the fiscal reality of the risk of a small number of failures killing an entire project. It's like five cases of liver toxicity for some new wonder drug can kill the drug. So there was a lot of caution and not much development. We said, listen, tried and true is okay. Let's go with what we know because those are approved devices or leads with their performance track records that we can rely on. But the other thing it did is in a way it triggered attention on developing innovative products. So rather than a slightly better incremental lead or generator, they said, let's go leadless. Let's go subcutaneous. Let's go these routes that are the modern suite of where innovation's happening in our field. So in a way it was good and bad. It slowed the development of what we had and it triggered the development of what we, I think in retrospect now really wanted and needed to keep our field moving forward. And the other thing that's happened that is in fact arguably the biggest evolution is the software to support this whole process, to evaluate, monitor, report, and approve things has dramatically transformed how we care for patients and are aware of the quality of these devices. So remote monitoring has made a huge difference. So here's my perspective on this. Watching is not enough. You need active systems in place to ensure that quality is happening at every level, right from the build, from the materials, to the components, to the assembly, to the oversight, to the remote monitoring, to the reporting and the evaluation. 
And what the engineers will say is that there is random component failure. It happens, okay? And that you can never get that number quite to zero. Even the NASA people will tell you that, which is arguably the highest bar there is. On the other hand, if you watch carefully, you can see patterns forming in terms of clusters of failures that should then trigger the question, hang on, this is the second one we've seen from that batch, or this is the seventh one in that model, or we've crossed some kind of threshold of more than one in 1,000, or one in 10,000, or something to trigger trying to connect the dots. And AI and machine learning, of course, will help this because it'll see those patterns coming rather than having to have engineers or product performance reviews identify that this is the third one this month kind of process. And then the other part of it is, which is very much a technical thing, which is root cause analysis, which is this whole question of why does this happen? What is it about this? And then there's a whole, if you like, forensic process around trying to figure this out that I am definitely not an expert in. But it's the due diligence around trying to understand why because, of course, if there's a systematic problem, what you want to do is correct that so that future systems are more reliable. And the other thing is, just like law, in medicine, precedent makes a big difference, right? So what I'll show you next is that when you have something and it sets a benchmark for how good it can be, anything new will be compared to that, okay? And here's a summary of that process, and then I'll show you a little bit of what October 15th, 2007 was about. So first is advisories still happen, and that's because there's concern about the safety of patients and the impact of device performances on patient outcomes. So all the manufacturers have what are called product performance reports. Those are largely based on two sources.
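[Editor's illustration] The "connect the dots" surveillance logic the speaker describes can be sketched in a few lines. This is a minimal, hypothetical example only: the batch IDs, the three-failure cluster trigger, and the one-in-1,000 rate threshold are illustrative assumptions echoing the numbers mentioned in the talk, not any manufacturer's actual system.

```python
from collections import Counter

# Hypothetical surveillance thresholds, echoing the "one in 1,000"
# and "this is the third one" examples from the talk.
RATE_THRESHOLD = 1 / 1000   # flag a batch whose failure rate exceeds 0.1%
CLUSTER_THRESHOLD = 3       # flag a batch after 3 returned units

def flag_clusters(failure_reports, units_shipped):
    """Flag batches whose raw failure count or observed failure rate
    crosses a surveillance threshold.

    failure_reports: list of batch IDs, one entry per returned unit.
    units_shipped: dict mapping batch ID -> units in the field.
    Returns a list of (batch, failure_count, failure_rate) tuples.
    """
    flagged = []
    for batch, failures in Counter(failure_reports).items():
        rate = failures / units_shipped[batch]
        if failures >= CLUSTER_THRESHOLD or rate > RATE_THRESHOLD:
            flagged.append((batch, failures, rate))
    return flagged

# Three returns from batch A101 cross the cluster threshold;
# a single return from the much larger batch B202 does not.
reports = ["A101", "A101", "A101", "B202"]
shipped = {"A101": 10_000, "B202": 50_000}
print(flag_clusters(reports, shipped))
```

In practice, as the speaker notes, this kind of pattern detection increasingly runs over remote-monitoring data at scale rather than hand-tallied return counts.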
One source is if you take a system out or you report a problem to them, they then put that in the batch to figure out what's going on. They track serial numbers and products and so on, and that's a publicly available system that you can download. And the second thing they do is they usually have cohort studies that are part of the regulatory approval for the device. So if you have a new lead, you have to take 1,000 of them and follow them for five years and figure out how they perform prospectively and then report that to the FDA. So those two systems are observation things, and the huge new thing they have access to is remote monitoring, because that remote monitoring takes tens, hundreds of thousands of patients who have different systems in place, and although the information is not comprehensive, it's sufficient to look at some performance metrics and then tells you something about, for instance, how a system is doing and what its quality is. So they're looking for those clusters. They're going through root cause analysis. I'll declare my conflict of interest and say I'm on a Medtronic independent physician quality panel, and because of my longstanding interest in this area, they asked me to participate in something where they basically present the results, if you like, of their oversight of the return product, of their remote monitoring process, the new and old products to see whether there are any patterns or issues that arise, and if you ever hear that there's, for instance, Medtronic issues and advisory, just like all of the other manufacturers, that process involves physician consultation, and we are not bound to keep secrets in the sense that it's not our intention to try to withhold information. 
Our role is to protect the public or the patients, and each of the companies has this, and then those recommendations that we make then lead to what you see when, if you like, the news breaks and there's been some change in status or a recommendation regarding performance and then what's called field actions, and all of that also obviously involves the regulatory folks too. So this is a key document if you're interested in this area. This is our guidelines that apply to this area about system performance, a broad group of people, and one of the things that I was pushy about in my participation of this was to state that we have good precedent for lead performance, which are by far the most vulnerable parts of the systems, and so one precedent is we have two longstanding ICD leads that fail at about four per 100 at 10 years, or 0.4% a year. So we say ICD leads need to be that good because we've got two leads we can rely on that work that well. Pacemaker leads should be even less likely to fail, probably half that. So those are sort of rough benchmarks, if you like, for how reliable leads should be as an example. So October 15th, 2007 was the day that the Sprint Fidelis lead advisory came out, and many of us had been so thrilled by the idea that ICD leads could be small and easy to put in, just like pacemaker leads were, that we put these in by the bucket, and they turned out to have a higher fracture rate than we would have liked, and those fractures then led to the system detecting the ends basically rubbing against each other and leading to shocks for patients. So the leads were failing, the patients were getting shocks, no one was happy, and this became an official event, and a couple things happened. 
First of all, I don't know how many of you were busy with email at that time, but most of us weren't very busy with email, but for the first time ever, I got more than 100 emails in an hour, and that's because we had just published that previous paper, I was the de facto Canadian lead in this area, and then everyone involved in leads emailed me and said, what's everybody doing? So we had this sort of virtual blogging, which by now is sort of boring, but back then was quite a deal, and so this lead then became probably the most recognized lead advisory, or at the time the terminology was recall, around a lead that was not entirely reliable. But ironic, at the same time this is published, or at the same time this happened, this paper was out in circulation. So this is a review of ICD leads from the decade before, from the 90s, and their lead performance. So you see here on the left in the survival curve, 40% of these leads have malfunctioned by eight years. Okay, so the math on that is, that's not a very reliable system, no matter what you set your threshold at, certainly a lot different than one in 10,000, okay? Here's the Sprint Fidelis. It's a little bit of a, you know. So in other words, it's comparable to its predecessor, maybe even a little better. But by that point, we had raised the bar and said our expectations are higher than this. And the reason for that is, at the same time, there were two leads that were doing this. Okay, this is the four per 1,000 per year, 0.4%. So these are even more reliable. So we will always take the best thing we have and say the next one needs to be better. That's humanity aspiring to be better. It's a wonderful thing, but there's been a learning curve through this process about improving our quality. And this reflects robust quality processes that have happened at the manufacturer and the surveillance and also at the clinical interface as well. So, and these are the difference makers. 
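[Editor's illustration] The benchmark arithmetic above can be checked directly: assuming a constant annual failure rate, 0.4% per year compounds to roughly 4% cumulative failure at 10 years, whereas the 1990s-era leads, at 40% malfunction by eight years, imply an annual rate more than ten times higher. The function name here is an illustrative sketch, and the constant-rate assumption is a simplification of real survival curves.

```python
def cumulative_failure(annual_rate, years):
    """Cumulative failure probability assuming a constant annual failure rate."""
    return 1 - (1 - annual_rate) ** years

# Benchmark leads from the talk: 0.4% per year -> about 3.9% at 10 years.
print(f"{cumulative_failure(0.004, 10):.1%}")

# Implied annual rate of the 1990s-era leads (40% malfunction by 8 years):
# about 6.2% per year, roughly 15x the modern benchmark.
implied = 1 - (1 - 0.40) ** (1 / 8)
print(f"{implied:.1%}")
```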
So I don't know if any of you are, anyone in the audience an engineer? Oh, wonderful. Okay, so this is fantastic. So this is my tribute to engineers because I think they're wonderful people, okay? These are all of the, this is all of the laundry list of wonderments that have come to us in the last 10 years, thanks to engineers. And on the right is a picture of a group of engineers. And the other thing that my personal observation is with engineers is if you look in the front of the picture, or if you like our typical engineers, as my wife says, they look like they're smart. But at the back half of that picture, what you see is the next generation of engineers, right? So there's many more females, a lot of diversity in that group too, because the engineering community has really upped their game in this area around innovation as well as quality. So look at all these things on the left, the laundry list you've already read, which are our day-to-day things, which are, we have tiny little cardiac monitors, we have extravascular ICDs, we have quadripolar leads, we have leadless pacing with several platforms, we have the biggest transformation in software management and remote monitoring, we have MRI compatibility and lots in the pipeline too. So there's a real transformation around innovation and arguably some of it is triggered by the fact that we stopped doing incremental and started doing disruptive innovation in this space. So it's a really fun time to be in this field. So in conclusion, these devices are incredible technology and they're getting better and better with time. Quality is like a collective process and the drivers of the improvement are manufacturing processes and engineers. We need partnership to both exchange the clinical implications of things as well as reporting back on the quality of performance and so on. And failure is inevitable, but it's getting increasingly marginal and that's wonderful news. Thank you.
Video Summary
The speaker discusses the importance of quality control and improvement in cardiac implantable electronic devices (CIEDs). They emphasize the need for active systems to ensure the reliability and safety of these devices. The speaker mentions that there have been instances of device failure in the past and highlights the role of remote monitoring in detecting patterns and clusters of failures. They also mention the importance of root cause analysis in understanding why failures occur and improving future devices. The speaker discusses the impact of the Sprint Fidelis lead advisory in raising the bar for lead performance standards and how the industry has responded with innovative and more reliable devices. Overall, the speaker concludes that while failure is inevitable, improvements in manufacturing processes and engineering have led to increasingly marginal failure rates, making these devices more reliable over time.
Asset Caption
Andrew D. Krahn, MD, FHRS, Division of Cardiology, University of British Columbia, Vancouver, BC, Canada
Keywords
quality control
CIEDs
device failure
remote monitoring
innovative devices
marginal failure rates
Heart Rhythm Society