Genetics and Arrhythmias: Beyond Mendel's Peas
Genetics Functional Testing: Are We Ready for Prime Time? (Presenter: Alfred L. George, MD)
Video Transcription
I'd like to introduce our next speaker, Dr. Al George from Northwestern University in Chicago. He's going to speak to us on "Genetics, Functional Testing: Are We Ready for Prime Time?" Thanks.

Dr. Al George: Thank you very much. I appreciate the opportunity to talk today in this really interesting session. The focus of my talk is going to be on the rare variant side of the story. Despite the somewhat inarticulate title I was given ("genetics, functional testing"; "genetics and functional testing" would have been more grammatically correct), the real mission of my talk is to convince you that we are approaching a day when genetics coupled with in vitro functional evaluation of variants of unknown significance may merge, and that this may add value to clinical genetic testing.

Everyone in this audience is probably aware that there is a great deal of genetic heterogeneity among syndromes of increased arrhythmia susceptibility that have a monogenic or predominantly monogenic basis, and a large number of the genes associated with that susceptibility encode ion channels. I've cherry-picked a list of those genes for this table, which shows the ion channel arrhythmia susceptibility genes that are prevalent in the literature, along with the number of genetic variants for each that appear in the Human Gene Mutation Database. These are extracted from the literature; they are the number of variants published as of the end of 2017. Sorry, I didn't update the slide, but the numbers are even more impressive today. I'm going to focus a lot of attention on KCNQ1, which is the most prevalent gene in congenital long QT syndrome. At the end of 2017 there were well over 600 variants in the literature, and many more are available in ClinVar and other databases.

So genetic testing laboratories are challenged by how to classify variants that have never been seen before, or those that have been seen before but for which there is not a lot of data. I think many people are familiar with this cascade of classifying terms: pathogenic, likely pathogenic, benign, likely benign, and then, of course, in the middle, the dreaded variant of unknown significance. Unfortunately, a large fraction of genetic test results come back to the clinician who ordered the test as a variant of unknown significance. The standardization of this classification scheme got better in 2015, and most laboratories, I believe, adhere to the guidelines published as a recommendation from the American College of Medical Genetics, which stipulate criteria for classifying variants into these different categories. Among the various data in that paper was a table listing the elements considered strong evidence in favor of pathogenicity. One of them is a criterion known as PS3, which states that a well-established in vitro or in vivo functional study showing a damaging effect of the variant on the gene or gene product weighs in favor of calling the variant pathogenic. The opposite is also listed among the criteria favoring a benign classification. Keeping that in mind, how many variants in the literature are variants of unknown significance?
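To make the PS3 idea concrete, here is a minimal sketch (not from the talk) of ACMG-style evidence combination. It deliberately simplifies the 2015 ACMG/AMP combining rules to two evidence tiers; the point is only that adding PS3, well-established functional evidence of a damaging effect, can move a variant out of the VUS bucket.

```python
# Simplified ACMG-style evidence combination (illustrative only).
# The real 2015 ACMG/AMP rules have more codes and combinations; this
# sketch shows how PS3 (well-established functional evidence of a
# damaging effect) can shift a classification.

PATHOGENIC_STRONG = {"PS1", "PS2", "PS3", "PS4"}
PATHOGENIC_MODERATE = {"PM1", "PM2", "PM3", "PM4", "PM5", "PM6"}

def classify(evidence: set[str]) -> str:
    strong = len(evidence & PATHOGENIC_STRONG)
    moderate = len(evidence & PATHOGENIC_MODERATE)
    if strong >= 2:
        return "pathogenic"
    if strong == 1 and moderate >= 1:
        return "likely pathogenic"
    return "variant of unknown significance"

print(classify({"PM2"}))          # variant of unknown significance
print(classify({"PM2", "PS3"}))   # likely pathogenic: PS3 tipped the scale
```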
Well, in this case I turned to the ClinVar database, took the same list of genes, and looked at the proportion of variants in that database that were listed either as variants of unknown significance or as having conflicting interpretations that included VUS as one of the possibilities. You can see that the proportion of such variants among this list of genes is somewhere between a third and two-thirds, depending on the gene. For KCNQ1 in particular, around a third of the 900 or so variants in ClinVar are given that designation. So this represents an enormous challenge, and it really confounds the value of clinical genetic testing.

So how could we add PS3 evidence to help push a variant one way or the other along this cascade of classification terms? There are a number of approaches that use computer models and algorithms to assess the deleterious effect of a variant based on sequence conservation, some structural information, and other input variables. These have not really been validated for clinical use, and they have, more or less, not been validated against experimental tests of function. So this is a nice opportunity, and it's used widely in research, but more validation is needed to improve its reliability. And then, of course, there is in vitro and in vivo experimentation. In vivo is a little slower; in vitro is probably a little faster in general. So I'll talk a little bit about our strategy of using in vitro approaches.

For ion channels, in vitro assays are very well established. These take advantage of a classical technique called patch clamp recording, which records from single cells into which a variant channel plasmid has been introduced to heterologously express the channel of interest. One then uses a one-cell-at-a-time measurement strategy to determine the functional consequences of that variant. This is a gold standard technique, widely used in research, and the question is whether it could be robust enough to make, or help a geneticist make, a call on variant pathogenicity. In reality, geneticists and genetic testing labs look to the literature for evidence of variant functionality all the time, so I think the answer is already established that functional evidence from this kind of measurement is well accepted. The problem is that this is a very slow technique, and given the large number of variants in multiple genes, it's just not up to the task.

Fortunately, the technology has evolved over the last decade, and now there are automated technologies, including this platform, called the SyncroPatch, which takes a different approach to patch clamp recording: instead of one cell at a time, it records in parallel in 384-well plates, so one can make many measurements at once. This machine can run two 384-well plates at a time and make about 700 measurements in an hour. That increases throughput dramatically without a great loss of fidelity or robustness. This is a very sophisticated instrument, such that it now has its own Twitter handle. If you want to learn more about the methodology, I refer you to a recently published paper from our laboratory that details much of the validation of the method for studying variants in KCNQ1.
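As a rough illustration of the ClinVar tally described above, the sketch below computes a per-gene fraction of VUS or conflicting interpretations. It assumes ClinVar's public tab-delimited variant_summary.txt dump and its GeneSymbol and ClinicalSignificance columns (column names may differ between releases), and the gene list is illustrative, not the exact set from the slide.

```python
# Per-gene fraction of ClinVar entries that are VUS or carry conflicting
# interpretations. Assumes the public variant_summary.txt dump from
# ClinVar's FTP site; gene list is illustrative.
import pandas as pd

GENES = {"KCNQ1", "KCNH2", "SCN5A", "KCNE1", "KCNE2", "KCNJ2", "CACNA1C", "RYR2"}

df = pd.read_csv("variant_summary.txt", sep="\t", low_memory=False)
df = df[df["GeneSymbol"].isin(GENES)]

def is_vus(sig) -> bool:
    s = str(sig).lower()
    return "uncertain significance" in s or "conflicting" in s

for gene, group in df.groupby("GeneSymbol"):
    frac = group["ClinicalSignificance"].map(is_vus).mean()
    print(f"{gene}: {frac:.0%} VUS/conflicting of {len(group)} entries")
```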
Just so you know, here is a cartoon of the technology at the heart of these two methods. Manual patch clamp uses a mobile glass electrode brought to the membrane of a fixed cell to make the recording, which is why it is cumbersome, time- and labor-intensive. The automated patch clamps are almost all based on planar patch clamp, where the electrode is essentially a glass surface in the bottom of a 384-well plate onto which a cell falls, settles, and gets stuck; you can form a high-resistance electrical seal at that junction and make recordings with the same level of fidelity as manual patch clamp. I show this only to make an analogy to the technological advances in genetics. Back in the old days, we did DNA sequencing by the Sanger method, on gels: labor-intensive, slow, low throughput. Around the mid- to late '90s the technology was automated with capillary electrophoresis, and of course we're now well beyond that with next-generation sequencing. The same sort of advance, I think, parallels what we're seeing now in electrophysiology, and the analogous advance in genetics is what enabled a lot of the modern-day genetic testing we do.

So I'm going to focus a lot of attention on what we've done to try to decrypt variants of uncertain significance in the KCNQ1 gene, a potassium channel gene with an important accessory subunit partner known as KCNE1. The combination of these two subunits creates a well-studied and important repolarizing current in the heart known as IKs. We've now studied well over 100 genetic variants in KCNQ1. Here is an example of the kind of data we get from this technology: a plate used to demonstrate the functionality of wild-type IKs, with cells transfected with the combination of the two subunits, showing the characteristic slow delayed rectifier current of IKs.

To summarize much of the work we've done: one challenge has been to assign simple terms that reflect the functionality of a variant based on its detailed functional properties and biophysical characteristics. This work came from an extensive review of the literature, in which we looked at what was published and how the results matched up with functional classifications, and we basically created a matrix that simplifies the characteristics of a channel variant into categories such as loss of function, gain of function, and normal function. Using that matrix, we have now classified 109 KCNQ1 variants. Their distribution across the channel is a little asymmetric, but these are the variants we've studied to date. In the coding here, blue represents variants with normal function, green those with a slight gain of function, and red those with a severe loss of function. Unfortunately, the ones in yellow are somewhere in the middle and difficult to classify; those are our functional variants of unknown significance. So this is a first step. We hope to saturate the channel with mutations going forward, and the idea is to keep increasing the throughput, speed, fidelity, and reliability of these kinds of measurements, but this is where things stand.
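The "simple categories" matrix is easier to see in code. The sketch below is a toy version with hypothetical thresholds, not the published cutoffs from the speaker's paper; a real assay would normalize each variant's measurements to same-plate wild-type IKs and consider many more biophysical parameters.

```python
# Toy version of the functional-classification matrix. Thresholds are
# hypothetical placeholders, not the published cutoffs; rel_current is
# peak current density normalized to same-plate wild-type IKs.
from dataclasses import dataclass

@dataclass
class IKsMeasurement:
    rel_current: float       # 1.0 = wild-type peak current density
    v_half_shift_mv: float   # shift in half-activation voltage vs wild type

def classify_function(m: IKsMeasurement) -> str:
    if m.rel_current < 0.25:
        return "severe loss of function"
    if m.rel_current < 0.75 or m.v_half_shift_mv > 10:   # depolarizing shift
        return "partial loss of function"
    if m.rel_current > 1.25 or m.v_half_shift_mv < -10:  # hyperpolarizing shift
        return "gain of function"
    return "normal function"

print(classify_function(IKsMeasurement(rel_current=0.10, v_half_shift_mv=0.0)))
# severe loss of function
```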
Now, as a thought experiment, we took 48 of the variants we had studied, culled from databases and some from genetic testing companies, not yet published, all classified as variants of unknown significance. We gave that list of variants, without any other information, to two genetic testing laboratories and asked them to classify the variants using their standardized methodology; we did not provide any functional data. Here is the result. Laboratory A classified most of them as variants of unknown significance, with three considered likely pathogenic. Laboratory B had more resources at its fingertips and classified them a little differently, but the two laboratories agreed on only about two-thirds of the variant calls. On the other hand, when we gave each laboratory functional data coded in the simple way I described (loss of function, gain of function, normal, et cetera), here is how the numbers shake out: both laboratories showed a significant reduction in the proportion of variants they would call VUS, and their concordance is around 96 percent. By the way, this concordance is computed by pooling pathogenic and likely pathogenic and comparing that against VUS and the other categories. So there was a significant drop in the proportion of VUSs, and there was better agreement across these two particular laboratories.

As I think about how this process of doing functional studies of genetic variants might move forward into a reliable and robust means of helping genetic testing labs classify variants, here are the major challenges we have to address. First, the turnaround time has to be on par with that of the genetic testing laboratory; if they have to wait six months, they're going to write their report and be done with it. We've been working a lot on optimizing all the steps leading up to the functionality report. We're probably at a point now where we can blast through a list of a dozen or so variants in under three weeks; it probably needs to be closer to two weeks to really keep pace with clinical work. Second, there has to be standardization of the report format, and this is something we'll need community input on: how do we standardize what we're learning about functionality? Third, any measurement that contributes to a laboratory test result has to come from a laboratory certified under CLIA. Would functional tests of this nature fall into that category? How would a functional testing laboratory, an electrophysiology laboratory, be CLIA certified? I don't know that that's ever been done before, but it's a challenge that has to be examined. And finally, who the heck pays for this? What does it cost to determine the functionality of a genetic variant, and would anybody accept that as something needed for clinical care? Big, big challenge. So, to summarize: many variants that are identified are of unknown clinical significance, and understanding their functional consequences may, in fact, improve the ability of geneticists to classify those variants.
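The concordance figure quoted above pools pathogenic and likely pathogenic into one bin and everything else into another before counting agreement. Here is a minimal sketch of that computation, with made-up data:

```python
# Two-lab concordance as described: pool "pathogenic" and "likely
# pathogenic" into one bin, everything else (VUS, likely benign,
# benign) into another, then count agreement. Data are made up.

def pooled(call: str) -> str:
    return "P/LP" if call in {"pathogenic", "likely pathogenic"} else "other"

def concordance(lab_a: list[str], lab_b: list[str]) -> float:
    agree = sum(pooled(a) == pooled(b) for a, b in zip(lab_a, lab_b))
    return agree / len(lab_a)

lab_a = ["pathogenic", "likely pathogenic", "VUS", "VUS", "likely pathogenic"]
lab_b = ["likely pathogenic", "pathogenic", "VUS", "likely benign", "VUS"]
print(f"{concordance(lab_a, lab_b):.0%}")  # 80% in this toy example
```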
And then the big answer to the question, is this ready for prime time: maybe almost ready. I think we're moving in the right direction. Is it ready today? I would say no, for many of the reasons I brought up on the prior slide. Anyway, I just want to credit the vast number of people in my laboratory and the collaborators who helped assemble this data set, and the NIH for funding it. So thank you very much. We'll defer to the audience for questions.

Female Speaker: Okay, great. Thanks, that was a really great talk. It's great to see high-throughput functional work starting to actually happen. I'm a genetic counselor, and I spend a lot of my job interpreting these variants, and I'm also involved in some of the ClinGen efforts to create gene-specific classification criteria. One of the things we've really struggled with is the validity of functional assays for actual clinical impact. When you look at these variants, you can create, or observe, an electrophysiological phenotype in a dish, but that doesn't always connect to what you actually see with segregation in families, or how rare the variant is, or other points of data. I'm curious what your thoughts are on how we get to a point where we've validated that these assays appropriately predict clinical pathogenicity at the human level before we start using them in a clinical laboratory.

Dr. Al George: Yeah, I think it's a great question. Our approach has been that we need to benchmark against something. So what we've done from the very beginning is always include variants that are very well established by all available criteria: there is family data, segregation data, uniform agreement that these are pathogenic variants, and those have to be in our assay system to train it and to quality control it. So, number one, that is what we benchmark against. Number two, this is certainly a new methodology, and in our paper last year we spent a lot of time comparing the gold standard manual patch clamp with the automated patch clamp to reconcile the two; that is part of the validation against standard methodology. But I think the major thing is to include controls and well-established variants that we can use as benchmarks. There will always be variants that cannot be classified, functionally, genetically, and the like, and we have to accept that this is not going to solve the problem for every variant. With that said, I think it can move the needle and reduce the VUS burden a lot, but it has to be done right. And you asked a really important question: benchmarking against known variants with known pathogenicity is key.

Female Speaker: I am going to speak on behalf of probably all the counselors and the physicians: I think this is absolutely critical as we continue to get more and more variants, and it's ultimately going to impact our clinical care once you get there. One of the challenges I wonder about, when I look at all the different variants and you use the patch clamp method for throughput, is whether some of these variants might affect not the protein's intrinsic function but, for example, trafficking of that protein. In other words, the protein itself functions fine, but it's just not getting to the cell membrane. With throughput methods like this, would such variants be classified incorrectly?
And then we've got a physician who doesn't understand that, they've got a patient, and they've got a report saying the functional analysis shows the protein is perfectly normal. So I just wonder, is that something you may want to be addressing?

Dr. Al George: Well, I think there's a good precedent for answering that question with HERG, KCNH2, where it's mostly mutations that impair trafficking, and that work was actually first done in heterologous cells. If a channel doesn't traffic in a myocyte, it will probably have trouble trafficking in a transfected cell. However, you bring up a really important point, which is that seeing a normally functioning channel doesn't give you 100 percent certainty that it's normal, because we're only assaying for some things; there could be things we don't know to test or can't test. So the important point is that normal function doesn't move the classification from VUS to benign; that we can't do. But a non-functional or dysfunctional channel, whatever the mechanism, is evidence that could help a geneticist push it the other way. We're also very careful when describing the functional consequences of a variant: we use functional terms, not pathogenicity terms. That's a really important dividing line; we'll let the geneticists make that classification. Leah?

Female Speaker: A very nice presentation, as usual. But in your experience with the more than 100 KCNQ1 variants that you analyzed, for how many of the variants of uncertain significance were you able to say, functionally, that they were probably disease-causing, and how many, in percentage terms, remain of unknown significance?

Dr. Al George: Yeah, that's a great question. Of the set of 48 that we gave to the genetic testing laboratories, a large fraction, I don't want to make up a number, but probably more than 75 percent, had really pretty severe dysfunction or severe loss of function. Of the rest, I think there were a couple of outliers that had pretty normal-looking function, and the others were somewhere in the middle. And by the way, the 109 include some population variants and some variants found incidentally in people who did not have an overt cardiac arrhythmia, so that total list is very heterogeneous. But among the ones we looked at that were variants of unknown significance from patients tested because of a phenotype, probably close to 80 percent, in our hands, had severe loss of function. So it was enriched, for sure. Okay. Here we go.

Female Speaker: Greg was going to ask you a question.

Dr. Al George: Oh, great. I thought he was coming up to talk. He's talking later. Sorry. It must be difficult.

Male Speaker: I am talking later.

Dr. Al George: Okay.

Male Speaker: I have a very clinical question to ask you. Many of the people in this room who are clinicians are sitting on a cohort of patients who have had tests done over time, many of which are no longer current in terms of the classifications but haven't been reclassified, and you have a much better sense than the rest of us of where the state of the field is from functional analysis.
How often do you think those need to be reclassified, and what group of people do you need on your team to be able to do that, recognizing that not everybody has the same expertise in their center?

Dr. Al George: Yeah, I think that's a genetics question. The reclassification of variants is something I know some of the bigger genetic testing companies are trying to do on a regular basis. I don't know that it falls into our camp so much as the geneticists', but I can tell you from other fields we're familiar with that there's an effort to review variants that remain unclassified and apply the newer algorithms to reclassify at least every year. Whether that's frequent enough, I don't know, but if you're sitting and talking to a family that has a VUS, you would like to know whether there's been an update in the last few years. So who would do that? Certainly, I think a genetic counselor, especially one affiliated with a genetic testing lab, can run the algorithms of the day and reclassify. In fact, one of our collaborators was one of the genetic counselors at your hospital's laboratory. Those are the people I would depend on for reevaluation, if it hasn't been done by somebody else. Okay. Thank you.
Video Summary
Dr. Al George from Northwestern University discusses the potential of genetics and functional testing in clinical genetic testing. He highlights the challenge of classifying variants of unknown significance (VUS) and the need for additional methods to determine the pathogenicity of these variants. Dr. George introduces the use of in vitro functional evaluation to assess the functional consequences of genetic variants in ion channel genes associated with arrhythmia susceptibility. He presents the patch clamp recording technique as a gold standard method for evaluating ion channels and discusses the limitations of the manual patch clamp method in terms of speed and throughput. However, he also introduces automated patch clamp technologies that allow for higher throughput measurements. Dr. George presents data on the functional classification of variants in the KCNQ1 gene and discusses the potential role of functional testing in improving the classification of VUS. He concludes by highlighting the challenges in implementing functional testing in clinical practice, including turnaround time, standardization of reporting, CLIA certification, and cost considerations.
Meta Tag
Lecture ID: 6684
Location: Room 203
Presenter: Alfred L. George, MD
Role: Invited Speaker
Session Date and Time: May 09, 2019 10:30 AM - 12:00 PM
Session Number: S-013
Keywords: genetics, functional testing, variants of unknown significance, ion channel genes, patch clamp recording technique, classification of VUS