Interview: Navigating the complexities of biodistribution, transgene expression & vector shedding in gene therapy development

Cell & Gene Therapy Insights 2023; 9(6), 509–519

DOI: 10.18609/cgti.2023.075

Published: 10 June 2023
Podcast
Paul Byrne


In this episode, Róisin McGuigan, Editor, BioInsights, speaks to Paul Byrne, Senior Director, Genomics, ProtaGene, about the evolution of the gene therapy field, with a specific focus on the complexities posed by biodistribution, transgene expression, and vector shedding in gene therapy development. 

Listen to the podcast here, or read the interview below.




You’ve been working in gene therapy for a long time now. What do you see as the key learnings of the industry related to bioanalysis of in vivo therapeutic approaches?

PB: Around 25 years ago when I first started supporting these studies, we were receiving cardboard boxes with room-temperature tissues wrapped in kitchen foil and leaking over each other, which obviously wasn’t ideal! There have been a lot of advancements since then. One really key learning is from the regulator’s perspective. We now see them being a bit stricter in terms of chemistry, manufacturing, and controls (CMC) and clinical considerations, but looking at preclinical development, they’ve become much more pragmatic and practical, and very open to science-driven justifications for how these in-life or analytical packages are designed.

A good example is preclinical biodistribution, which is a key part of investigational new drug (IND)-enabling preclinical studies. Historically, we would test about 40 tissues per animal. Now, while it does depend on the route of administration, the mechanism of action, and the tropism, we’re generally testing about 15 tissues on average. Obviously, this differs depending on the molecule we’re working with, but it’s a much more pragmatic approach to how we are supporting this work. The tissue lists differ depending on how the molecule is being dosed—here we’re really thinking about adeno-associated viruses (AAVs). An ocular gene therapy would be very different from something that was dosed systemically. But historically, we would just test the same tissues. Now you essentially select the tissues that are relevant to how you are dosing the material and what the tropism of your vector is.

Another example is the analysis of transgene expression. This is something we didn’t really do when gene therapies were starting to come onto the market, but then we started doing it for everything, no matter what. Now we do a quantitative PCR (qPCR) to look for the vector, and where we see the presence of that vector, we then look for the transgene. Again, this is very different and much more science-driven.

We are also seeing that potentially there’s no need to do these IND-enabling studies if a developer has already done them with a similar molecule. So this is if you’ve got an AAV with the same capsid and the same vector backbone, and all you are changing is the therapeutic gene. If you’ve got data for the previous version of that molecule to show that it was successful and moved into the clinic, then you’ve got good justification for skipping most of that preclinical development and going straight into clinical development. Again, it’s still very much on a case-by-case basis. I’m working with a few companies now who are waiting to hear from the regulators on this. It highlights how practical the regulators have become, which I think is a good thing. Although, there are also places where they’ve become a bit stricter around what we’re doing in development.

One of the other things that’s changed is the validation requirements, and this is very much focused on the molecular biology. Back in the day, we were just starting to understand how to validate these assays and what to consider—which parameters to assess, to what level we should validate the qPCR, and whether we should also validate the extraction. This is quite stringent now, especially for molecules that are in later-stage clinical development, and you have to make sure you’ve got a very precise and accurate assay to give you the confidence you need in the data that’s being generated.
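
To make those validation parameters a little more concrete, the sketch below (a minimal illustration with hypothetical numbers, not a description of any specific ProtaGene workflow) shows the standard-curve metrics typically checked when judging whether a qPCR assay is precise and accurate enough: slope-derived amplification efficiency, linearity across the quantifiable range, and back-calculation of an unknown sample.

```python
# Minimal sketch with hypothetical values: standard-curve metrics for a qPCR assay.
# Assumes a 10-fold dilution series of an illustrative plasmid standard.
import numpy as np

copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])        # input copies per reaction
cq     = np.array([17.8, 21.2, 24.6, 28.0, 31.4, 34.8])  # measured quantification cycles

x = np.log10(copies)
slope, intercept = np.polyfit(x, cq, 1)

# A slope of about -3.32 corresponds to ~100% amplification efficiency (perfect doubling)
efficiency_pct = (10 ** (-1 / slope) - 1) * 100

# Linearity (R^2) across the quantifiable range
r2 = np.corrcoef(x, cq)[0, 1] ** 2

# Back-calculating an unknown from its Cq gives the value used to assess
# accuracy (% recovery against nominal) and precision (%CV across replicates)
unknown_cq = 26.3
unknown_copies = 10 ** ((unknown_cq - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency_pct:.1f}%, R^2={r2:.4f}")
print(f"unknown sample ~ {unknown_copies:,.0f} copies/reaction")
```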

Finally, there is the equipment, which has been one of the big advancements. From a science perspective, the principles of what we do are pretty much the same, and this hasn’t changed in all those decades. The equipment we have now just makes it a lot easier to analyze samples as quickly as possible; there’s better connectivity, and it allows us to apply more high-throughput workstreams. A very good example is the automated extraction platforms we have now – we can extract nucleic acids from up to 96 samples in a single run.

What would you identify as the key commonalities—and the important differences—that exist in approaching bioanalysis of ex vivo gene-modified cell therapies?

PB: When we compare gene therapies such as AAV to things like chimeric antigen receptor (CAR)-T, there are probably more differences than similarities. The initial difference to note is the scope of the preclinical work. If you look at the regulatory approval documents for CAR-T molecules like Kymriah and Yescarta, you’ll see that not a huge amount of work is actually performed during that preclinical phase. They might do some in vitro or in vivo work on tumor activity and off-target effects, and then usually just a biodistribution study.

As a rule of thumb, when we’re doing preclinical development for cell therapies, the main analytical tool would be flow cytometry. There are some exceptions to that where we have used molecular biology tools, and then the way that we would support it would be quite similar, albeit a little bit more complicated because we’re usually looking for multiple targets for cell therapies.

Looking at the clinical aspect, the bioanalysis focus is again a bit different when considering biodistribution, shedding, and transgene expression. There are many other bioanalytical endpoints in these very complicated clinical trials, but we’re focusing on a small part of that. For AAV gene therapy the focus is primarily on shedding, but we can also use the same workflows, expertise, and equipment to look for things like replication-competent viruses and maybe any other adventitious agents as well. We can do the same thing for CAR-Ts.

For CAR-T, it’s more about monitoring that CAR-T, so looking at concentration and persistence over time. The main analytical tool here is qPCR, and the workflows and approaches are quite similar between the two, but the focus is different. For AAV there is more of a safety endpoint, whereas for the CAR-T, it’s all about monitoring that product. The take home message is that it is very important to understand the molecule that you’re working with, because the analytical requirements will all be quite different.
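
As a rough illustration of what that monitoring looks like numerically, the sketch below (hypothetical numbers only, and assuming the commonly used approximation of roughly 6.6 pg of genomic DNA per diploid human cell) converts a transgene qPCR readout into the units typically used to follow CAR-T concentration and persistence over time.

```python
# Minimal sketch with hypothetical values: normalising a CAR transgene qPCR result
# for persistence monitoring. Assumes ~6.6 pg of genomic DNA per diploid human cell,
# a commonly used approximation.
PG_PER_DIPLOID_GENOME = 6.6

def copies_per_ug_gdna(transgene_copies: float, input_gdna_ng: float) -> float:
    """Copies detected in a reaction, normalised to copies per microgram of genomic DNA."""
    return transgene_copies / (input_gdna_ng / 1000.0)

def copies_per_cell(transgene_copies: float, input_gdna_ng: float) -> float:
    """Average transgene copies per cell equivalent loaded into the reaction."""
    cell_equivalents = input_gdna_ng * 1000.0 / PG_PER_DIPLOID_GENOME  # ng -> pg -> cells
    return transgene_copies / cell_equivalents

# e.g. 4,200 copies detected in a reaction loaded with 100 ng of patient gDNA
print(f"{copies_per_ug_gdna(4200, 100):,.0f} copies/ug gDNA")
print(f"{copies_per_cell(4200, 100):.3f} copies per cell equivalent")
```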

How is the field of gene therapy evolving, and what new therapies and delivery methods do you see on the horizon?

PB: It seems that every day there’s a new or improved way to modify and deliver gene therapies, or a new generation of these adoptive cell therapies. Starting with gene therapy and AAV, I think there are almost two factions that are split geographically. In Europe, the sentiment is very much that AAV is still the future for gene therapy. Whereas in America, there seems to be a move away from it, with people thinking that maybe AAV has had its day. There are issues with immunogenicity for AAV-type molecules, and potential side effects from patients receiving very high concentrations of modified viruses. So, there is a push to move away from that and look for other delivery mechanisms—things like plasmids and lipid nanoparticles, and more non-viral delivery. I think perhaps the future lies somewhere in the middle. We’ll continue to see classical AAV-based therapies being developed and coming onto the market, and we will also see these non-viral delivery mechanisms coming onto the market at some point in the future.

The next couple of molecule types are not generally new technologies, but we are seeing a lot more of them, mainly in preclinical and some in clinical development as well. The first is gene editing, and we are seeing a huge amount of interest in that from a preclinical perspective. Again, that’s something we’re going to be seeing and hearing a lot of in the next couple of years. For the other types of molecules, it’s debatable whether these are actually classed as cell and gene therapy. But for things like oligonucleotides, silencing and microRNAs, and locked nucleic acids, if we do class them as cell and gene therapy then they make up about 25% of the molecules that are mainly in preclinical development.

Again, I think we’re going to see these smaller molecules playing a big part in preclinical and clinical development, and then potentially coming onto the market in the future. To make things even more complicated, we’re also seeing combinations of therapies—not only AAVs that are being modified to deliver these therapeutic genes, but also AAVs delivering gene editing tools as well. That can create a bit of a bioanalytical headache because we’re looking at different tests. We need molecular tools for looking at the AAV and the transgene expression, whereas gene editing is all about next-generation sequencing (NGS), so that can make the analytical work quite complicated. However, it’s very exciting to see people combining multiple therapies to try and make even better gene therapies.

Additionally, we’re constantly seeing novel approaches to how adoptive cell therapies like CAR-Ts are being developed. We’re now—rather terrifyingly!—seeing artificial intelligence and big data being used to modify those molecules at a genetic level to improve the safety and the efficacy. That gives us some headaches from an analytical perspective, but it’s really interesting to see where that’s going.

When the patient’s own T cells have been modified, usually that’s performed using an integrating virus, which has some safety concerns. I’m hearing that electroporation is getting some traction, so that’s something that may be on the horizon. Ultimately, under what I call the cell and gene therapy umbrella, there are a lot of very complex and diverse molecules. As those molecules evolve, the analytical methods need to evolve with them. It’s a very challenging space – but at the same time extremely exciting.

Why are biodistribution, vector shedding, and transgene expression such critical factors for ensuring the safety of cell and gene therapies?

PB: For the preclinical phase of development, it’s very important to understand the distribution and persistence of these molecules. In the case of AAV, we also need to understand the very high levels of gene expression, not only in target but also in non-target tissues. That data then needs to be correlated back to any toxicology findings before we can move into clinical development.

Shedding can play a big part during preclinical development, but some companies are not doing it. For the last two or three AAVs to come onto the market, there wasn’t any shedding assessment during the preclinical phase even though it’s a regulatory requirement. They instead did it in the clinical phase—this is another good example of how the regulators are being a bit more pragmatic and open to those science-based justifications for taking a different approach.

Within clinical development there are multiple other analytical endpoints we won’t touch on today, but shedding is a very important analytical safety endpoint. It’s crucial to understand what you are seeing, where, and for how long, as you move through clinical development and ultimately onto the market.

Can you outline the key challenges associated with assessing and monitoring biodistribution and vector shedding?

PB: The first will be no surprise to anyone who has worked in this industry and been exposed to the molecular biology tests, and it’s the lack of regulatory guidance. Currently there is no guidance for how these assays should be developed or validated. A few years back people were trying to get us to follow the guideline on Bioanalytical Method Validation (BMV), but that’s written specifically for things like ligand-binding and chromatography-type assays. It’s really not applicable to qPCR. I’m quite glad that we moved away from that conversation, but it still leaves us with nothing—so what do we do instead? There’s a lot of information out there. We have a number of academic publications, and the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines are a very good example. They are very research-focused and provide guidance for research scientists to make sure that the data they’re generating is of the highest quality, but there’s also a lot to take from them outside of academic institutions.

We’ve got many white papers, position papers, webinars, and podcasts in Europe and America. We’ve got consortiums of contract research organizations (CROs) all generating opinion papers. There are a lot of common themes but also some slight differences in how the work should be approached. I’m not seeing any assays that are being developed or validated badly. It all depends on the context of use and making sure it’s fit for purpose. However, I am concerned that as the regulators see more molecules being developed, and if they start seeing more of these IND-enabling studies, there may be a need for more consistency in how these assays are validated and the data presented.

It seems like every other week we’re seeing cell and gene therapies going on clinical hold. These holds are usually focused on CMC and usually related to the analytical tests. Potentially, moving forward, the regulators might start looking at the preclinical and clinical packages as well. My concern is that the lack of consistency in how people are validating these methods might become an issue.

The other issue is timelines and potential lead times, as there is high demand for this type of work. We’re seeing long lead times across the industry for supporting the analytical requirements for the preclinical and clinical development of these products. Companies aren’t really considering the time needed to develop and validate these assays. They are coming to us with samples for testing, and they want results next month. Obviously, we have to say no. It takes two to four months to develop and validate these methods. It’s very important that that’s factored into lead times. You don’t want to approach a vendor, go to the back of the queue, and potentially wait many months to have a validated assay.

The next challenge is what I see as a bit of a battle at the moment between the two main molecular biology tools. The first is qPCR: a very well-established, robust, and sensitive molecular biology tool that has been used to support the development of cell and gene therapies for decades now. Then we have the new kid on the block: digital PCR (dPCR). It has been around for roughly a decade now. In the last couple of years there has been an increase in demand to use it during the development of cell and gene therapies.

What new approaches are being developed to address these challenges? And what impact is the introduction of digital PCR having?

PB: I think the introduction of dPCR has muddied the water a little bit. We are seeing it being used more and more during the preclinical stage, supporting the manufacture of these products, and also in the clinical development stage. It is being included in a lot of literature, white papers, and webinars. Most of that focuses on the development and validation of the method. What I’m not seeing too much of is where it should be used, and more importantly, where it should not be used.

If you start with CMC, then dPCR is a perfect fit. If you are manufacturing batches or changing your manufacturing process, you want as much confidence as you can get before dosing this material – you want an accurate, precise assay. That’s exactly what dPCR brings to the party: much higher precision and accuracy. It’s an extremely good fit for those CMC-type applications.

During clinical development it makes some sense to use dPCR. It’s potentially better at overcoming inhibitors, although if you’ve got a good extraction method, that will remove any inhibitors present in those samples. It may also increase the chance of detecting rare events due to the multiple reactions that are set up. But what we’ve seen when detecting signals close to or below the limit of quantitation, but above the limit of detection, is really high levels of variability, making the assay quite difficult to validate. You really need to consider the pros and cons of these methods and what you’re trying to achieve before selecting the most appropriate test. If you’re looking for very precise readings of something you’re going to get at a good concentration, then it may be a good fit. If you’re trying to quantify material that’s down near the limits of your assay, then it’s potentially not a good fit.

Preclinical is the phase of development where, to me, it makes the least amount of sense to use dPCR. It has a very small dynamic range, whereas qPCR has quite a large one. If you think about an AAV preclinical biodistribution for something that’s dosed systemically, you’re going to get very high concentrations of your vector in target and non-target organs. You’re going to get variability between different animals and different groups. It’s very difficult to dilute that material consistently down to the sweet spot of that small dynamic range, and that’s going to result in multiple repeats.
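
To illustrate why that dynamic range matters, the sketch below works through the Poisson arithmetic behind dPCR quantitation using assumed, purely illustrative partition counts and volumes (none of these figures come from the interview). Because concentration is recovered from the fraction of positive partitions, the usable window between barely any positives and a nearly saturated reaction spans only a few logs, so a high-titre biodistribution extract has to be diluted into that window, which is where the repeats and extra cost come from.

```python
# Minimal sketch with illustrative assumptions (not any specific dPCR platform):
# Poisson-corrected quantitation and the resulting dynamic range of a single reaction.
import math

partitions = 20_000          # assumed number of partitions per reaction
partition_vol_ul = 0.85e-3   # assumed partition volume in microlitres

def copies_per_ul(positive_partitions: int) -> float:
    """Concentration from the count of positive partitions, with Poisson correction."""
    p = positive_partitions / partitions
    lam = -math.log(1 - p)            # mean template copies per partition
    return lam / partition_vol_ul     # copies per microlitre of reaction

low  = copies_per_ul(10)                        # near the lower quantifiable limit
high = copies_per_ul(int(partitions * 0.999))   # a nearly saturated reaction

print(f"low:  {low:,.1f} copies/uL")
print(f"high: {high:,.0f} copies/uL")
print(f"usable span: ~{math.log10(high / low):.1f} logs before dilution is needed")
```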

When I’m talking to potential sponsors and clients, the take-home message for me is that for many applications, a qPCR assay that is well designed and validated should be sufficient. There are obviously exceptions to that rule. If you’re trying to multiplex an assay, then dPCR may be a good option. And because dPCR gives absolute quantitation, if you don’t have positive control material for a standard curve, or you’re concerned about PCR bias, it’s potentially easier to develop and validate. For most other molecules and applications, qPCR will give you all the results that you need.

For preclinical studies you could be looking at thousands of samples. For clinical, it depends on how the molecule is being delivered and maybe what kind of shedding you’re seeing. Some dPCR platforms are very expensive to run compared to qPCR. The reagents are expensive, the plasticware is more expensive, and it takes a lot longer to run, which is factored into the cost as well. That additional cost and duration can be prohibitive for running these types of studies using dPCR. When we factor in that small dynamic range and the fact that you’re going to see more repeats, again, that’s going to increase your cost and the duration of the study, and potentially make it prohibitive.

However, to go against everything I’ve just said, we are continuing to see an increase in demand to use dPCR, in both preclinical and clinical, including among the clients we’re engaging with. Some are early adopters of the technology and understand dPCR very well. They understand the pros and the cons, and more importantly, they know where their product is going and what concentration it’s going to be at. They can therefore put in place the necessary dilution scheme to get it into that sweet spot of the dynamic range and make the assay as efficient as possible. On the other hand, there are clients who just see it as the shiny new piece of equipment in the lab and want to use it no matter what, even if it’s going to take longer, cost more, and not give them any more data than a qPCR.

It’s quite an interesting, and sometimes frustrating, space to be in at the moment. We have lots of conversations about dPCR. Ultimately, if you are developing a molecule and you’re trying to decide what the best molecular tool would be, I would advise people to do some research and perhaps try and engage with a subject matter expert. It’s important to try and understand what the best tool for you would be to provide the data to move your product through the different phases of either preclinical or clinical development.

How can the insights and best practices we’ve discussed today be applied to specific R&D efforts? What would your key advice be for readers working in this space?

PB: We touched on the complexity and diversity of molecules that are currently captured under the cell and gene therapy umbrella. It’s important to understand the molecule that you’re working with and choose the best analytical tool. To summarize, for anything cellular-based, flow cytometry is a very good fit. For anything gene therapy, you’re looking at the molecular biology tools, but care should be taken over which one you use.

For the smaller molecules, such as oligonucleotides, silencing RNAs, and locked nucleic acids, mass spectrometry is very good. Finally, for anything where you’re modifying the genome, NGS can be a good tool. And if you are combining multiple therapies, then you’re looking at maybe two, sometimes three analytical endpoints, and they are not easy to perform. This is something to be aware of.

We also touched on the lack of regulatory guidance, and this is more specific to the molecular biology tools. If you are developing a method, then just ensure that you are developing it and validating it as well as you can. Take into consideration what phase of development you’re currently at, but think about future-proofing. There is a minimal amount of additional work that you could do within the preclinical phase in order to future-proof these assays so they can be used in CMC—albeit the validations will be a little bit different—and also in clinical development. For example, a qPCR used to look at the biodistribution of an AAV preclinically can also be used to look for shedding in the clinical environment, just by including the appropriate matrices in your validation.

In the absence of any kind of regulatory guidance, we need to utilize the resources we have, and there’s a lot of them. Don’t be scared to make decisions that are based on the context of use and your own expertise. You will know the molecule you’re working with better than anyone and you might know the technology better than anyone, so don’t be afraid to make some decisions that are a little bit different to what you’ve seen out there. As long as you’re taking a very science-driven approach to it, then I don’t think you can go wrong.

Finally, these products are expensive to develop and bring to the market, and we really want to minimize any delays. Planning is crucial—make sure you’ve got all your assays ready to go, so you can support whatever phase of development you’re at.

Biography

Paul Byrne is Senior Director of Genomics at ProtaGene. He has 25 years of industry experience and can frequently be found speaking at symposia on topics such as analytical development challenges for advanced therapy medicinal products (ATMPs), biodistribution and safety assessment considerations for cell and gene therapies, and more. Paul received his BSc (Hons) in biology from the University of Stirling (UK) and his MSc in research from the University of Glasgow (UK).

Affiliation

Paul Byrne
Senior Director, Genomics,
ProtaGene

Authorship & Conflict of Interest

Contributions: The named author takes responsibility for the integrity of the work as a whole, and has given his approval for this version to be published.

Acknowledgements: None.

Disclosure and potential conflicts of interest: The author has no conflicts of interest.

Funding declaration: The author received no financial support for the research, authorship and/or publication of this article.

Article & copyright information

Copyright: Published by Cell and Gene Therapy Insights under Creative Commons License Deed CC BY NC ND 4.0 which allows anyone to copy, distribute, and transmit the article provided it is properly attributed in the manner specified below. No commercial use without permission.

Attribution: Copyright © 2023 ProtaGene. Published by Cell and Gene Therapy Insights under Creative Commons License Deed CC BY NC ND 4.0.

Article source: This article is based on a podcast with Paul Byrne, which can be found here.

Podcast recorded: May 12, 2023; Revised manuscript received: May 24, 2023; Publication date: Jun 9, 2023.


This article is part of the Gene therapy CMC and analytics spotlight