Fit for function: developing potency assays reflective of the in vivo environment

Cell & Gene Therapy Insights 2024; 10(4), 431–439

DOI: 10.18609/cgti.2024.057

Published: 10 May 2024
Interview
Giorgio Zenere, Dirk Windgassen



Translating the therapeutic promise of cell and gene therapies into clinical reality relies on robust potency assays. However, designing assays that accurately reflect the complex mechanisms of these therapies can feel like chasing a moving target. Here, Charlotte Barker, Editor, Cell & Gene Therapy Insights, speaks with Giorgio Zenere, CMC technical project lead in the Global QC Technology Innovation Team, Kite Pharma, and Dirk Windgassen, Director of Analytical Development, Miltenyi Biotec, to discuss best practices and future trends in developing potency assays for cell and gene therapies.


Listen to the podcast here, or read the interview below.




Can you each start by introducing yourselves and telling us what you are working on right now?

GZ: I have been in cell and gene therapy my entire career. I did a PhD on CAR T-cells against HIV and then went to work for a biotech company, where I was in the R&D and drug discovery departments, looking at novel CAR T-cell strategies against solid and hematological tumors. Through that work, I learned a lot about potency assay development. Now I work at Kite Pharma, where I develop and validate novel analytical technologies for commercial CAR T-cell therapies and consult on late-stage clinical products.

DW: I have also been working in the cell therapy field for many years. I started out with a PhD in immunotherapy applications, trained as a biochemical engineer, and worked in assay diagnostics for many years. Now, I’m leading Miltenyi Biotec’s assay development team in San Jose, California. We develop assays for clients, including potency assays. We have assays for CAR-T cells, natural killer cells, hematopoietic stem cells, and others currently in development.

What are the greatest challenges in developing potency assays for cell and gene therapies?

DW: Bioassays come with several challenges. There is a lot of variability, and the biological system itself is inherently variable. To enable the qualification and validation of such an assay, we need reference standards, controls, and suitability criteria—and that reference material needs to be produced, maintained, and qualified.

Another challenge is the timeframe needed to measure biological responses. The 24–48 hours needed for some assays could make it challenging to release a freshly made product on time.

Ultimately, what regulators like the US FDA are looking for is a correlation with clinical efficacy, and that is also a major challenge—to use an in vitro assay to reliably predict in vivo response.

GZ: For me, the greatest challenge depends on the purpose of your assay. A lot of potency assays, especially in the early days, were developed with the intention of predicting how well your cell and gene therapy will perform in patients. As Dirk just mentioned, there is poor correlation between in vitro potency and clinical efficacy because the in vitro assay cannot accurately model the complex microenvironmental conditions that you would see in a disease, such as a solid tumor or HIV.

However, if your goal is to check that your manufacturing process is giving you a product that’s within specifications, there are well-established and well-controlled potency assays available.

DW: I agree. In many cases the goal may not be to develop an assay that is reflective of in vivo conditions, but rather one that relates to the mechanism of action and can be used to consistently guarantee the safety of the product. If that is accomplished, I believe the FDA is very open to receiving such an assay for a commercial product.

What is the latest regulatory guidance on potency assays for cell and gene therapy, and are there any gaps or areas of disharmony between regulators?

GZ: I’m by no means a regulatory expert. However, in my field, the ICH Q2(R2) guidelines come into effect in June. Those are the latest guidelines that we look at for validation.

In my opinion, sometimes people have unrealistic expectations of the guidelines. The guidelines show you the minimum parameters that your method must pass to be viable. However, they do not tell you about other parameters that are going to be critical depending on where you are implementing your method. An example is instrument reliability. It’s a parameter that could make or break your assay depending on where you are implementing it.

If you are implementing it as a commercial method that sees thousands of assays and thousands of patients a year, the wear and tear on that instrument is going to be significant, and any failure could hinder your operations because you would not be able to release all those patient samples. In contrast, if the same instrument were used in a clinical program that sees only 100 patients a year, where you do not have the same level of wear and tear, you would be less concerned with that parameter. This is where I see a big gap between where the industry is and where the guidelines are.

DW: The FDA released an update to their draft guidelines late last year. What I read from it is that they would like to see potency assays included more in process development for new products, with frequent mention of critical quality attributes (CQAs), process parameters, and potency assurance strategy. They want applicants for INDs to think about a strategy for potency testing early on.

They expect developers to start with a matrix of assays—multiple assays, capturing multiple modes of action of your product—and then narrow these down if possible during development. The guidance may still lack some examples, but it is maturing and becoming more formalized, and the industry is evolving.

How can assay development be streamlined, while maintaining cost-effectiveness and safety?

DW: This is one of the primary concerns of our clients, who are often working to tight timelines. Nowadays, there is more awareness that they need to think about the assays at the same time as the process. However, some of the assays need a lot of time for development and often developers do not allow enough time. In potency assay development, we are lucky in that we are not required to have a finished potency assay ready for an IND filing.

At a minimum, there should be a plan involving multiple assays that can be used to characterize the process early on. In other words, back multiple horses instead of pinning all your hopes on one. Early on, a lot of the work we do is to look closely at the CQAs in your process and ensure that the assays address those. Some careful thought ahead of time helps a lot with timeline planning afterward.

GZ: I’m a big believer in doing upfront, exhaustive development work. I have seen multiple times in my career that doing bare-bones development gives you an imperfect method at best. It might be good enough for a Phase 1 clinical trial, but it will not be suitable for Phase 2, Phase 3, or validation of a commercial method.

If you try to move forward with a suboptimal method long term, eventually the FDA or other regulatory agencies will ask you to fix that method. Analysts can end up spending a considerable amount of time trying to fix inherent flaws or even having to start the entire development again with another method.

In addition, post-Biologics License Application method changes are very expensive and time-consuming, so saving time upfront by doing the bare minimum in your development will cost you more in the long term. It could also derail your entire implementation plan, delaying your clinical or commercial pipeline by a year or more, depending on how long it takes you to develop a robust method afterward.

What strategies can we use to address the variability inherent to cell and gene therapies?

GZ: In my opinion, a lot of the variability that we see in cell and gene therapy comes from the fact that most of the methods we currently use are inherently analyst-intensive, manual methods, with a lot of hands-on time. Whether you look at ELISA or flow cytometry, you could have hundreds of pipetting steps throughout the process over multiple days. That means the potential for analyst-to-analyst variability is very high.

Given that, one of the strategies that I see in the field to address that variability is automation, because if you can automate certain processes, you reduce some of the variability that’s inherent to manual processes.

DW: This analyst-to-analyst variability has been the greatest challenge for us too. Clients with fast timelines, who put all their efforts into one assay, may enter qualification studies and suddenly find out that even trained operators are not able to reproduce the method within the coefficients of variation that we would like to see. As Giorgio points out, going back and re-optimizing the methods is very difficult.

Automation is useful and there are some instruments available in the field that help with that. Miltenyi is putting some effort into exactly this area, trying to automate flow cytometry methods—we see good opportunities in that area.

Another important point is the biological system. All the materials we use in cell therapy are biological materials and inherently variable. Even when using cell lines, you can still see variability. There are some efforts toward replacing those biological systems with more artificial targets (e.g., beads) that mimic cells. These systems do not have the total biological functionality of cell culture, but they can mimic some aspects; for example, they can trigger T-cells to make certain cytokines. There has been some success in that area, and I think that it will evolve. Qualifying new cell lines is a huge effort, so relying on more artificial targets for your cocultures is very beneficial for cost-effectiveness too.

GZ: It is true that automation is not perfect in itself. Dirk mentioned costs, and it has to be acknowledged that automation is costly, especially in the short term. However, there are long-term benefits if your volumes are high enough to warrant the initial investment.

I also agree that having good processes in place is important to ensure that your target cell lines are very robust across different batches and lots. It also goes back to the question of whether you are using an indirect or direct mechanism of action. An indirect mechanism of action can potentially have more variability because you are measuring the concentration of cytokines versus directly measuring cytotoxic killing with methods such as flow cytometry, bioluminescence, or even the good old chromium-release assay.

What technological advances do you see (or would you like to see) coming down the line for CGT assay development?

DW: There are some advanced single-cell technologies that have proven to have value, although we are still exploring what the value is. Sequencing and other techniques have also proven themselves scientifically, but they remain too costly to incorporate routinely. People continue to use very simple methodologies because they are more robust and keep costs down.

I think there should be efforts to be more data-driven—to gain more data from these bioassays and use those data to give a more detailed picture of the response. I’m not seeing that much yet, and I feel this is because of the cost and effort it takes to develop those types of assays. I wish there were a bit more data-driven thinking in potency assay development.

GZ: Automation is the low-hanging fruit right now because it does not change the paradigm of how we measure specific parameters; it just reduces variability by removing manual parts from the assay.

My personal opinion is that any new technology that shifts the way we measure something (such as single-cell proteomics), while it may be cutting edge and give you a tremendous amount of information, may not necessarily be the most robust and reliable method. If you are trying to go commercial, I would have some reservations about using new methods versus the tried-and-true methods in use now. We know that the FDA has seen and approved current methods, so they are a little bit more of a safe bet going forward.

However, I do agree that we need to look at new technologies and new ways to measure things. Specifically, going back to predicting clinical efficacy—if you measure direct killing or interferon-gamma alone, it won’t tell you how effective your CAR T-cell is in a patient. However, if you were able to access a new type of information that correlates better with clinical efficacy, I think you’d have a game changer.

Finally, what best practices would you recommend to cell and gene therapy developers with regard to potency assay development?

GZ: Solid upfront method development is necessary, even crucial. As Dirk said earlier, you probably don’t want to put all your eggs in one basket but start by looking at multiple methods and weed them out later. In my opinion, it’s really important to look at your implementation plan and the challenges that you will face wherever you are trying to bring this method, not just in the short term but in the longer term as well. Then understand what parameters are crucial in your longer-term plans for this method and what you need to assess early during your development.

It’s also important to consider what your final product is going to be. Is it going to be fresh or frozen cells? Certain methods tend to be more suitable for fresh, while others will be suitable for a frozen final product. It is up to you to understand those factors in advance and choose a method that is suited to you and your process.

The specification of a product can make or break an operation, so when developing a potency assay, keep an eye on your spec. Understand what the spec is going to look like and whether it will be wide or narrow. A narrow spec can be problematic simply because you will not be able to release many of the lots or patient products you have if results fall outside of it.

DW: You should definitely start thinking about assays early. We have often seen people focus more on the process and less on the assays. We need to think about all assays, and especially potency, early on.

Another aspect that has worked out well for some of the projects we have done is to have multiple assay ideas in the background. When the clinical trials started, we had one primary candidate for the assay that was run as a release assay, but we also had concomitant research underway testing multiple assays on actual patient samples (not just healthy donors) to see if any were better than the chosen assay. Any opportunity to characterize assays throughout development and clinical trials is very beneficial and should be used.

GZ: Establishing frequent feedback loops is very important. Just as Dirk was saying, test your methods as you go. Challenge them, find the edges of failure, and iterate as you move forward and gain a better understanding of your product and method. It is crucial that developers continue to improve their methods as they move through their entire pipeline.

Biographies

Giorgio Zenere is a dynamic professional known for his commitment to data-driven business innovation. Giorgio has a wealth of expertise in cell and gene therapy spanning both infectious diseases and hematological and solid tumors. Giorgio received a PhD in Cell and Gene Therapy, specializing in autologous and allogeneic CAR T-cells against HIV. He subsequently joined the cell and gene therapy biotech world, where his focus switched to preclinical and clinical drug discovery of CAR T-cells against hematological and solid tumors. Currently, Giorgio leads the CMC development, validation, and specification of novel analytical technologies for commercial CAR T-cell programs at Kite Pharma.

Dirk Windgassen is a results-oriented leader with deep expertise in process and assay development for molecular diagnostics and cell therapy. He currently leads the Miltenyi Bioindustry Assay Development group, driving innovation in San Jose, CA, USA.

Windgassen boasts an impressive track record of commercializing cutting-edge diagnostics. Previously, he led teams at Exact Sciences and Thermo Fisher Scientific, ensuring successful market launch of novel molecular assays. His experience extends to cell therapies, having played a key role in the BLA submission for Provenge, a pioneering cellular immunotherapy, at Dendreon.

Windgassen holds a PhD in bioprocess engineering from the University of Erlangen-Nuremberg and completed his doctoral thesis on ex vivo T cell expansion at Northwestern University.

Affiliations

Giorgio Zenere PhD MBA
CMC technical project lead,
Global QC Technology Innovation Team,
Kite Pharma

Dirk Windgassen PhD
Director of Analytical Development,
Miltenyi Biotec

Authorship & Conflict of Interest

Contributions: The named authors take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.

Acknowledgements: None.

Disclosure and potential conflicts of interest: Windgassen D is an employee of Miltenyi Biotec. Zenere G has been an employee of Kite Pharma since 2023 and was an employee of Precigen, Inc. from 2021–2022. Zenere G does not represent Kite Pharma or Gilead in any official capacity when providing content for this manuscript. Zenere G’s comments represent his personal professional opinion and do not represent the views of Kite Pharma or Gilead.

Funding declaration: The authors received no financial support for the research, authorship, and/or publication of this article.

Article & Copyright Information

Copyright: Published by Cell & Gene Therapy Insights under Creative Commons License Deed CC BY NC ND 4.0 which allows anyone to copy, distribute, and transmit the article provided it is properly attributed in the manner specified below. No commercial use without permission.

Attribution: Copyright © 2024 Miltenyi Biotec. Published by Cell & Gene Therapy Insights under Creative Commons License Deed CC BY NC ND 4.0.

Article source: Invited. This article is based on a podcast, which can be found here.

Revised manuscript received: Apr 29, 2024; Publication date: May 13, 2024.