2023 US Bioassay Conference
Workshops and Special Interest Group Discussions

Workshop and Special Interest Group discussions are designed as interactive sessions about topics near and dear to the hearts of bioassay professionals. Four half-day workshops and five special interest group discussions are available after the podium presentations on Days 1 and 2. Each session will be facilitated and moderated. These are not typical formal presentations; instead, they are a mix of slides, questions, and/or short presentations to get you talking.
These active discussions on Days 1 and 2 are available to in-person attendees only. The moderators of each discussion group will give a brief summary on Day 3, available to all virtual and in-person attendees.

Day 1 Topics:

Day 2 Topics:

DAY 1: Tuesday, 14 March 2023 –
Workshops and Special Interest Group Discussions

MODERATORS:

Perceval Sondag, Sr. Director of Data Science, Novo Nordisk
Tara Scherder, SynoloStats

TOPICS:

SPC Workshop I (1:30 – 3:00 PM PST)

  • The Role of Ongoing Monitoring in the Analytical Procedure Lifecycle (ICH Q14)
  • Benefits of Trending vs Meeting Acceptance Criteria
    • Bioassay Parameters to Trend
  • Fundamentals of Control Charts
    • Chart Types for Continuous Variables
    • Estimate of Sigma
    • Nelson Rules
  • Nuances of Pharmaceutical Process and Measurement Data
  • Establishing Control Limits and Ongoing Monitoring
  • Analytical Procedure Capability
  • Statistical Considerations including Normality
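
For orientation, here is a minimal Python sketch (not part of the workshop materials, which use JMP) of the control-chart fundamentals listed above: Individuals-chart limits computed with the moving-range estimate of sigma (d2 = 1.128 for a span of 2) and a check of Nelson Rule 1. The potency values are hypothetical.

```python
import numpy as np

# Hypothetical relative-potency results from consecutive assay runs (illustration only)
potency = np.array([0.98, 1.02, 1.05, 0.97, 1.01, 0.99, 1.04, 1.22,
                    1.00, 0.96, 1.03, 0.98, 1.01, 1.06, 0.95])

# Sigma estimated from the average moving range (span 2); d2 = 1.128 for n = 2
moving_range = np.abs(np.diff(potency))
sigma_hat = moving_range.mean() / 1.128

center = potency.mean()
ucl = center + 3 * sigma_hat   # upper control limit
lcl = center - 3 * sigma_hat   # lower control limit
print(f"Center = {center:.3f}, limits = [{lcl:.3f}, {ucl:.3f}], sigma-hat = {sigma_hat:.3f}")

# Nelson Rule 1: any single point beyond three sigma from the center line
signals = np.where((potency > ucl) | (potency < lcl))[0] + 1
print("Nelson Rule 1 signals at run(s):", signals.tolist() or "none")
```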

_______________________________

SPC Workshop II (3:30 – 5:00 PM PST)

Note: this session will include a quick review of key concepts from the first workshop but will focus on hands-on activities using JMP software. In addition to software instructions for each analysis, all examples will include participant interpretation of the output.

  • Review of Key Concepts
  • Create an Individuals chart (2 ways) and Three-Way Control Chart
  • Identify and Change Sigma Estimate
  • Add Tests
  • Color Rows
  • Separate Chart By Phase
  • Add Specifications and Process Capability
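
To accompany "Add Specifications and Process Capability", here is a minimal sketch, again in Python rather than JMP, of how a long-term capability index such as Ppk is commonly computed once specification limits are attached to the charted data; the specification limits and data below are hypothetical.

```python
import numpy as np

# Hypothetical relative-potency results and two-sided specification limits (illustration only)
potency = np.array([0.98, 1.02, 1.05, 0.97, 1.01, 0.99, 1.04, 1.00, 0.96, 1.03])
lsl, usl = 0.80, 1.25                      # hypothetical lower / upper specification limits

mean = potency.mean()
sd = potency.std(ddof=1)                   # overall (long-term) standard deviation

# Ppk: distance from the mean to the nearer specification limit, in units of 3 sigma
ppk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"mean = {mean:.3f}, sd = {sd:.3f}, Ppk = {ppk:.2f}")
```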

MODERATORS: 

David Lansky, President, Precision Bioassays
TBD

TITLE: Nonlinear Mixed Models Reduce Bioassay Potency Bias with Narrowed Equivalence Test Confidence Intervals
TOPICS:

In most bioassays it is difficult to achieve confidence intervals narrow enough to pass similarity with equivalence bounds tight enough to limit bias to levels small enough to support sensible analytical target profile (ATP) performance. Nonlinear mixed models improve the precision of potency estimates and offer even larger benefits by narrowing confidence intervals on measures of (non-)similarity.
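
As a hedged illustration of the equivalence-test logic referred to here (a sketch, not the presenters' code), the Python fragment below declares similarity for a curve-parameter difference only when its 90% confidence interval lies entirely within pre-specified equivalence bounds; the estimate, standard error, degrees of freedom, and bounds are hypothetical.

```python
from scipy import stats

def equivalence_test(estimate, std_err, df, lower_bound, upper_bound, alpha=0.05):
    """Two one-sided tests (TOST) via the 90% CI: similarity passes only if the
    confidence interval for the parameter difference lies inside the bounds."""
    t_crit = stats.t.ppf(1 - alpha, df)            # one-sided critical value
    ci = (estimate - t_crit * std_err, estimate + t_crit * std_err)
    return ci, (lower_bound <= ci[0]) and (ci[1] <= upper_bound)

# Hypothetical difference in log upper asymptote (test - reference) from one assay
ci, similar = equivalence_test(estimate=0.02, std_err=0.015, df=20,
                               lower_bound=-0.06, upper_bound=0.06)
print(f"90% CI: ({ci[0]:.3f}, {ci[1]:.3f}); passes similarity: {similar}")
```

The narrower the interval, for example from an analysis that pools information across plates, the easier it is to stay inside fixed bounds, which is the point made above.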

Outline:

  1. Assumptions
    1. The fundamental assumption in bioassays used to estimate potency is that the reference and test samples contain biologically similar active materials.
      1. Assessing similarity:
        1. It is reasonable to expect statistical similarity.
        2. Further, we require statistical evidence of similarity (from equivalence tests based on fitting full [unconstrained] models) before proceeding to use reduced [constrained] models to estimate potency.
      2. Non-similarity has important practical consequences; for example, similarity failures are, reasonably often, the first evidence that materials after storage or from a changed manufacturing process are different from the intended product.
  2. Difference tests for similarity are a ‘Type III Error’ (the right answer to the wrong question); similarity should be assessed with equivalence tests.
  3. The most important goals in bioassay design and analysis are to limit, document, and monitor potential sources of bias in potency. Note that replication (within and across assays) will not reduce bias; it will only improve precision. Hence, it is strategically important to use assay designs and analyses that turn potential sources of bias into sources of variation. Randomization of sample and dilution positions is particularly appropriate here.
  4. Recent guidance about validation (e.g., USP <1033>) puts more emphasis on a lifecycle approach, use of an analytical target profile (ATP), and requirements for affirmative evidence that an assay system conforms with the ATP. In practice, performance measures (e.g., estimates of potency bias) must be a small fraction of the product specification, and interval estimates of bias must be within the ATP limits.
  5. Discussion of assumptions; possible questions:
    1. What is the difference between biological and statistical similarity?
    2. Some (e.g., BioPhorum) advocate estimating potency from unconstrained models; this approach conflicts with established theory and practice in bioassay, statistical modeling, and science.
  6. In some applications (e.g., choosing a new reference lot based on its potency), some require that a measure (e.g., potency) be within an equivalence region AND that there be no statistically significant difference. In what contexts is this idea useful? Is it relevant for similarity testing?
  7. Practical challenges preventing routine use of randomization.
  8. Why it is so important to use ATP-sensible equivalence limits for potency.
  9. An important current challenge is figuring out how to set equivalence test bounds.
    1. Guiding principles:
      1. Three (of four) methods proposed in the 2012 USP guidance are fundamentally assay-capability based, and this is clearly a bad idea.
      2. We want equivalence test bounds that are (somehow) based on the impact(s) of non-similarity.
    2. Anecdotal evidence (experience) for amounts of non-similarity that matter: a 6% shift in the maximum-response asymptote was found to be caused by a process contaminant present at about 5% (the product and company cannot be named). Similar stories from others.
    3. Graphical sensitivity analysis shows that modest shifts in the no-dose asymptote and response range, especially positively correlated shifts, are likely to cause bias in potency estimates.
    4. It is not practical to estimate the bias in potency due to various combinations of non-similarity in the laboratory, because we do not have, and cannot make, samples with known amounts of (various combinations of various types of) non-similarity and known potency. Hence, we use computer simulations to develop our understanding of the relationships between amounts of various types of non-similarity and bias in potency.
    5. Quantitative simulations that explore how various amounts of various types of non-similarity cause bias in estimates of potency are important sensitivity analyses. They show that surprisingly small amounts of some types of non-similarity cause appreciable potency bias (see the simulation sketch after this outline).
    6. Power considerations for similarity testing with equivalence tests (motivated by Ralf’s observation).
  10. Why mixed models for bioassay?
    1. Practical constraints in most labs doing cell-based bioassays lead to the use of cell culture plates and multichannel pipettes to prepare samples and plates. These constraints make most cell-based bioassays strip-unit (sometimes called split-block or strip-plot) designs. These designs are statistically complex, and fitting them with nonlinear models is not straightforward, yet there are compelling reasons to at least try these models.
      1. Statistical analyses that fail to consider the actual design of the experiment (or assay) are considered unreliable, with estimates of variation (e.g., confidence intervals) particularly suspect.
      2. Many bioassay systems have appreciable variation in curve shape from various sources (e.g., plate-to-plate, row-to-row, and column-to-column). Decomposing this variation into combinations of curve parameters and sources, which can sometimes be associated with specific steps in the assay process (e.g., variation in concentration associated with initial or serial dilutions), has several important benefits:
        1. It can narrow the focus of development efforts.
        2. It can provide powerful quantitative tools for assay monitoring.
        3. By partitioning variation away from the comparisons of primary interest (i.e., similarity and potency), the precision of these important comparisons improves.
  11. In practice, many bioassay systems cannot deliver confidence intervals narrow enough to reliably pass similarity, when ATP-sensible limits on bias drive the equivalence bounds, without the use of mixed models.
  12. There are several substantial challenges in using mixed models (a more general term than hierarchical models, which are not complex enough for common bioassay designs) or Bayesian models. These include:
    1. There are many potentially important sources of variation (combinations of parameters and sources) and not enough data in a single assay to reliably estimate them all, or even to make sound choices about which sources are important.
    2. Model selection is conceptually and computationally challenging even when using historical information about an assay system.
      1. It is sensible to choose a set of candidate random-effect models based on those that have been selected as the best fit for past instances of the same assay system.
      2. Choosing among a set of (historically based) candidate random-effect models can work reasonably well.
      3. It is wise to have the set of candidate random-effect models include additional random effects to help monitor for potentially problematic changes in the assay system.
      4. Choosing which random effects to include when fitting a run (an instance of an assay system) is hard enough that requiring a separate analysis for each test-reference pair is a poor strategy (it can create a conflict with the in-house regulatory team).
  13. Statisticians with the required skills and experience are scarce, and software choices are very limited.
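
As a concrete companion to the simulation-based sensitivity analysis described in item 9.5 above, here is a minimal Python sketch, not the presenters' code: a four-parameter logistic reference curve is paired with a test sample whose upper asymptote is shifted by 6% but whose true potency is 1.0, a constrained (parallel) model is fit, and the bias in estimated potency is summarized across simulated assays. The curve parameters, dilution series, noise level, and shift size are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def four_pl(x, lower, upper, log_ec50, hill):
    """Four-parameter logistic response curve."""
    return lower + (upper - lower) / (1.0 + np.exp(hill * (log_ec50 - np.log(x))))

def parallel_model(x_and_group, lower, upper, log_ec50, hill, log_rho):
    """Constrained (parallel) model: the test sample shares all curve parameters
    with the reference and differs only by a horizontal shift, log_rho."""
    x, is_test = x_and_group
    return four_pl(x * np.exp(is_test * log_rho), lower, upper, log_ec50, hill)

doses = 10.0 * 3.0 ** -np.arange(8)                 # 8-point, 3-fold dilution series
x = np.concatenate([doses, doses])
is_test = np.concatenate([np.zeros(8), np.ones(8)])

log_rho_hats = []
for _ in range(500):
    ref = four_pl(doses, 0.1, 2.0, np.log(1.0), 1.0)
    # Non-similar test sample: true potency 1.0, upper asymptote shifted up by 6%
    test = four_pl(doses, 0.1, 2.0 * 1.06, np.log(1.0), 1.0)
    y = np.concatenate([ref, test]) + rng.normal(0.0, 0.03, 16)
    popt, _ = curve_fit(parallel_model, (x, is_test), y,
                        p0=[0.1, 2.0, 0.0, 1.0, 0.0], maxfev=10000)
    log_rho_hats.append(popt[-1])

print(f"Geometric-mean estimated potency (true = 1.00): {np.exp(np.mean(log_rho_hats)):.3f}")
```

In a toy run like this, the asymptote shift typically pulls the estimated potency away from 1.0, illustrating the kind of relationship the equivalence bounds are meant to protect against.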

MODERATORS: 

Bassam Hallis, Interim Deputy Director, Research and Evaluation, UK Health Security Agency
Anton Stetsenko, Associate Director, Cell-Based Potency Assays, 4D Molecular Therapeutics

TOPICS:

A preparation that stimulates the body’s immune response against disease to protect the target population (a vaccine product) must be evaluated for its safety and efficacy like any other drug product. A robust and QC-friendly potency assay is therefore imperative for any vaccine manufacturer to ensure consistent functional activity between manufacturing campaigns and throughout the product’s shelf-life. Different types of vaccine products, such as inactivated, live-attenuated, mRNA, antigen subunit, recombinant, polysaccharide, conjugate, toxoid, and viral vector-based vaccines, present different challenges because of their specific mechanisms of action, which can be unknown in some cases, and their manufacturing processes.
Some questions about the choice of measurement system/readout, assay format, or design are very similar to those for non-vaccine bioassays, but large-scale preparation (millions of doses) and extended product stability demands reveal vaccine-specific challenges, including the stability and consistency of reference standards (e.g., different formulations with preservatives, lyophilization, or vialing with inert gas) and critical reagents, as well as analytical testing throughput and duration. There are a few stage-specific aspects to consider during the product life cycle, such as the scale of vaccine product and reference standard characterization, adjuvanticity, equivalency between animal models (real immunogenicity assessment) and in vitro potency methods such as immunoassays for neutralizing epitopes, and more. We would like to hear and discuss these and other questions from the vaccine audience during this workshop, including special cases such as bioassay development for therapeutic cancer vaccines. We welcome everyone who is interested in this topic!

MODERATORS: 

Nicole Abello, Research Associate, Seagen
Alayna Forler, Seagen

TOPICS:

  • Seagen QC-Potency validation of Hamilton STAR
    • General overview of transferring automated assays from Development to QC-GMP
  • Questions for audience
    • What do you think of when you hear the word “automation?”
      • Big instruments like the Hamilton robots
      • Semi-automated instruments/equipment
      • Integrated instruments/equipment that would normally be manually operated
    • What kind of automated instruments/equipment do you have in your lab?
    • What kind of automated instruments/equipment would you like to see in your lab?
    • What kind of hiccups have you run into when trying to onboard automation into your lab?
    • Do you have any questions for others who have successfully onboarded automation into their labs?

DAY 2: Wednesday, 15 March 2023 –
Workshops and Special Interest Group Discussions

MODERATORS:

Hans-Joachim Wallny, Exec Dir TPPM Scientific and Strategic Excellence, Novartis Pharma AG Switzerland
Ulrike Herbrand, Sc Dir GLBL in vitro Bioassays, Biologics Testing Solutions, Charles River Laboratories Germany GmbH

TOPICS:

This workshop will be held as a round table discussion and all participants are invited to bring their own questions and topics.

Typical topics might include, but are not limited to:

  • Mechanism of action (MoA)
  • Assays for mAbs with more than one MoA
  • Stage-appropriate choice of assays (e.g., binding versus functional)
  • Bridging studies
  • Biosimilarity
  • Automation
  • Readout technologies
  • Platform methods

MODERATORS: 

Mike Sadick, Senior Director of Analytical, Precision Biosciences
Kristin Clement, Principal Consultant, Bio-Val Consulting

TOPICS:

This interest group discussion will open with a few speakers, each speaking for 10–15 minutes about potency assays in the cell and gene therapy (CGT) space, before the floor opens for discussion. Potential topics for discussion include the following, but all participants are welcome to suggest their own!

  • Determining and harnessing MoA-reflective bioassays
  • Potency assays for Gene therapies vs those for Cell therapies
  • Drug substance intermediates vs drug substance vs drug product
  • At what level do you assess the response (transcriptional, protein expression, cellular activity [e.g., target cell killing])?
  • Use of interpolative assays or full-curve relative potency assays.
  • And more!

MODERATORS: 

Nancy Niemuth, Consultant, Act Two Consulting
Perceval Sondag, Sr. Director of Data Science, Novo Nordisk

TOPICS:

The first part of the workshop will be an open discussion of statistical topics related to bioassay. Participants are invited to bring questions and topics for discussion. Some starter questions/topics follow:

  1. When do you average the response values for dose replicates?
  2. What’s the difference between pure error and residual error? Do I need both? What if my software doesn’t calculate them?
  3. What error (pure or residual) do you use when computing the confidence interval for the relative potency?
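
To make starter questions 2 and 3 concrete, here is a minimal sketch assuming a straight-line fit on log dose with replicate wells at each dose (all numbers hypothetical): residual error is the scatter of the data around the fitted model, pure error is the scatter of replicates around their own dose-group means, and the difference between them is lack of fit. In a real bioassay the fitted model would usually be nonlinear (e.g., four-parameter logistic), but the decomposition works the same way.

```python
import numpy as np

# Hypothetical responses: 5 log-dose levels, 3 replicate wells each (illustration only)
log_dose = np.repeat(np.log10([1, 3, 10, 30, 100]), 3)
response = np.array([0.21, 0.19, 0.23,  0.45, 0.41, 0.44,  0.70, 0.74, 0.69,
                     0.95, 0.99, 0.93,  1.18, 1.22, 1.20])

# Residual error: scatter of the data around the fitted (here, straight-line) model
slope, intercept = np.polyfit(log_dose, response, 1)
ss_resid = np.sum((response - (intercept + slope * log_dose)) ** 2)
df_resid = response.size - 2                     # n minus number of model parameters

# Pure error: scatter of replicates around their own dose-group means, model-free
ss_pure = sum(np.sum((response[log_dose == d] - response[log_dose == d].mean()) ** 2)
              for d in np.unique(log_dose))
df_pure = sum(np.sum(log_dose == d) - 1 for d in np.unique(log_dose))

# Lack of fit: the part of residual error that replicate scatter cannot explain
ss_lof, df_lof = ss_resid - ss_pure, df_resid - df_pure

print(f"Residual MS:    {ss_resid / df_resid:.5f} (df = {df_resid})")
print(f"Pure-error MS:  {ss_pure / df_pure:.5f} (df = {df_pure})")
print(f"Lack-of-fit MS: {ss_lof / df_lof:.5f} (df = {df_lof})")
```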

In the second part of the workshop, we will consider more general questions of how statisticians and assay scientists work together.

  1. At what point(s) should a statistician be involved in the assay life cycle?
  2. What statistical analysis can/should assay scientists do without statisticians?
  3. What can statisticians do to set assay scientists up for success?
  4. What topics and forums should BEBPA use to advance sound statistical practice for bioassays?

MODERATORS:

Therese Segerstein, Portfolio Director, Svar Life Sciences AB
Laureen Little, Consultant, Quality Services LLC

TOPICS:

In today’s environment of tight deadlines, varied assay platforms, and complex mechanisms of action (MoA), the ability of a therapeutic drug sponsor to use commercially available kits and rare reagents is considered a boon. However, establishing a reliable vendor source and confirming consistent performance can, and often does, present real problems. The vendor of “your” kit is usually manufacturing the kit and/or components for a larger audience and therefore cannot completely ensure consistent kit performance on your product.
This workshop explores the problems encountered in sourcing commercially available kits and components and some of the best practices evolving in companies to de-risk this type of potency assay. Items for discussion are:

• Technical Issues and in-house testing required to release components/kits
• Best practices for monitoring of kit performance
• Regulatory requirements for use of vendor materials in a GMP QC test laboratory
• Is the use of multiple kit vendor sources allowed? Doable?
• What happens if I have to change vendor or a critical reagent?
• Product references and/or Product QC samples in outside kits.

A 30-minute presentation by an industry consultant and a scientist involved in developing and manufacturing kits will kick-start the workshop, giving you insight into end-user struggles and some of the best practices evolved by companies tasked with manufacturing consistent reagents. An audience survey will help us understand the issues we are all facing in our potency development teams, but please arrive with your own questions and discussion points. This workshop is likely to become a BEBPA-sponsored interest group, with a potential white paper in the future. Come and be part of the recommended solutions for this problem.

MODERATORS:

Laureen Little, Quality Services, LLC
Laura Viviani, Humane Society International

TOPICS:

This workshop aims to facilitate discussion among participants on the scientific, business, and regulatory opportunities and difficulties of developing and using animal assays during product development and commercial release. Discussion topics include: optimizing animal assays, how to embrace the three “Rs” (replacement, reduction, and refinement) in a commercial environment, and switching from in-vivo to in-vitro methodologies for biopharmaceutical pre-clinical, production, or release testing.
The discussion will be guided by questions from the audience and may include:
• Do we really need animal assays during development? If so, when and why?
• What do animal assay validation strategies look like?
• What are the most interesting examples, successful or not, of switching from in-vivo to in-vitro methods, and why?
• What lessons learned could be drawn that would be useful for methods in development?
• What is the role of the scientific community and other actors, such as the pharmaceutical industry, academia, regulators, and policy makers, in making the switch completely away from animal assays a straightforward process?

TITLE: Collaboration not Competition will Deliver Therapies for Disease X

MODERATORS:

Luc Gagnon, VP Vaccine Sciences, Nexelis, a Q2 Solutions Company
Bassam Hallis, Interim Deputy Director, Vaccine Development and Evaluation Centre, UK Health Security Agency

Details:

This workshop will cover lessons learned during the COVID-19 pandemic response within the biopharmaceutical industry, the public sector, and not-for-profit organisations.
During the pandemic several laboratories (Nexelis, a Q2 Solutions Company/UKHSA/CEPI) worked with various sponsor organizations to run comparative biomarker tests. This involved establishing common protocols, reagents, reference materials and transferring methods around the world. Many technical issues were encountered, and solutions found at lightning speed. This workshop will discuss some of the hurdles, solutions and best practices developed during this time and explore how these solutions can inform us on how to better transfer and support assays in our global industry.

The following questions and more will be discussed during the workshop:

1. What were the complementary capabilities in assay development or vaccine deployment that helped you realize the value in a partnership between Nexelis and UKHSA?
2. What did you learn from each other and the other’s organization in the earlier years of the Nexelis/ UKHSA partnership, and specifically in relation to your early response to the COVID-19 pandemic? Have you adapted your ways of working or best practices as a result, and if so how?
3. Could you share any insights into any initial challenges you faced at the onset of working within the CEPI network and how you would address these in the future in a similar scenario?
4. What do you think were the strengths and weaknesses of the broader industry response to the COVID-19 pandemic, with a specific focus on the research and development of effective vaccines?
5. How would you describe the role of your organizations in preparing for/ responding to Disease X?
6. How does your work with CEPI to develop a library of standardized assays for Disease X differ from developing assays for sponsor programs?
7. What are the differences in developing standardized assays for an active infectious disease (e.g., malaria, RSV, influenza) versus viruses we haven’t yet encountered or new viruses from a known viral family?