Harmonising the Genome-Editing Community with the First Set of Standardised Terms. Interview: Samantha Maragh (NIST)
Samantha Maragh leads the genome-editing programme at NIST, a non-regulatory part of the US government that provides support to organisations through advancing standards and technology.
The NIST Genome Editing Consortium was formed with the goal of addressing the standards and measurements required to strengthen confidence in genome-editing technologies for research and commercial production. The consortium is open to organisations across the globe and involves experts from a range of sectors, including, but not limited to, academia, research institutes, the pharmaceutical industry, and government. The consortium now consists of over 40 members and is actively accepting more.
Lexicon, one of the three working groups within the NIST Genome Editing Consortium, is responsible for identifying common terms and definitions from across the world to give the genome-editing community a unified lexicon. A recent article by NIST announced the publication of the Genome Editing Vocabulary by the International Organization for Standardization (ISO). Here, Samantha discusses this news and the impact it could have on the community.
- Samantha, thank you for taking the time to speak with us. Genome editing is a relatively young field, but it is advancing quickly. What is the rationale behind developing standard terminology for this field?
That is correct; we are still at the beginning of this revolution, and the technology is developing rapidly. We find that the terminology used within genome editing often differs across the world, and that the description of studies can lack precision and accuracy. Even the terms genome editing and gene editing are often poorly defined and differ across studies, leading to confusion within the community: are they the same thing, or are they different?
It is important to be precise when describing studies, and we believe a standard set of terms and definitions will enable this. We also felt that the community would benefit greatly from having somewhere to go for support with whatever measurements and standards may be required. This way, projects can move forward with the greatest possible confidence in genome-editing technology.
- So, what role does NIST play in this?
Our role, as a non-regulatory part of the US government, is to provide standards, norms, and calibrations to whatever community or technology may need them. The development of these standards can help to strengthen projects by ensuring that the results obtained can be trusted when used to make an important decision.
Our goal at NIST with regard to this particular project was to identify a few key terms that could be agreed upon by different members of the consortium, and to facilitate the drafting and establishment of consensus definitions.
Speaking the same language
- And what were the results?
So, firstly, the list of terms is not meant to be an exhaustive list of every term used within the genome-editing field. We wanted to begin with something manageable, which was initially set at 20 key terms. We ended up with 42, including the proper definitions and differentiation of genome editing and gene editing, as well as some very general terms and some key technologies that existed at the time.
We began in 2018 with the NIST-facilitated genome-editing consortium. The consortium was open to organisations internationally and included universities, research institutes, pharmaceutical companies and more. Initially, the consortium involved discussions between myself and these experts in the field who undeniably knew what they were talking about. Yet, I found we were using words differently and talking past each other! When I raised this issue with the community, there was general agreement that errors in communication regarding genome editing were hindering the field. So, the Lexicon working group was formed to address this problem.
In the beginning, only consortium members were involved, brainstorming key terms and definitions, followed by a process of targeted expert feedback to try and capture and incorporate the knowledge base. The draft terms and definitions were then opened up to the public for feedback, so that anyone anywhere could access a Google form and leave comments on the draft definitions. Based on that feedback, we updated the terms where necessary.
We really wanted this project to be a globally harmonised effort. So, we took our findings to the International Organization for Standardization, or ISO. ISO is responsible for standardisation across countries and has different technical committees with members from various countries. The genome-editing community is global, with individuals across the world who felt they would find value in all speaking the same language. So, we seeded the work that had come through the consortium as the first draft of a genome-editing vocabulary standard, and then had it verified by all the countries that were part of the committee. The terms were modified where necessary, taking in the feedback of various experts. The result is this internationally harmonised ISO vocabulary standard. By speaking the same language, we can harmonise the community and aid the forward momentum of the technology.
Confidence through controls
- What is the current status for the development of physical controls that could be used to relieve concerns about the safety of CRISPR medicine?
There are still concerns about safety at the DNA level when using CRISPR medicines, and off-target effects remain a key area of concern. This is why we feel that the field could benefit from physical experimental controls.
Within the NIST genome-editing consortium, there is a working group that looks at physical requirements that can enhance confidence in the accuracy of measurements in labs. At the moment, we are in the process of developing prototype control materials. These are physical samples with specific properties that make them useful as controls for genome-editing analysis. We have deployed one initial design: a set of DNA and cell mixtures that were not edited, but have known DNA size variants compared to the human reference genome, mixed at a few different variant percentages across the set of mixture samples. These have now been deployed to consortium members in an interlaboratory study assessing sequence detection and limit of detection.
The idea is that these unedited DNA samples can be used as a control to assess the capability of a DNA detection technology to analyse the sequence changes in edited cells. We are currently analysing the data received from this interlab study, and are looking at performance before moving towards public release. We hope to see our very first prototype concept described in a publication this coming spring, including how one could make the samples themselves. This prototype could be used as a positive control in assay analysis to confirm that the desired gene has been edited. We are also developing samples of edited cells and their extracted DNA to be used as controls that will better model the type of complexity that can result from editing experiments. These are still in the development stage, and we hope to release more information on them towards the end of 2023.
It is important to be precise when editing genes. To do this, you need to ensure not only that the intended edit is present, but also that sequences you did not intend to edit are intact. So, assessing precision is a multi-part process, which may require different types of controls. We are still working to understand the precision and accuracy with which off-target edits can be nominated and detected, and we are currently collaborating with scientists outside the consortium to evaluate different assays for this.
- So, do you hope to one day have these prototypes as globally recognised controls for gene-editing technology?
Yes. Of course, NIST doesn't enforce or mandate that any project use a particular standard. The goal is to work with communities to make controls and standards available, which may be used when required. We envision that the materials we make will be made available to the public either via a detailed how-to protocol, or as physical samples available for purchase, either through NIST directly or through a partner organisation.
- So, when is the expected launch of that data?
I can only speak for the first kind of prototype, for which we are analysing results now; for that, I would say hopefully by early 2023.
The future is automation
- With gene-editing technology developing rapidly, I understand you spoke about the benefits of automation within the field. I'm curious, is there any update on that?
While I can't speak for each individual lab, I can say that our Biosystems and Biomaterials Division within NIST has built a prototype automation system capable of integrating cell culture and performing genome-editing experiments, including off-target assays, in an automated fashion.
I think that people are extremely valuable when it comes to gene editing, but when you're talking about hands-on precision and throughput, automation can enhance the technology and support the community. For instance, automation could be useful to alleviate the burden of often complicated, multi-step protocols.
In addition to performing measurements faster and allowing experiments to run in parallel, an automated system could allow for intentional variation of experimental conditions to try and understand which factors improve or inhibit the process, a step which may be cumbersome and potentially error-prone when performed by a human.
- Getting back to the standardised gene-editing terminology, what is the future focus area for NIST in this field?
Right now, we're in the process of evaluating additional terms and trying to identify what the next phase of Lexicon, or terminology work, should look like. We want members of the community to reach out with additional terms that they feel we need to add to the collection. We hope to have the first interlab study from the consortium released soon, so do be on the lookout!
We are also working on developing metadata norms for the community. These are sets of descriptions that capture how a genome-editing experiment was executed, as well as the data obtained from it. The aim is that, beyond using common terminology, key concepts about what should be reported on experiments become metadata norms of genome-editing practice for the community. We also hope to compile metadata, protocols, and full datasets reported from individual studies involved in the consortium.
We intend for consortium studies to report exactly what was done, along with their datasets, so the results can be meaningful and useful to the community. We're working towards a prototype of that general metadata norm for genome editing, and hoping for a test release sometime in late 2023.
Another area NIST is focusing on for the near future involves evaluating the quality of the gene-editing reagents that go into products. Especially in response to recent FDA guidelines, we feel this is an important aspect, and one we are now focusing on more actively. We want to address how individual studies can properly determine whether their input reagents follow FDA guidelines. So, this is an area that NIST is currently looking into, in hopes of further supporting studies and making it easier to follow FDA guidelines.
- Thank you, Samantha.
This interview has been condensed and edited for clarity.
Billie Pang is a Biochemistry graduate and freelance science writer based in Ireland.
To get more CRISPR Medicine News delivered to your inbox, sign up to the free weekly CMN Newsletter here.
Tags
Article · Interview · News · Policy · National Institute of Standards and Technology (NIST) · Standards