Unlocking the Future of Biotech: A Glimpse into SNIPR BIOME’s Journey and Denmark’s Growing Ecosystem

SNIPR BIOME, a clinical-stage biotechnology company established in 2015, is a testament to the power of diversity and innovation in the Danish biotech industry. With 50 employees representing 23 different nationalities, SNIPR BIOME applies Nobel Prize-winning CRISPR-Cas technology in gene therapies targeting the human microbiome. In 2019, SNIPR BIOME secured 50 million USD in Series A funding, the largest in Scandinavia, enabling us to expand from four to 50 team members. Our success is deeply rooted in Denmark’s international culture, which has fostered a thriving biotech ecosystem. In this article, we will examine four key elements of the Danish system which have contributed to SNIPR BIOME’s accomplishments and explore some potential areas for improvement, with an emphasis on positive change.

This article was originally published in Danish at Science Report as part of a series about the Danish life science industry. It was written by Eric van der Helm, VP of Scientific Affairs, Bioinformatics and Automation, and Julie Tranberg Rasmussen, VP of People & Organization.

1. A robust ecosystem supporting cutting-edge research

SNIPR BIOME’s strategic location in Østerbro, close to academic institutions and hospitals, gives the company exposure to the latest scientific developments. The Danish biotech landscape features numerous initiatives that promote business growth, such as the BioInnovation Institute (BII), which serves as an accelerator and incubator for biotech ideas, and CPH Labs, where ideas can find lab space. Events like the Danish Tech Challenge and TechBBQ also support life science startups, while funding bodies like Innovationsfonden and Industriens Fond contribute to early-stage projects. SNIPR BIOME has benefited greatly from the Greater Copenhagen Microbiome Signature project, organized by Copenhagen Capacity, Invest in Skåne, and the Medicon Valley Alliance. This initiative unites universities, small biotech startups, and large pharmaceutical companies to foster collaboration and innovation in microbiome research. Such a vibrant and collaborative environment creates fertile ground for scientific breakthroughs and technological advancements.

2. Prioritizing fundamental research

SNIPR BIOME’s technology is built upon a solid foundation of basic scientific research. CRISPR systems were initially observed in the 1990s by a researcher studying archaea in salt marshes. CRISPR was later identified as a natural immune system for bacteria, which eventually led to the genome engineering tool that earned the 2020 Nobel Prize. CRISPR’s accidental discovery, much like Alexander Fleming’s serendipitous finding of penicillin, highlights the importance of continued investment in basic science to fuel breakthroughs and foster innovation.

3. Expanding access to Danish capital

Denmark excels in biotech investment and is second only to Switzerland within Europe on an investment-per-capita basis. SNIPR BIOME’s Series A funding round, backed by the Danish-based Lundbeckfonden BioCapital and NEFO as well as the European-based EQT and Wellington Partners, demonstrates the strength of the European ecosystem. However, a large gap in relative funding remains between Denmark and the US, so additional efforts to enhance both the volume of and access to capital will be essential for bolstering Denmark’s competitive edge in the biotech sector and encouraging growth and innovation.

4. Nurturing and attracting biotech talent

SNIPR BIOME’s diverse and talented workforce of 50 employees from 23 different nationalities reflects the company’s commitment to fostering an inclusive and dynamic team. Denmark’s relatively small size presents certain challenges, particularly when it comes to recruitment into highly specialized fields such as biotechnology, where competition for top talent is fierce. SNIPR BIOME actively recruits talent from around the world, and we recognize the importance of making Denmark an even more attractive destination for skilled professionals. Efforts to improve employee incentive programs, such as the regulated warrant programs, which are currently more complex and less employee-friendly than those in the US, could add to Denmark’s appeal for skilled individuals.

Initiatives such as the Researcher Tax Scheme are critical tools for attracting international talent; without it, many would be deterred by Denmark’s high tax levels. Additionally, the rigid Danish system for residence permits and citizenship needs to be softened in order to retain talent in Denmark. Many of SNIPR BIOME’s non-EU employees are left frustrated by the time-consuming and complex processes in place.

While SNIPR BIOME has had success in attracting international talent from universities, it is more challenging to attract experienced international employees from the industry, especially those with families. Factors such as tax levels, access to international schools, and support in integrating into Danish society become crucial in these cases. By addressing these challenges, Denmark can continue to attract and retain top biotech talent, ensuring continued growth and innovation in the industry.

In conclusion, Denmark is in an excellent position to fuel the next wave of biotech innovation. By embracing continuous improvement and maintaining a positive outlook, the Danish biotech ecosystem can continue to succeed and grow.


Behind the paper: Four novel anti-CRISPR proteins found using synthetic biology

Last week our article on new anti-CRISPR proteins was published in Cell Host & Microbe. Anti-CRISPRs are small proteins that inhibit the activity of CRISPR-Cas. These days a lot of research is focused on finding more active CRISPR-Cas systems, so why would anyone want to reduce CRISPR-Cas activity?

A big challenge with CRISPR-Cas is that it cleaves DNA where it is not supposed to, known as the “off-target effect”. This means CRISPR-Cas can introduce unwanted mutations. Such unwanted mutations are just one of the many concerns about the recently edited “CRISPR babies”, as they could pose unknown dangers to human health.

A solution to prevent off-target effects is to temporarily turn CRISPR-Cas activity off. This can be done using proteins that inhibit CRISPR-Cas activity. Some of these proteins mimic the DNA segment CRISPR-Cas is supposed to cut, while others prevent Cas9 from changing conformation. Until now, only about a dozen anti-CRISPR proteins were known, and each is specific for its CRISPR-Cas type (see the figure below). So it would be very useful to find more proteins that can inhibit CRISPR-Cas.

On the bottom, the latest classification of CRISPR-Cas systems modified from Makarova, and on the top, the number of anti-CRISPR families found to date as tracked by the anti-CRISPR database led by Joe Bondy-Denomy. In green, the anti-CRISPRs we found against Cas9. CRISPR-Cas systems are first divided into two classes based on which enzymes they use to cleave DNA or RNA sequences. In Class 1, this is a complex of several enzymes; in Class 2, a single but much larger enzyme is responsible for the final action. The layout of the tree is roughly based on the most conserved CRISPR-Cas gene, Cas1, and colored in deep purple.

The Idea

Back in 2015 I was working on detecting small-molecule HIV protease inhibitors in metagenomic libraries using genetic circuits. Around that time, genetic circuits were described that had a CRISPR-Cas output. Back then, CRISPR-Cas was not widely used in genetic circuits. We combined these concepts to find genes in metagenomic libraries that can inhibit a CRISPR-Cas system. Theoretically, it was possible because anti-CRISPRs against CRISPR-Cas types I-F and I-E had been found in the ubiquitous bacterium Pseudomonas aeruginosa. However, no anti-CRISPRs were known against the widely used type II system, better known as CRISPR-Cas9; the first of those were only reported in December 2016.

Until then, anti-CRISPRs had been found computationally or by cloning out phage genes individually. We sought to perform the anti-CRISPR search in a high-throughput manner and without the prior knowledge that is needed for in silico prediction. The bacterial selection system we used contains two components, as outlined in the figure below.

The primary component is a genetic circuit with a Cas9 protein and a guide RNA. This circuit cuts an antibiotic resistance gene on a plasmid, rendering the bacterial cells susceptible to antibiotics when no anti-CRISPR protein is present. The second component in the selection system is an anti-CRISPR source, in this case a metagenomic library. As input material we used metagenomic libraries constructed from fecal samples from humans, cows and pigs, as well as a metagenomic library from a soil sample. I previously wrote how similar systems are used to find new antibiotic resistance genes or vitamin transporters.
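To make the logic of this selection concrete, here is a minimal toy sketch in Python (not the published protocol): a clone only survives the antibiotic if the Cas9 circuit fails to destroy the resistance plasmid, or if the insert rescues the cell in some other way (the false positives discussed further below). The insert labels and numbers are purely illustrative.

```python
# Toy model of the selection logic, not the actual experimental protocol.
from dataclasses import dataclass
import random

@dataclass
class Clone:
    insert: str  # hypothetical label for what the metagenomic fragment encodes

def survives_selection(clone: Clone) -> bool:
    if clone.insert == "anti_crispr":      # Cas9 is inhibited, resistance plasmid stays intact
        return True
    if clone.insert == "cas9_repressor":   # false positive: Cas9 is never expressed
        return True
    if clone.insert == "resistance_gene":  # false positive: the insert confers resistance itself
        return True
    return False                           # Cas9 cuts the plasmid, the cell dies on antibiotic

# Toy library: mostly neutral fragments, a few interesting ones
library = [Clone(random.choice(
    ["neutral"] * 97 + ["anti_crispr", "cas9_repressor", "resistance_gene"]))
    for _ in range(10_000)]

survivors = [c for c in library if survives_selection(c)]
print(f"{len(survivors)} of {len(library)} clones survive and go on to sequencing")
```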

The discovery workflow starts with fecal samples from humans, cows and pigs, and a soil sample. It ends with a list of potential anti-CRISPR genes. The proteins encoded by the potential anti-CRISPR genes were then validated using various experimental methods.

The Experiments

After a lot of tweaking, colonies appeared that were able to dodge the selection system (and potentially Cas9 activity). Instead of sequencing individual colonies with Sanger sequencing, we used Nanopore sequencing. For Nanopore sequencing we used our previously published poreFUME protocol, which allowed us to multiplex the sequencing of colonies and various DNA sources.

The resulting sequences did not only contain hits that inhibit Cas9 activity, but also antibiotic resistance genes and genes that turned off expression of Cas9. After removing these false-positive genes, the DNA was re-cloned into an E. coli expression vector to validate whether the sequences actually inhibit Cas9. The individual proteins were tested in two ways: by assaying whether Cas9 can still cleave DNA in vitro, and by testing whether the potential anti-CRISPR protein binds Cas9.

We visualized the cleavage assay by running the resulting reaction on a gel and checking whether the DNA was still intact, which would indicate that an anti-CRISPR protein was inhibiting Cas9. If the DNA was cut into two pieces, we concluded that the protein did not prevent Cas9 from cutting the DNA.

In the second experiment we tested with biolayer interferometry whether the potential anti-CRISPR protein can bind to Cas9. Together these experiments led us to believe that we had found four new anti-CRISPR proteins, which we named AcrIIA7, AcrIIA8, AcrIIA9 and AcrIIA10 (Acr = anti-CRISPR, IIA because Cas9 is a type II-A system, and 7-10 for the chronological order in which they were found, following the proposed naming convention).

We unleashed a whole suite of computational analyses to find out how widespread these new anti-CRISPR proteins are. For example, we expected that the anti-CRISPR genes would co-occur with the CRISPR-Cas systems they inhibit; however, there appeared to be no specific correlation between the two. A possible explanation could be that anti-CRISPR genes are often located on mobile elements (such as plasmids and phages) and move through the population more rapidly than CRISPR-Cas systems. We did, however, find homologs of the anti-CRISPRs in seven different phyla, including Firmicutes, Proteobacteria, Bacteroidetes, Actinobacteria, Cyanobacteria, Spirochaetes, and Balneolaeota, with high sequence similarity. This hints that the anti-CRISPR genes were moved recently by horizontal gene transfer.
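As a rough illustration of the kind of post-hoc summary involved (this is not the actual analysis code from the paper), a BLAST-style hit table can be grouped by phylum to see how widespread each homolog is; the tiny inline table and identity threshold below are made up.

```python
# Summarize in how many phyla each anti-CRISPR has a high-identity homolog.
import pandas as pd

hits = pd.DataFrame([
    # query       phylum            percent identity
    ("AcrIIA7",  "Firmicutes",      92.0),
    ("AcrIIA7",  "Bacteroidetes",   88.5),
    ("AcrIIA8",  "Proteobacteria",  95.1),
    ("AcrIIA9",  "Actinobacteria",  90.3),
    ("AcrIIA10", "Spirochaetes",    86.7),
], columns=["query", "phylum", "pident"])

# Keep only high-identity hits, then count distinct phyla per anti-CRISPR
widespread = (hits[hits.pident >= 80]
              .groupby("query")["phylum"]
              .nunique()
              .sort_values(ascending=False))
print(widespread)
```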

Overall this was a very exciting research project, though we never optimized the selection system to find all of the possible anti-CRISPRs, nor did we analyze all the resulting clones extensively. We were initially just interested in whether this setup could actually work to find new anti-CRISPR genes. I’m confident the anti-CRISPRs found thus far will eventually find their way into mainstream use in CRISPR-based therapies and applications. Next, it will be exciting to look into more metagenomic libraries against all the other CRISPR types for which we don’t have a single anti-CRISPR yet. I’m sure they are out there!

Uribe, R. V., van der Helm, E., Misiakou, M., Lee, S., Kol, S., & Sommer, M. O. A. (2019). Discovery and Characterization of Cas9 Inhibitors Disseminated across Seven Bacterial Phyla. Cell Host & Microbe, 25, 1–9. https://doi.org/10.1016/j.chom.2019.01.003





New paper out: functional metagenomics powered by synthetic biology

Why do functional metagenomics and synthetic biology (or synbio) make such an interesting combination? This week, our new article in Nature Chemical Biology, ‘The evolving interface between synthetic biology and functional metagenomics’, sheds light on how progress in synthetic biology can advance, and already has advanced, the field of functional metagenomics.

We are facing a growing and aging world population, and mankind thus needs new drug molecules and new ways to produce nutrients. Instead of relying on chemical synthesis, drugs and nutrients can be sustainably produced by modified bacteria. Moreover, most of these interesting molecules are already produced by billions of bacteria in the environment. Unfortunately, it is difficult to grow most types of bacteria in a laboratory, and it is therefore not possible to harness their useful capabilities directly. However, bacteria carry all the information needed to produce these valuable molecules in their DNA. Using methods known collectively as ‘functional metagenomics’, the DNA of these bacteria can be recovered from the environment and expressed in host bacteria that can be cultivated in a lab. This allows us to make use of the capabilities of the billions of bacteria present in the environment without actually growing them, by directly utilizing their DNA instead.

Construction of a metagenomic library. Environmental DNA is extracted, purified, fragmented and cloned into a shuttle vector. The library of plasmids is then transformed into an expression host such as Escherichia coli. Finally, the resulting clones can be analyzed according to their genotype and/or phenotype.
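As a toy illustration of the construction steps in the caption above (a sketch under simplified assumptions, not a real protocol), one can think of library construction as shearing long environmental DNA into random fragments and pairing each fragment with a vector record:

```python
# Toy sketch of metagenomic library construction; sizes and sequence are invented.
import random

environmental_dna = "".join(random.choice("ACGT") for _ in range(50_000))

def shear(dna: str, mean_insert: int = 2_000) -> list[str]:
    """Split the DNA into random fragments roughly around the mean insert size."""
    fragments, pos = [], 0
    while pos < len(dna):
        size = random.randint(mean_insert // 2, mean_insert * 2)
        fragments.append(dna[pos:pos + size])
        pos += size
    return fragments

# "Clone" each fragment into a (hypothetical) shuttle vector record
library = [{"vector": "shuttle_vector", "insert": frag} for frag in shear(environmental_dna)]
mean_len = sum(len(c["insert"]) for c in library) // len(library)
print(f"Library of {len(library)} clones, mean insert {mean_len} bp")
```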

Which kind of metagenomics should be used?

In practice, there are two ‘metagenomic’ approaches: sequence-based approaches (where environmental DNA is sequenced and a function is assigned computationally) and function-based approaches (where the environmental DNA is transformed into a host bacterium and the genes are expressed and interrogated). In our article, we focused on the functional approach, specifically interrogating metagenomic DNA functionally using a genetic circuit.

The term “metagenomics” can refer to many different techniques and procedures. In our new article, we focused on using genetic circuits to functionally mine a metagenomic library.

In our publication, we first surveyed the ways in which genetic circuits have been used in the recent past to interrogate metagenomic libraries. Though scientists have been quite creative, researchers need to move from ‘screening’ methods to more high-throughput interrogation methods. ‘Screening’ methods require researchers to painstakingly examine each bacterial colony for, for example, a visual change associated with the production of a compound. In more effective high-throughput methods, researchers couple the production of a compound of interest to the survival of the cell. The spontaneous death of cells lacking the target compound replaces the labor-intensive process of scrutinizing massive numbers of clones. My colleague Hans has utilized this approach previously to identify new vitamin transporters, as outlined in the illustration below.

Example of a genetic circuit consisting of a riboswitch coupled with two selectable markers, which can be used to mine a metagenomic library for vitamin B1 transporting or producing genes. The genetic switch can also be formalized as an AND-gate with vitamin B1 as the input and cell survival as the output.
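Viewed as in the caption above, the selection behaves like a simple AND gate. Here is a minimal sketch of that logic (my own simplification, not code from the paper): the riboswitch-controlled markers are only expressed when intracellular vitamin B1 is present, so under selection only clones whose insert delivers B1 survive.

```python
# Simplified AND-gate view of the riboswitch selection circuit.
def cell_survives(intracellular_b1: bool, selection_applied: bool) -> bool:
    marker_expressed = intracellular_b1   # riboswitch switches the markers ON only with B1 bound
    if not selection_applied:
        return True                       # no selection pressure: every clone grows
    return marker_expressed               # under selection: only B1-positive cells live

# Truth table of the gate under selection
for b1 in (False, True):
    print(f"B1 present: {b1} -> survives selection: {cell_survives(b1, True)}")
```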

Insights for improving genetic selection circuits can also be obtained from biocontainment research, as it is notoriously difficult to perform experiments in which all cultured cells commit suicide. For example, the research group led by Farren Isaacs showed how multi-layered circuits can aid in this, and a recent review from the Collins lab summarizes the latest advances in biocontainment systems.

We anticipate that the expansion of synthetic biology tools, such as automated circuit design and computational design of proteins, will usher in greater efficiencies in the mining of functional metagenomics libraries. These advances in functional metagenomics and synthetic biology are already demonstrating remarkable potential in industrial and medical applications. Our full paper, available at Nature Chemical Biology, goes into more depth on all the previously constructed genetic circuits and new technologies that will continue to propel the field forward:

van der Helm, E., Genee, H. J., & Sommer, M. O. A. (2018). The evolving interface between synthetic biology and functional metagenomics. Nature Chemical Biology. https://doi.org/10.1038/s41589-018-0100-x

Other resources

Gallagher, R. R., Patel, J. R., Interiano, A. L., Rovner, A. J., & Isaacs, F. J. (2015). Multilayered genetic safeguards limit growth of microorganisms to defined environments. Nucleic Acids Research, 43(3), 1945–54. https://doi.org/10.1093/nar/gku1378 [-]

Genee, H. J., Bali, A. P., Petersen, S. D., Siedler, S., Bonde, M. T., Gronenberg, L. S., Sommer, M. O. A. (2016). Functional mining of transporters using synthetic selections. Nature Chemical Biology, 12, 1015–1022. https://doi.org/10.1038/nchembio.218 [-]

Lee, J. W., Chan, C. T. Y., Slomovic, S., & Collins, J. J. (2018). Next-generation biocontainment systems for engineered organisms. Nature Chemical Biology, 14(6), 530–537. https://doi.org/10.1038/s41589-018-0056-x [-]

Nielsen, A. K., Der, B. S., Shin, J., Vaidyanathan, P., Densmore, D., & Voigt, C. A. (2016). Genetic circuit design automation. Science, 352(6281), 53–63. https://doi.org/10.1126/science.aac7341 [$]

Taylor, N. D., Garruss, A. S., Moretti, R., Chan, S., Arbing, M., Cascio, D., Raman, S. (2016). Engineering an allosteric transcription factor to respond to new ligands. Nature Methods, 13(2), 177–183. https://doi.org/10.1038/nmeth.3696 [-]

Note: parts of this blogpost are sourced from my PhD thesis


Biosecurity and synthetic biology: it is time to get serious

This blog post appeared previously on PLoS Synbio 

Last month, the SB7.0 conference attracted around 800 synthetic biology experts from all around the world to Singapore. I was attending as part of the SB7.0 biosecurity fellowship, together with 30 other early-career synthetic biologists and biosecurity researchers. The main goal of the conference was to start a dialogue on biosecurity policies geared specifically towards synthetic biology.

As Matt Watson from the Center for Health Security points out on his blog, the likely earliest account of biological warfare is the description of the 1346 attack on the Black Sea port of Caffa, found in an obscure memoir written in Latin. A lot has changed since then, and biosecurity is now a subject of mainstream media coverage — as exemplified by the recently published Wired article “The Pentagon ponders the threat of synthetic bioweapons.”

Defining biosafety and biosecurity

It is important to first get the scope right; terms like biosecurity and biosafety are sometimes used interchangeably, but there is a meaningful difference. In a nutshell, ‘Biosafety protects people from germs – biosecurity protects germs from people’, as simplified during a UN meeting.

  • Biosafety refers to the protection of humans and the facilities that deal with biological agents and waste: this has also traditionally encompassed GMO regulations.
  • Biosecurity is the protection of biological agents that could be intentionally misused.

Although the two terms are often used somewhat interchangeably, in the remainder of this blog post I focus on biosecurity, as this mainly involves the human component of policy making.

SB7.0 kickoff with Drew Endy and Matthew Chang

During the conference, Gigi Gronvall from the Center for Health Security illustrated a prime example of biosecurity from a 2010 WHO report on the Variola virus, the smallpox pathogen: “nobody anticipated that […] advances in genome sequencing and gene synthesis would render substantial portions of [Variola virus] accessible to anyone with an internet connection and access to a DNA synthesizer. That “anyone” could even be a well-intentioned researcher, unfamiliar with smallpox and lacking an appreciation of the special rules that govern access to [Variola virus] genes.”

The take home lesson? What might not look like a security issue now, may soon become a threat!

Biorisks are likely terrorism or nation-state driven

What are the most likely sources that pose a biorisk? According to Crystal Watson, the following risks demand scrutiny:

  • Natural occurring strains (e.g., the recent Ebola outbreak)
  • Accidental release (e.g. the 1979 accidental release of anthrax spores by the Sverdlovsk-19a military research facility in the USSR)
  • Terrorism (e.g., the 2001 anthrax-spore contaminated letters in the US)
  • State bioweapons (e.g., the US biological warfare program ultimately renounced by President Nixon)

From a biosecurity perspective, it is interesting to note which of these risks are most imminent. The same authors recently published a perspective in Science that describes the actors and organizations that pose a bioweapons threat. It describes the results of a Delphi study of 59 experts with backgrounds broadly ranging from biological and non-biological sciences, medicine, public health, and national security to political science, foreign policy and international affairs, economics, history, and law.

Although the results varied considerably, terrorism was rated as the most likely source of biothreats because of the “rapid technological advances in the biosciences, ease of acquiring pathogens, democratization of bioscience knowledge, information about nonstate actors’ intent, and the demonstration of the chaos surrounding the Ebola epidemic in West Africa in 2014”. The next most likely biorisk source would be a nation-state actor, because of the “technological complexities of developing a bioweapon, the difficulty in obtaining pathogens, and ethical and/or cultural barriers to using biological weapons.”

According to the expert panel, some threats are particularly likely to impact society:

  • biological toxins (e.g., ricin, botulinum toxin)
  • spore-forming bacteria (e.g., Bacillus anthracis¸ which causes anthrax)
  • non–spore-forming bacteria (e.g., Yersinia pestis, which causes plague)
  • viruses (e.g., Variola virus, which causes smallpox)

This list essentially covers everything that has been weaponized — only fungi, prions, and synthetic pathogens were not predicted to become weaponized in the next decade.

Now that the threats are defined: how to counteract them? One of the safeguards that has been put in place is the Australia Group, “an informal forum of countries which, through the harmonisation of export controls, seeks to ensure that exports do not contribute to the development of chemical or biological weapons.” This organization seeks to develop international norms and procedures to strengthen export controls in service of chemical and biological nonproliferation aims. However, as Piers Millett from biosecu.re pointed out, these tools do not on their own adequately address our current needs for properly assessing and managing risks. For example, under the Australia Group arrangements you need an export license to export the Ebola virus itself or a sample of prepped Ebola RNA, but you do not need one if you just want to download the sequence of the genome. In other words, access restriction is an inadequate biosecurity failsafe.

Transmission electron micrograph of a smallpox virus particle (CC BY-SA 4.0 by Dr. Beards)

Why resurrect the extinct horsepox virus?

Biosecurity is directly related to the challenge posed by the dual use of research: it creates a risk while also providing insights to mitigate that risk. A particularly illustrative example is the recent synthesis of the horsepox virus, which is from the same viral genus as smallpox but is apparently extinct in nature. Last year, the lab of virologist David Evans at the University of Alberta in Canada reconstituted the virus. Synthesizing and cloning together almost 200 kb of DNA is not exceptionally challenging today, but it just hadn’t been attempted before for this family of viruses.

But why did Evans and his team set out to synthesize the horsepox virus in the first place? There were several motivating objectives:

  1. the development of a new smallpox vaccine
  2. the potential use of the horsepox virus as a carrier to target tumors
  3. a proof-of-concept for synthesizing extinct viruses using ‘mail-order DNA.’

Evans broadly defended his actions in a recent Science article: “Have I increased the risk by showing how to do this? I don’t know. Maybe yes. But the reality is that the risk was always there. The world just needs to accept the fact that you can do this and now we have to figure out what is the best strategy for dealing with that.” Tom Inglesby from the Center for Health Security reasoned that the proof-of-concept argument does not justify the research as “creating new risks to show that these risks are real is the wrong path.”

How easily could the horsepox synthesis study be misused? Evans notes that his group did “provide sufficient details so that someone knowledgeable could follow what we did, but not a detailed recipe.” Unfortunately, there are no international regulations that control this kind of research, and many scholars argue it is now time to start discussing this on a global level.

Paul Keim from Northern Arizona University has proposed a permit system for researchers who want to recreate an extinct virus. And Nicholas Evans from the University of Massachusetts suggests that the WHO create a sharing mechanism that obliges any member state to inform the organization when a researcher plans to synthesize viruses related to smallpox. Both options are well-intentioned. However, anyone can already order a second-hand DNA synthesizer on eBay, and countless pathogenic DNA sequences are readily available, so these proposals do not contribute significantly to biosecurity. But while these rules would increase the amount of red tape for researchers, they would also contribute to the development of norms and cultural expectations around acceptable practice of the life sciences. The bottom line, which is not novel but very much worth restating, is that scientists should constantly be aware of what they create as well as any associated risks.

The future of synthetic biology and biosecurity

Synthetic biology has only recently been recognized as a mature subject in the context of biological risk assessment — and the core focus has been infectious diseases. The main idea, to build resilience and a readiness to respond, was reiterated by several speakers at the SB7.0 conference. For example, Reshma Shetty, co-founder of Ginkgo Bioworks, explained that in cybersecurity, we didn’t really think a lot about security issues until computers were already ubiquitous. In the case of biosecurity, we are already dependent on biology [with respect to food, health etc.], but we still have an opportunity to develop biosecurity strategies before synthetic biology is ubiquitous. There is still an opportunity to act now and put norms and practices in place because the community is still relatively small.

Another remark from Shetty was also on point: “We are getting better at engineering biology, so that also means that we can use this technology to engineer preventative or response mechanisms.” For example, we used to stockpile countermeasures such as vaccines. With biotechnological advances, it is now possible to move to a rapid-response model, in which we couple the detection of threats as they emerge via public health initiatives with the development of custom countermeasures using, in part, synthetic biology approaches. Shetty envisioned that foundries — with next-generation sequencing and synthesis capabilities — are going to play a key role in such rapid responses. Governments should be prepared to support and enable such foundries to rapidly manufacture vaccines for smallpox or any other communicable disease, on demand. While it is not clear that the details of these processes and the countermeasures themselves can be made public and still maintain their effectiveness, the communication and decision-making processes should be transparent.

Elizabeth Cameron, Senior Director for Global Biological Policy and Programs at the Nuclear Threat Initiative, similarly warned that “if scientists are not taking care of biosecurity now, other people will start taking care of it, and they most likely will start preventing researchers from doing good science.” A shrewd starting point for this development was noted by Matt Watson: “one reason we as a species survived the Cold War was that nuclear scientists—on both sides of the Iron Curtain—went into government and advised policymakers about the nature of the threat they faced. It’s imperative for our collective security that biologists do the same.”

In other words, it is time to start having these serious discussions about imminently needed biosecurity measures during events or conferences such as SB7.0.


Highlights of a two-day nanopore conference

This week Oxford Nanopore Technologies organized the third London Calling conference, gathering around 400 attendees (200 more than last year) in the Old Billingsgate Market directly on the Thames. This year there was no MinION in the goodie bag (I assumed because everyone already had one, but there were a lot of new users as well); instead, the bag contained a voucher for a flowcell and a 1D^2 sequencing kit*.

I’ll not cover each individual talk, as James Hadfield did a great job of posting a detailed writeup on enseqlopedia (day 1, day 2). Furthermore, David Eccles has a very thorough transcript of Clive Brown’s (CTO of Oxford Nanopore) talk, and I’m expecting a blog from Keith Robison at OmicsOmics soon. Videos of all the talks are supposed to be online later this month.

Technology

  • Read length, or more specifically long reads, was an often-mentioned topic over the past days. Whereas 100 kb reads were previously classified as ‘long’, the record these days is 950 kb. Long reads all hinge on the DNA extraction method. This has been described on Nick Loman’s blog, as well as in the human genome sequencing paper. The latter paper (Fig 5a) also nicely forecasts how long reads can tremendously aid (human) genome assembly, reaching a predicted N50 of >80 Mb (basically a full chromosome); a quick sketch of how N50 is computed follows after this list.
  • Clive announced (although I don’t have the exact wording) that ONT would not discontinue pore chemistries any more, something previously flagged by quite a few attendees as limiting the implementation of nanopore sequencing in ‘production’ environments.
  • Most users get stable results with R9 compared to the more variable R7.x chemistry of last year (but apparently not everyone, so ONT is trying to help individual users and also organizes hands-on workshops etc.).
  • Direct RNA Seq is available. Although the throughput is not as high as the cDNA version (“which is just very great”). However, direct RNA seq does allow users to map base modifications as showcased by this cool direct 16S preprint from Smith et al.
  • The dCas9 enrichment looks really promising, although this is not publicly available yet. Slides presented by Andy Heron from ONT included a few old ones from last year in New York, but spiced up with more recent data. For example work on increasing the local concentration of DNA at the pore using beads. On an E. coli sample this makes a 300x target enrichment possible.
  • Mick Watson showed it is possible to do complete genome assembly from a metagenomic sample.
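For readers unfamiliar with the N50 metric mentioned above, here is a quick sketch of how it is computed (the read lengths are made up): it is the length L such that reads of length L or longer together contain at least half of all sequenced bases.

```python
def n50(lengths):
    """Return the N50: the length L such that reads >= L cover half of the total bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

read_lengths = [100_000, 80_000, 60_000, 40_000, 20_000]  # made-up read lengths in bases
print(n50(read_lengths))  # prints 80000: 100 kb + 80 kb already cover half of the 300 kb total
```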

Devices

ONT now has a whole portfolio of products at different stages of the development process. I’ll segment them by their availability

  • In use
    • MinION, currently R9.4, will switch later this month to the R9.5 pore to support 1D^2. The 1D kits will still run on the R9.5 pore; I assume there are just a few modifications made to the pore protein that attract/guide the tether from the 1D^2 complement strand to the pore. Currently users routinely get between 5-10 Gbase out of a flowcell, and 15-20 Gbase is possible in-house.
    • The first PromethION flowcells are running in the field, but users are asked for their patience as all the hardware (flowcell, chips, box) is new compared to the MinION. (This is not the case for the Flongle, which just ‘reuses’ MinION hardware, see below.) A full running setup with 48 PromethION flowcells is supposed to generate far more data than Illumina’s NovaSeq flagship.
  • First shipment later this month:
    • GridION is marked as a device for users who want to be a service provider. Basically it is 5 MinIONs in one box + basecaller, so no hassle with updating 5 computers. The GridION will in the future be compatible with the high-performance PromethION flowcells.
    • VolTRAX (the automated sample prep) is already deployed in the field, but not yet with the reagents to actually carry out a library prep. However, the release of the reagents is imminent. It will be very exciting to see the first results from this, also as a way for the community to share and standardize DNA extraction protocols. The next stage is lyophilized reagents, which are scheduled for the end of 2017 and will be most welcome for users doing in-field experiments.
  • Somewhere in the pipeline
    • Flongle is an adapter that allows a down-scaled version of the MinION flowcell to be used, thereby lowering the flowcell costs significantly. The device is in the process of regulatory approval and is thus the main entry point for ONT into the healthcare market, which Gordon Sanghera (CEO) described as much harder to get a hold on than the R&D market.
    • SmidgION uses the same lower-pore-density flowcell as the Flongle but allows direct connection to a phone.
    • An unnamed basecall dongle. Basecalling will in the future be done on dedicated hardware, a field-programmable gate array (FPGA), which should be able to basecall 1M bases per second. This will initially make users without access to compute clusters or remote basecalling pretty happy.

What will the coming year bring?

Compared to two years ago I saw a lot of cool applications and trials: Zamin Iqbal’s tuberculosis sequencing, Justin O’Grady’s urinary tract infection sequencing, Nick Loman and Josh Quick’s Zika Brazil project, and Richard Leggett’s pre-term infant microbiome sequencing. It is clear the ONT platform is starting to mature and the initial hiccups are over. From a healthcare perspective these technologies are just waiting to be tried in the clinic, as Nick also mentioned: “Why has nobody sequenced yet in an NHS (National Health Service) lab?” So I expect presentations in this clinical direction at the 2018 conference. I also believe we will see large (nanopore-only) genome assemblies of plants, funky eukaryotes, phased human genomes as well as metagenome assemblies being produced by the platform due to the increased throughput and read length. Eventually I expect the base modifications (both on RNA and DNA) to receive quite some coverage because of the improvements in the basecallers and kit chemistries.

In conclusion, I very much look forward to the coming developments, as it’s clear that ONT is very passionate about R&D and continues to crank out improvements.

Disclaimer: I was an invited speaker at LC17 and received travel and accommodation subsidy.

*Update 05-09: Apparently new users did receive a MinION


The latest hardware for biology

As part of the synbio revolution, lab-as-a-service providers such as Transcriptic and Emerald Cloud Lab are popping up, enabling researchers to perform experiments remotely. On the other hand, locally deployed low-cost setups are also gaining ground. An example is a paper published last year in Nature Biotechnology by the Riedel-Kruse lab. The authors developed a microscope coupled to a small flow chamber to observe Euglena swimming around. Via a web interface, LEDs that surround the flow chamber can be turned on, so you can actually remotely control the movement of the Euglena (as they like to move toward the light). The whole setup only costs $1000 a year, making it a low-cost and accessible option for education. The project seems to be a follow-up to a previous educational device from the same group called the LudusScope, a Game Boy-like smartphone microscope.

In 2015 the TU Delft iGEM team won the grand prize with their biolink 3D printer. Last month a write-up of an improved version was published in ACS Synthetic Biology. Instead of building a 3D printer from K’nex (as the iGEM team did), this version is a modification of the CoLiDo DIY 3D printer. Structures can be built by dissolving bacteria together with alginate and depositing this ‘bioink’ onto a build plate containing calcium. The combination of alginate and calcium triggers a cross-linking process leading to solidification of the extruded mixture. Using this technology, a 14-layer-high structure (of around 2 mm) containing two different bacterial strains was printed in various shapes.

Bacterial 3D printing based on the modified CoLiDo DIY framework, right a close up of the extruder head. (Source: 10.1021/acssynbio.6b00395 CC-BY-NC-ND)

The Maerkl lab published a preprint on bioRxiv last month on a microfluidic biodisplay with 768 programmable biopixels. Each individual compartment (or pixel) of this biodisplay can be inoculated with a different strain. As a proof-of-concept, the pixels were loaded with previously developed arsenic-sensing strains. The WHO states a maximum of 10 μg/L of arsenite in tap water, so water spiked with various amounts of arsenite was flowed over the biodisplay. After 10 hours, a skull-and-crossbones symbol is visible under the microscope when water spiked with as little as 20 μg/L arsenite is flowed over the biodisplay. As there is room for 768 different strains, this setup can actually be used to do some pretty powerful analysis.

Response of the biodisplay to tap water after 24 hours of induction with 100 µg/l of sodium-arsenite. (Source: 10.1101/112110, CC-BY 4.0)

In the Journal of Laboratory Automation, an article describes an open-source (although the article itself is not open access) peptide synthesizer named PepSy. Peptide synthesizers often cost more than $20,000, whereas PepSy can be assembled for less than $4000. The author put the complete Fmoc solid-phase peptide synthesis process under the control of an Arduino (an open-source prototyping platform). As an example, a ten-residue peptide was synthesized that can be used as a contrast agent for nuclear medicine. The source code for PepSy is available here on GitHub.

The fully assembled PepSy system with the reaction syringe in the middle. Courtesy of Dr. Gali

Do you have more exciting examples? Let me know!


Micropia – a microbe museum

This month I had the chance to visit the ‘smallest’ museum in the world: Micropia in Amsterdam, The Netherlands. The goal of Micropia, opened in 2014, is to distribute knowledge about microbes to the general public. The museum is part of the Artis zoo but can be visited independently and has a separate entrance. The museum offers a great introduction into the wonderful world of microorganisms. Below is an impression of the exhibition.

The tree of life at the entrance showing a ‘representative selection of 1500 species, 500 of each domain’; the data comes from NCBI. A neat feature: the species lighting up in UV light are only visible by microscope, whereas the non-illuminated branches (e.g. the mammals in the bottom right corner) are not.

A tardigrade ~6,000x enlarged; living tardigrades are also present and visible under the microscope. I’m wondering whether its genome is also contaminated?

Micropia also features an in-house lab used to maintain the living components of the collection.

In a separate room a stir flask with Photobacterium phosphoreum produced a beautiful glow.

‘Wall of fame’ with more than 100 microorganisms in large petri dishes

Close-up on the wall of fame, Aspergillus oryzae (used to ferment soybeans to produce soy sauce), Aspergillus arachidicola (discovered on peanuts), Klebsiella (this one was only named by genus) and a specimen just named ‘yeast’

Downstairs, several products were featured that could not exist without microorganisms, such as yoghurt, kimchi and ‘delicious’ pickled herring.

Overall the museum does a great job of showing the presence and use of microbes in daily life. For example, the ‘wall of fame’ displays all kinds of household objects together with the microorganisms that are commonly found on them. Furthermore, there is a nice collection of examples of useful microorganisms that break down waste or produce medicines. All this is vividly illustrated with a wealth of interactive installations.

I was a bit time-constrained so I might have missed it, but there was little emphasis on the potential of engineered microbes. With museum sponsors such as BASF, DSM, Galapagos and MSD, I would expect a significant portion of the exhibition to be dedicated to GMOs and the endless possibilities of synthetic biology and metabolic engineering, for example by showcasing the bio-production of insulin, artemisinin or biofuel using microbes. I think the museum would be a great platform to continue the discussion in society on the use of GMOs and highlight the positive aspects.

In conclusion, a great way to spend a few hours and get to know more about the more invisible forms of life.


Background on the poreFUME pre-print

Last week our pre-print on nanopore sequencing came online at bioRxiv. Nanopore sequencing is a relatively new sequencing technology that is starting to come of age. As part of this process, we started playing with the ONT MinION sequencer last year. This post summarizes a bit of the background behind the pre-print.

Previously I covered the London Calling 2015 event where a lot of progress on the development of the MinION was showcased. We were keen to find out how the MinION could contribute to our daily lab work, but also to see what new ground can be covered with this new sequencing technology.

One of the aspects colleagues in the lab are working on is the dissemination of antibiotic resistance genes, as a major healthcare challenge is the emergence of pathogens that are resistant to antibiotics. We therefore thought of combining the MinION with antibiotic resistance gene profiling; more specifically, coupling functional metagenomic selections with nanopore sequencing.

Previous work in this field, for example by Justin O’Grady and colleagues, showed the use of the MinION [$] to identify the structure and chromosomal insertion site of a bacterial antibiotic resistance island in Salmonella Typhi.

Instead of going after single isolates, we set out to map the antibiotic resistance genes that are present in the gut (the resistome) of a hospitalized patient. The resistome can influence the outcome of antibiotic treatment and it is therefore highly interesting to gain insights into this complex network. Through a collaboration under the EvoTAR programme with Willem van Schaik of the University of Utrecht, we had a clinical fecal sample from an ICU patient available, which we used in the experiments.

Typical functional metagenomic workflow where metagenomic DNA is isolated from a (complex) environment, in this case a fecal sample. The DNA is sheared, ligated and transformed into E. coli. When profiling for antibiotic resistance genes, the cells are plated on agar containing various antibiotics. Finally, the metagenomic inserts are sequenced and annotated.

Key in the whole experimental setup to capture the resistome is the use of functional metagenomic selections. In contrast to culturing individual microorganisms directly from a fecal sample, metagenomic DNA is extracted from the sample. This metagenomic DNA is subsequently sheared, ligated and transformed into E. coli, and finally plated out on solid agar containing various antibiotics. Only E. coli cells that harbor a metagenomic DNA fragment that encodes an antibiotic-resistant phenotype can survive. With these functional metagenomic selections in hand, the complexity of the resistome can be rapidly mapped.

And this is where the MinION comes in. Although other sequencing technologies, such as the Illumina and PacBio platforms, are available, they do not provide both long reads and low capital requirements.

After some initial failed attempts to get the MinION sequencer running in our lab, we started to see >100 Mbase runs in October last year. Also PoreCamp last December in Birmingham provided, on top of a great experience and nice people, some useful data (next week a new round of PoreCamp takes place).

In order to analyze the sequencing data that Metrichor generates, we developed the poreFUME pipeline, which automates the process of barcode demultiplexing, error correction (using nanocorrect) and antibiotic resistance gene annotation (using CARD). The poreFUME software is available on GitHub as a Python script, and the subsequent analysis is also available on GitHub as a Jupyter notebook.
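Conceptually, the pipeline chains three stages; the sketch below is my own illustration of that flow (hypothetical function names and toy data, not poreFUME's actual code or command-line interface):

```python
# Conceptual sketch of the demultiplex -> correct -> annotate flow.
def demultiplex(reads, barcodes):
    """Assign each nanopore read to a sample based on its barcode sequence."""
    return {bc: [r for r in reads if r.startswith(bc)] for bc in barcodes}

def error_correct(reads):
    """Placeholder for consensus-based error correction (nanocorrect in the pre-print)."""
    return reads  # real correction would align overlapping reads and polish them

def annotate_resistance_genes(reads, card_db):
    """Report which resistance genes from a CARD-like list are detected in the reads."""
    return [gene for gene in card_db if any(gene in r for r in reads)]

reads = ["ACGTTTGGcfxA2...", "TTAGCCtetQ..."]          # toy reads, not real data
binned = demultiplex(reads, barcodes=["ACGT", "TTAG"])
for barcode, sample_reads in binned.items():
    corrected = error_correct(sample_reads)
    print(barcode, annotate_resistance_genes(corrected, card_db=["cfxA2", "tetQ"]))
```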

The Jupyter notebook with the analysis in the pre-print is available here.

In order to benchmark the nanopore sequencing data, we also Sanger and PacBio sequenced the sample. From these results we could achieve a >97% sequence accuracy, and we were able to identify all 26 antibiotic resistance genes in both the PacBio and the nanopore set.
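Two simple checks capture the spirit of this benchmark, sketched below with invented gene names and sequences (not the pre-print's actual benchmarking code): compare the recovered gene sets, and estimate per-sequence identity.

```python
# Toy benchmark: gene-set agreement plus a crude identity estimate.
import difflib

pacbio_genes   = {"cfxA2", "tetQ", "ermF"}
nanopore_genes = {"cfxA2", "tetQ", "ermF"}
print("All genes recovered:", nanopore_genes == pacbio_genes)

reference = "ATGGCTAAAGGCTTTACCGGT"
corrected = "ATGGCTAAAGGGTTTACCGGT"   # one substitution relative to the reference
identity = difflib.SequenceMatcher(None, reference, corrected).ratio()
print(f"Approximate identity: {identity:.1%}")
```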

Since the whole workflow can be performed relatively quickly, it would be really interesting to move these techniques to the next stage and do in-situ resistome profiling. Especially integrating Matt Loose’s read-until functionality could open up new avenues. Furthermore, these experiments were done with the R7 chemistry; the new R9 chemistry seems to deliver even higher accuracies and faster turn-around.

The fasta files and poreFUME output used in the analysis are already online; the raw PacBio and MinION data are available at ENA.

Update 2016-11-01: Added the ENA link to the raw data


SynBioBeta ’16 packed with innovation

Last Wednesday the SynBioBeta conference kicked off at Imperial College. The central topic was the current state of synthetic biology and how (commercial) value can be gained by supplying tools and platforms. In the keynote by Tom Knight from Ginkgo Bioworks, and the chat afterwards with his old PhD student Ron Weiss (now professor at MIT), a few interesting points came up that illustrate the path synbio has taken over the last two decades.

Ginkgo Bioworks founder Tom Knight (Photo courtesy of Twist Bioscience)

Tom started off with a quote from Douglas Adams, “if you try and take a cat apart to see how it works, the first thing you have on your hands is a non-working cat”, to illustrate the current (or not so far in the past) state of biology in general. He used the old ‘systems engineering’ example of a Boeing 777 to highlight where synbio should be going in his opinion. As in: 1. design using CAD, 2. build, 3. it works. So no more tinkering and endless design-build-test cycles. To get there, he argued for an extra step in the cycle, the simulate component. This would allow the end-user to design and simulate a layout before actually building and testing it. However, he was quick to note that we are currently lacking a lot of insight into the biology of a single simple cell, for example Mycoplasma mycoides, in which 149 of the 473 genes remain of unknown function yet are essential for cell survival.

An improved version of the design cycle proposed by Tom Knight

The VLSI analogy was also brought up, and the panel noted that Voigt’s group last week came a step closer to this paradigm by rationally designing circuits and building them.

On the question of whether synbio is progressing fast enough, Ron Weiss replied that it is not “as fast as we want”; he recalled the last chapter of his thesis describing a synthetic biology programming language, which he laughingly categorized as “completely useless back then”. However, the state of mind back in the 2000s was “that within a year or 5” we would be able to build circuits with at least 30 gates (Voigt’s paper from last week showed a ‘Consensus circuit’ containing 12 regulated promoters). Tom was a bit more optimistic, saying that “You overestimate what is going to happen in 5 and underestimate what happens in 10 years”. The bottom line was the central need to be able to make robust systems that can work in the real world, and in order to do so more information is needed, such as whole-cell models. The session ended with a spot-on question from riboswitch pioneer Justin Gallivan, now at DARPA: “who is going to fund this research to gain basic knowledge?” For example, who is going to elucidate the function of the 149 proteins of unknown function? One suggestion was that Venter should just pull out his checkbook again…

The investors’ perspective

Next on the program was the investors’ round table, geared towards the commercialization aspect of synthetic biology. It was debated whether the use of the term ‘synbio’ would negatively affect your final product or whether it would boost sales; Veronique de Bruijn from Icos Capital argued that the “uneducated audience will definitely judge you”, so she suggested using the term ‘synbio’ cautiously. Business models, an ever-debated topic, struck more consensus among the investors: they all agreed that it is difficult for a platform technology company to go to market, as it can be extremely difficult to apply the technology to the optimal specific product. Karl Handelsman from Codon Capital noted that when you do have a product company it is important to engage with customers early, so you build something they really want. Related to this, he recalled that a product company on the West Coast typically exits for 60-80 million USD, so you should be aware that you can never raise more than ~9 mUSD throughout the lifetime of a company. When it came to engaging with corporate venture capital, the panel unanimously praised them for their expertise, but cautioned that care should be taken that your exit strategies are not getting limited by partnering up with them. The session was rounded off with a yes/no on the positive impact of Trump as president on synbio; only Karl was positive, because this would definitely direct lots and lots of funding towards Life-On-Mars projects.

Applications of synbio by the industry

In the ‘Application Stack’ session, five companies pitched their take on synbio and how it can be used as a value creator, ranging from bacterial vitamin production by Biosyntia to harnessing the power of Deinococcus. Particularly interesting was Darren Platts’ talk showing one of Amyris’ in-house developed tools for language specification in synthetic biology. The actual challenge there was not writing the software (“pretty straightforward”); it was more difficult to get the users engaged in the project and adopting the tool. Their paper was published recently in ACS Synthetic Biology and the code will soon be released on GitHub.

Is there place for synbio in big pharma?

The final session of the first day was titled ‘Synthetic Biology for Biopharmaceuticals’, and here I found the talks of Marcelo Kern from GSK and Mark Wigglesworth from AstraZeneca especially interesting; they gave their ‘big pharma’ view on how to incorporate synthetic biology into established workflows. GSK, for example, focused on reducing the carbon footprint by replacing chemical synthesis with enzyme catalysis. Another great example was the use of CRISPR to generate drug-resistant cell lines for direct use by the in-house screening department.

The first day was rounded off by Emily Leproust from Twist Bioscience, announcing that they would be happy to take new orders from June (!) on.

The future of synbio

The second day started off with a discussion on ‘Futures: New Technologies and Applications’ by Gen9 CEO Kevin Munnelly and Sean Sutcliffe from Green Biologics. Both showed examples of their companies partnering with academic institutions to get freedom to operate (FTO) in place. Sean also made an interesting comment that it took them about 4 years to commercialize “technology from the ’70s”, so he estimated it would take around 12 years before the CRISPR technology, now trickling into the labs, can be used on production scale in the fermenters.

A fun-and-fast-paced ‘Lightning Talks’ session gave industry and non-profit captains a platform of exactly 5 minutes to pitch their vision. Randy Rettberg gave a fabulous speech about the impact of iGEM on the synbio sector and concluded that iGEM helps cultivate the future leaders of the field. Gernot Abel from Novozymes highlighted a ‘citizen science’ project where the ‘corporate’ Novozymes worked together with the biohacker space Biologigaragen in Copenhagen to successfully construct an ethanol assay. Along these lines, Ellen Jorgensen from the non-profit Genspace pitched “why a new generation of bio-entrepreneurs are choosing community labs over incubators/accelerators” at a price point of $100/month versus $1000/month. Dek Woolfson (known for his computationally designed peptides and cages) gave an academically flavoured talk about BrisSynBio but finished his pitch by noting that they are looking for a seasoned business person to help make their tools available to a broader public.

Dek Woolfson was one of the few (still) academics on stage. (Photo by: Edinburgh iGEM)

What happens when synthetic biology and hardware meet?

The hardware and robot session showcased, among others, Biorealize, who are constructing a tabletop device to transform cells and incubate and lyse them; Synthace, who just released the open-source data management platform Antha; and Bento Lab (currently running a very successful Kickstarter campaign), highlighting their mobile PCR workstation. An interesting question was posed at the end as to how much responsibility Bento Lab was putting on the DNA oligo synthesis companies by democratizing PCR and making it available to the general public. Bento Lab responded that they are supplying an extensive ethical guide with their product and that they don’t supply any reagents. Unfortunately, this very interesting discussion was cut short due to a tight conference schedule.

Tabletop transformations, incubations and lysis in one go using Biorealize

A healthy microbiome using GMOs?

In the final session of SynBioBeta, a few examples of synbio applied to the microbiome came up. Boston-based Synlogic is planning to start the IND (Investigational New Drug) process for their E. coli equipped with ammonia-degrading capabilities to combat urea cycle disorders. Xavier Duportet showed an example of Eligo Bioscience using CRISPR systems delivered by phages to selectively kill pathogens such as Staphylococcus aureus; part of this exciting work was published in 2014 in Nature Biotechnology using mouse models.

Eligo Bioscience and their CRISPR-delivered-by-phage technology (Photo by: Edinburgh iGEM)

After all these dazzling applications of synthetic biology, captain John Cumbers wrapped up SynBioBeta by announcing the next events: in San Francisco on 4-6 October, and in London again next year around April.

Personally, I think the conference did a great job of gathering together the industrial synthetic biology community, from early start-ups to big pharma. Although the sentiment is that we are not as far along as we want to be, there have been some considerable advancements over the last 15 years. From an investor’s perspective there is still a lot of uncertainty surrounding the run-time (and the inherently coupled rate of return) of synbio projects; however, the recent numbers on VC funding indicate there is an eagerness to take the leap. Taken together, it was a jam-packed two days of high-end, exciting synthetic biology applications, and it will be very interesting to see if Moore’s law also applies to synbio.

Disclaimer: The above write-up is strongly biased by my own interests, so refer to the Twitter hashtag #SBBUK16 for a more colorful overview of the past two days.


How many new drugs does the FDA approve?

Recently a question was floated on Twitter as to how many drug approvals the FDA has issued. A few answers quickly came in, and it turns out the number heavily depends on what one counts as a ‘new drug’: is a registered generic molecule also a new drug, or is that not innovative enough?

For the sake of doing statistics on these numbers, I’ve extracted the data points from a few resources. First, from the FDA itself: they have a funky table showing the number of new drug applications (NDAs) approved and received, as well as the number of new molecular entities. I’ve extracted the data and plotted it below for 1944-2011; it can be downloaded here.

FDA NDA Approvals & Receipts from 1944-2011 (data)
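For reproducibility, a plot like the one above can be generated in a few lines once the table is saved as a CSV; the file name and column names below are my own assumptions, not the FDA's table headers.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV with one row per year: year, nda_approved, nda_received, nme_approved
df = pd.read_csv("fda_nda_1944_2011.csv")

fig, ax = plt.subplots(figsize=(8, 4))
df.plot(x="year", y=["nda_approved", "nda_received", "nme_approved"], ax=ax)
ax.set_xlabel("Year")
ax.set_ylabel("Count")
ax.set_title("FDA NDA approvals, receipts and new molecular entities, 1944-2011")
plt.tight_layout()
plt.show()
```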

Another interesting categorisation is the source of the molecules. In 2009 John Vederas published a highly cited article on the origin of FDA-approved drugs between 1981 and 2007. Unfortunately, the raw data behind this plot is not available, so I’ve interpolated the numbers from the article’s figure and plotted the data below; again, the data can be downloaded here. It is pretty clear that the number of natural (and natural-derived) molecules is declining.

 Number of drugs approved in the US split up by source from 1981 to 2007 interpolated from Vederas et al. (data)
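The interpolation itself is straightforward: read a handful of points off the published figure and fill in the intermediate years, for example with numpy. The numbers below are placeholders, not values from Vederas et al.

```python
import numpy as np

# Points "digitized" by eye from a published figure (placeholder values)
years_read_off  = np.array([1981, 1990, 2000, 2007])
counts_read_off = np.array([30,   40,   35,   25])

# Linearly interpolate to get an estimate for every year in the range
all_years = np.arange(1981, 2008)
estimated = np.interp(all_years, years_read_off, counts_read_off)
print(dict(zip(all_years[:5], estimated[:5].round(1))))
```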

A feature in Drug Discovery Today by Kinch et al. shows an extensive analysis of new molecular entities, as well as of the organizations that paid for the R&D. As a commenter notes on PubMed Commons, it is too bad the underlying data is not available. Therefore, the graph shown below is an interpolation of their figure.

Number of new molecular entities (NME) approved by the FDA from 1930-2013, interpolated from Kinch et al. (data)

A quick comparison shows that the FDA NME numbers and the numbers by Kinch et al. are in the same ballpark; deviations can be due to my interpolation or to a difference in counting NMEs, for example Kinch et al. are “excluding imaging and diagnostic agents”.

Comparison of new molecular entities as reported by the FDA and by Kinch et al.

If anyone has a more comprehensive article or publicly available numbers, that would be greatly appreciated.

 
