Category Archives: Talk

Biosecurity and synthetic biology: it is time to get serious

This blog post appeared previously on PLoS Synbio 

Last month, the SB7.0 conference attracted around 800 synthetic biology experts from all around the world to Singapore. I was attending as part of the SB7.0 biosecurity fellowship, together with 30 other early-career synthetic biologists and biosecurity researchers. The main goal of the conference was to start a dialogue on biosecurity policies geared specifically towards synthetic biology.

As Matt Watson from the Center for Health Security points out on his blog, the likely earliest account of biological warfare describes the 1346 attack on the Black Sea port of Caffa, recorded in an obscure memoir written in Latin. A lot has changed since then, and biosecurity is now a subject of mainstream media coverage — as exemplified by the recently published Wired article “The Pentagon ponders the threat of synthetic bioweapons.”

Defining biosafety and biosecurity

It is important to first get the scope right; terms like biosecurity and biosafety are sometimes used interchangeably, but there is a meaningful difference. In a nutshell, ‘Biosafety protects people from germs – biosecurity protects germs from people’, as it was put during a UN meeting.

  • Biosafety refers to the protection of humans and the facilities that deal with biological agents and waste: this has also traditionally encompassed GMO regulations.
  • Biosecurity refers to the protection of biological agents that could be intentionally misused.

Although I use the terms biosafety and biosecurity somewhat interchangeably in the remainder of this post, the focus is on biosecurity, as this mainly involves the human component of policy making.

SB7.0 kickoff with Drew Endy and Matthew Chang

During the conference, Gigi Gronvall from the Center for Health Security illustrated a prime example of biosecurity from a 2010 WHO report on the Variola virus, the smallpox pathogen: “nobody anticipated that […] advances in genome sequencing and gene synthesis would render substantial portions of [Variola virus] accessible to anyone with an internet connection and access to a DNA synthesizer. That “anyone” could even be a well-intentioned researcher, unfamiliar with smallpox and lacking an appreciation of the special rules that govern access to [Variola virus] genes.”

The take home lesson? What might not look like a security issue now, may soon become a threat!

Biorisks are most likely terrorism- or nation-state-driven

What are the most likely sources that pose a biorisk? According to Crystal Watson, the following risks demand scrutiny:

  • Naturally occurring strains (e.g., the recent Ebola outbreak)
  • Accidental release (e.g., the 1979 release of anthrax spores from the Sverdlovsk-19a military research facility in the USSR)
  • Terrorism (e.g., the 2001 anthrax-spore contaminated letters in the US)
  • State bioweapons (e.g., the US biological warfare program ultimately renounced by President Nixon)

From a biosecurity perspective, it is interesting to note which of these risks are most imminent. Watson and colleagues recently published a perspective in Science that describes the actors and organizations that pose a bioweapons threat. It reports the results of a Delphi study of 59 experts with backgrounds broadly ranging from biological and non-biological sciences, medicine, public health, and national security to political science, foreign policy and international affairs, economics, history, and law.

Although the results varied considerably, terrorism was rated as the most likely source of biothreats because of the “rapid technological advances in the biosciences, ease of acquiring pathogens, democratization of bioscience knowledge, information about a nonstate actors’ intent, and the demonstration of the chaos surrounding the Ebola epidemic in West Africa in 2014”. Another likely biorisk source would be a nation-state actor because of the “technological complexities of developing a bioweapon, the difficulty in obtaining pathogens, and ethical and/or cultural barriers to using biological weapons.”

According to the expert panel, some threats are particularly likely to impact society:

  • biological toxins (e.g., ricin, botulinum toxin)
  • spore-forming bacteria (e.g., Bacillus anthracis, which causes anthrax)
  • non–spore-forming bacteria (e.g., Yersinia pestis, which causes plague)
  • viruses (e.g., Variola virus, which causes smallpox)

This list essentially covers everything that has been weaponized — only fungi, prions, and synthetic pathogens were not predicted to become weaponized in the next decade.

Now that the threats are defined: how to counteract them? One of the safeguards that has been put in place is the Australia Group, “an informal forum of countries which, through the harmonisation of export controls, seeks to ensure that exports do not contribute to the development of chemical or biological weapons.” This organization seeks to develop international norms and procedures to strengthen export controls in service of chemical and biological nonproliferation aims. However, as Piers Millett from biosecu.re pointed out, these tools do not on their own adequately address our current needs for properly assessing and managing risks. For example, under the Australia Group arrangements you need an export license to export the Ebola virus itself or a sample of prepped Ebola RNA. But you do not need one if you just want to download the sequence of the genome. In other words, access restriction is an inadequate biosecurity failsafe.

Transmission electron micrograph of a smallpox virus particle (CC BY-SA 4.0 by Dr. Beards)

Why resurrect the extinct horsepox virus?

Biosecurity is directly related to the challenge posed by the dual use of research: it both creates a risk and provides insights to mitigate that risk. A particularly illustrative example is the recent synthesis of the horsepox virus, which belongs to the same viral genus as smallpox but is apparently extinct in nature. Last year, the lab of virologist David Evans at the University of Alberta in Canada reconstituted the virus. Synthesizing and cloning together almost 200 kb of DNA is not exceptionally challenging today, but it just hadn’t been attempted before for this family of viruses.

But why did Evans and his team set out to synthesize the horsepox virus in the first place? There were several motivating objectives:

  1. the development of a new smallpox vaccine
  2. the potential use of the horsepox virus as a carrier to target tumors
  3. a proof-of-concept for synthesizing extinct viruses using ‘mail-order DNA.’

Evans broadly defended his actions in a recent Science article: “Have I increased the risk by showing how to do this? I don’t know. Maybe yes. But the reality is that the risk was always there. The world just needs to accept the fact that you can do this and now we have to figure out what is the best strategy for dealing with that.” Tom Inglesby from the Center for Health Security reasoned that the proof-of-concept argument does not justify the research as “creating new risks to show that these risks are real is the wrong path.”

How easily could the horsepox synthesis study be misused? Evans notes that his group did “provide sufficient details so that someone knowledgeable could follow what we did, but not a detailed recipe.” Unfortunately, there are no international regulations that control this kind of research, and many scholars argue it is now time to start discussing this at a global level.

Paul Keim from Northern Arizona University has proposed a permit system for researchers who want to recreate an extinct virus. And Nicholas Evans from the University of Massachusetts suggests that the WHO create a sharing mechanism that obliges any member state to inform the organization when a researcher plans to synthesize viruses related to smallpox. Both options are well-intentioned. However, anyone can already order a second-hand DNA synthesizer on eBay and countless pathogenic DNA sequences are readily available, so these proposals would do little on their own to prevent misuse. But while these rules would increase the amount of red tape for researchers, they would also contribute to the development of norms and cultural expectations around acceptable practice in the life sciences. The bottom line, which is not novel but very much worth restating, is that scientists should constantly be aware of what they create as well as any associated risks.

The future of synthetic biology and biosecurity

Synthetic biology has only recently been recognized as a mature subject in the context of biological risk assessment — and the core focus has been infectious diseases. The main idea, to build resilience and a readiness to respond, was reiterated by several speakers at the SB7.0 conference. For example, Reshma Shetty, co-founder of Ginkgo Bioworks, explained that in computing, security was not seriously considered until computers were already ubiquitous. In the case of biosecurity, we are already dependent on biology [with respect to food, health etc.], but we still have an opportunity to develop biosecurity strategies before synthetic biology is ubiquitous. There is still an opportunity to act now and put norms and practices in place because the community is relatively small.

Another remark from Shetty was also on point: “We are getting better at engineering biology, so that also means that we can use this technology to engineer preventative or response mechanisms.” For example, we used to stockpile countermeasures such as vaccines. With biotechnological advances, it is now possible to move to a rapid-response model, in which the detection of emerging threats via public health initiatives is coupled to the development of custom countermeasures, in part using synthetic biology approaches. Shetty envisioned that foundries — with next-generation sequencing and synthesis capabilities — are going to play a key role in such rapid responses. Governments should be prepared to support and enable such foundries to rapidly manufacture vaccines for smallpox or any other communicable disease, on demand. While it is not clear that the details of these processes and the countermeasures themselves can be made public and still maintain their effectiveness, the communication and decision-making processes should be transparent.

Elizabeth Cameron, Senior Director for Global Biological Policy and Programs at the Nuclear Threat Initiative, similarly warned that “if scientists are not taking care of biosecurity now, other people will start taking care of it, and they most likely will start preventing researchers from doing good science.” A shrewd starting point for this development was noted by Matt Watson: “one reason we as a species survived the Cold War was that nuclear scientists—on both sides of the Iron Curtain—went into government and advised policymakers about the nature of the threat they faced. It’s imperative for our collective security that biologists do the same.”

In other words, it is time to start having these serious discussions about imminently needed biosecurity measures during events or conferences such as SB7.0.



Highlights of a two-day nanopore conference

This week Oxford Nanopore Technologies organized the third London Calling conference, gathering around 400 attendees (200 more than last year) in the Old Billingsgate Market directly on the Thames. This year there was no MinION in the goodie bag (I assumed because everyone already had one, but there were a lot of new users as well); instead, the bag contained a voucher for a flowcell and a 1D^2 sequencing kit*.

I’ll not cover each individual talk, as James Hadfield did a great job of posting a detailed writeup on enseqlopedia (day 1, day 2). Furthermore, David Eccles has a very thorough transcript of Clive Brown’s (CTO of Oxford Nanopore) talk, and I’m expecting a blog post from Keith Robison at OmicsOmics soon. Videos of all the talks are supposed to be online later this month.

Technology

  • Read length, or more specifically long reads, was an often-mentioned topic over the past days. Whereas 100 kb reads were previously classified as ‘long’, these days the record is 950 kb. Long reads hinge entirely on the DNA extraction method, which has been described on Nick Loman’s blog as well as in the human genome sequencing paper. The latter paper (Fig 5a) also nicely forecasts how long reads can tremendously aid (human) genome assembly, reaching a predicted N50 of >80 Mb (basically a full chromosome); see the short N50 sketch after this list.
  • Clive announced (although I don’t have the exact wording) that ONT would not discontinue pore chemistries any more, something quite a few attendees had previously flagged as limiting the implementation of nanopore sequencing in ‘production’ environments.
  • Most users get stable results with R9 compared to the more variable R7.x chemistry of last year (but apparently not everyone, so ONT is trying to help individual users and also organizes hands-on workshops etc.).
  • Direct RNA sequencing is available, although the throughput is not as high as the cDNA version (“which is just very great”). However, direct RNA seq does allow users to map base modifications, as showcased by this cool direct 16S preprint from Smith et al.
  • The dCas9 enrichment looks really promising, although it is not publicly available yet. Slides presented by Andy Heron from ONT included a few old ones from last year in New York, but spiced up with more recent data, for example work on increasing the local concentration of DNA at the pore using beads; on an E. coli sample this makes a 300x target enrichment possible.
  • Mick Watson showed it is possible to do complete genome assembly from a metagenomic sample.
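A quick aside on the N50 metric mentioned in the read-length bullet above: it is the contig (or read) length at which, walking from the longest sequence downwards, the cumulative length first reaches half of the total. A minimal Python sketch, using made-up contig lengths purely for illustration:

    def n50(lengths):
        # Contig/read length at which the cumulative sum (longest first)
        # reaches at least half of the total number of bases
        lengths = sorted(lengths, reverse=True)
        half_total = sum(lengths) / 2
        running = 0
        for length in lengths:
            running += length
            if running >= half_total:
                return length

    contigs = [500_000, 400_000, 300_000, 200_000, 100_000]  # hypothetical lengths in bases
    print(n50(contigs))  # 400000: the two longest contigs already cover half of the 1.5 Mb total

So a predicted N50 of >80 Mb means that half of the assembled bases would sit in chromosome-scale contigs.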

Devices

ONT now has a whole portfolio of products at different stages of the development process. I’ll segment them by availability:

  • In use
    • MinION, currently on the R9.4 pore, will switch later this month to the R9.5 pore to support 1D^2; the 1D kits will still run on the R9.5 pore. I assume just a few modifications were made to the pore protein to attract/guide the tether from the 1D^2 complement strand to the pore. Users currently get between 5 and 10 Gbases out of a flowcell routinely; 15-20 Gbases is possible in-house.
    • The first PromethION flowcells are running in the field, but users are asked for patience as all the hardware (flowcell, chips, box) is new compared to the MinION. (This is not the case for the Flongle, which simply ‘reuses’ MinION hardware; see below.) A fully running setup with 48 PromethION flowcells is supposed to generate far more data than Illumina’s NovaSeq flagship.
  • First shipment later this month:
    • GridION is marketed as a device for users who want to be a service provider. Basically it is 5 MinIONs in one box plus a basecaller, so no hassle with updating 5 computers. The GridION will in the future be compatible with the high-performance PromethION flowcells.
    • VolTRAX (the automated sample prep unit) is already deployed in the field, but not yet with the reagents to actually carry out a library prep. However, the release of the reagents is imminent. It will be very exciting to see the first results from this, also as a way for the community to share and standardize DNA extraction protocols. The next stage is lyophilized reagents, scheduled for the end of 2017, which will be most welcome for users doing in-field experiments.
  • Somewhere in the pipeline
    • Flongle is an adapter that allows a down-scaled version of the MinION flowcell to be used, thereby lowering the flowcell costs significantly. The device is in the process of regulatory approval and is thus ONT’s main entrance into the healthcare market, which Gordon Sanghera (CEO) described as much harder to get a hold on than the R&D market.
    • SmidgION uses the same lower-pore-density flowcell as the Flongle but allows direct connection to a phone.
    • An unnamed basecalling dongle. Basecalling will in the future be done on dedicated hardware, a field-programmable gate array (FPGA), which should be able to basecall 1M bases per second. This will especially please users without access to compute clusters or remote basecalling.

What will the coming year bring?

Compared to two years ago, I saw a lot of cool applications and trials: Zamin Iqbal’s tuberculosis sequencing, Justin O’Grady’s urinary tract infection sequencing, Nick Loman and Josh Quick’s Zika project in Brazil, and Richard Leggett’s pre-term infant microbiome sequencing. It is clear the ONT platform is starting to mature and the initial hiccups are over. From a healthcare perspective these technologies are just waiting to be tried in the clinic; as Nick mentioned, “Why has nobody sequenced yet in an NHS (National Health Service) lab?” So I expect presentations at the 2018 conference to move in this clinical direction. I also believe we will see large (nanopore-only) genome assemblies of plants, funky eukaryotes, phased human genomes, as well as metagenome assemblies produced by the platform thanks to the increased throughput and read length. Eventually I expect base modifications (on both RNA and DNA) to receive quite some coverage because of the improvements in the basecallers and kit chemistries.

In conclusion, I very much look forward to the coming developments, as it’s clear that ONT is very passionate about R&D and continues to crank out improvements.

Disclaimer: I was an invited speaker at LC17 and received travel and accommodation subsidy.

*Update 05-09: Apparently new users did receive a MinION



SynBioBeta ’16 packed with innovation

Last Wednesday the SynBioBeta conference kicked off at Imperial College. The central topic was the current state of synthetic biology and how (commercial) value can be gained by supplying tools and platforms. In the keynote by Tom Knight from Ginkgo Bioworks, and the subsequent chat with his former PhD student Ron Weiss (now a professor at MIT), a few interesting points came up that illustrate the path synbio has taken over the last two decades.


Ginkgo Bioworks founder Tom Knight (Photo courtesy of Twist Bioscience)

Tom started off with a quote from Douglas Adams, “If you try and take a cat apart to see how it works, the first thing you have on your hands is a non-working cat”, to illustrate the current (or not-so-distant past) state of biology in general. He used the classic ‘systems engineering’ example of the Boeing 777 to highlight where synbio should be going in his opinion: 1. design using CAD, 2. build, 3. it works. So no more tinkering and endless design-build-test cycles. To get there he argued for an extra step in the cycle, the simulate component, which would allow the end-user to design and simulate a layout before actually building and testing it. However, he was quick to note that we currently lack a lot of insight into the biology of even a single simple cell, for example Mycoplasma mycoides, in which 149 of the 473 genes remain of unknown function yet are essential for cell survival.


An improved version of the design cycle proposed by Tom Knight

The VLSI analogy was also brought up, and the panel noted that Voigt’s group came a step closer to this paradigm last week by rationally designing circuits and building them.

On the question of whether synbio is progressing fast enough, Ron Weiss replied that it is not “as fast as we want”. He recalled the last chapter of his thesis, describing a synthetic biology programming language, which he laughingly categorized as “completely useless back then”. However, the state of mind back in the 2000s was that “within a year or 5” we would be able to build circuits with at least 30 gates (Voigt’s paper from last week showed a ‘consensus circuit’ containing 12 regulated promoters). Tom was a bit more optimistic, saying that “You overestimate what is going to happen in 5 and underestimate what happens in 10 years”. The bottom line was the central need to be able to make robust systems that work in the real world, and to do so more information is needed, such as whole-cell models. The session ended with a spot-on question from riboswitch pioneer Justin Gallivan, now at DARPA: “Who is going to fund this research to gain basic knowledge?” For example, who is going to elucidate the function of the 149 proteins of unknown function? One suggestion was that Venter should just pull out his checkbook again…

The investors’ perspective

Next on the program was the investors’ round table, geared towards the commercialization aspect of synthetic biology. It was debated whether using the term ‘synbio’ would negatively affect your final product or boost sales; Veronique de Bruijn from IcosCapital argued that the “uneducated audience will definitely judge you”, so she suggested using the term cautiously. Business models, an ever-debated topic, struck more consensus among the investors: they all agreed that it is difficult for a platform technology to go out on its own, as it can be extremely difficult to apply the technology to the optimal specific product. Karl Handelsman from Codon Capital noted that when you do have a product company, it is important to engage with customers early, so you build something they really want. Related to this, he recalled that a product company on the West Coast typically exits for 60-80 million USD, so you should be aware that you can never raise more than ~9 million USD throughout the lifetime of the company. When it came to engaging with corporate venture capital, the panel unanimously praised their expertise, but noted that care should be taken that your exit strategies are not limited by partnering up with them. The session was rounded off with a yes/no poll on whether Trump as president would have a positive impact on synbio; only Karl was positive, because this would definitely direct lots and lots of funding towards life-on-Mars projects.

Applications of synbio by the industry

In the ‘Application Stack’ session, five companies pitched their take on synbio and how it can be used as a value creator, ranging from bacterial vitamin production by Biosyntia to harnessing the power of Deinococcus. Particularly interesting was Darren Platt’s talk, showing one of Amyris’ in-house developed tools for language specification in synthetic biology. The actual challenge was not writing the software (“pretty straightforward”); it was more difficult to get users engaged in the project and adopting the tool. Their paper was recently published in ACS Synthetic Biology and the code will soon be released on GitHub.

Is there a place for synbio in big pharma?

The final session of the first day was titled ‘Synthetic Biology for Biopharmaceuticals’. Here I found the talks of Marcelo Kern from GSK and Mark Wigglesworth from AstraZeneca especially interesting; they gave their ‘big pharma’ view on how to incorporate synthetic biology into established workflows. GSK, for example, focused on reducing the carbon footprint by replacing chemical synthesis with enzyme catalysis. Another great example was the use of CRISPR to generate drug-resistant cell lines for direct use by the in-house screening department.

The first day was rounded off by Emily Leproust from Twist Bioscience, announcing that they would be happy to take new orders from June (!) onwards.

The future of synbio

The second day started off with a discussion on ‘Futures: New Technologies and Applications’ by Gen9 CEO Kevin Munnelly and Sean Sutcliffe from Green Biologics. Both showed examples of their companies partnering with academic institutions to get freedom to operate (FTO) into place. Sean also made an interesting comment that it took them about 4 years to commercialize “technology from the ’70s”, so he estimated it would take around 12 years before the CRISPR technology now trickling into the labs can be used at production scale in the fermenters.

A fun and fast-paced ‘Lightning Talks’ session gave industry and non-profit captains a platform of exactly 5 minutes to pitch their vision. Randy Rettberg gave a fabulous speech about the impact of iGEM on the synbio sector and concluded that iGEM helps cultivate the future leaders of the field. Gernot Abel from Novozymes highlighted a ‘citizen science’ project where the ‘corporate’ Novozymes worked together with the biohacker space Biologigaragen in Copenhagen to successfully construct an ethanol assay. Along these lines, Ellen Jorgensen from the non-profit Genspace pitched “why a new generation of bio-entrepreneurs are choosing community labs over incubators/accelerators” at a price point of $100/month versus $1000/month. Dek Woolfson (known for his computationally designed peptides and cages) gave an academically flavored talk about BrisSynBio but finished his pitch by noting that they are looking for a seasoned business person to help make their tools available to a broader public.


Dek Woolfson was one of the few (still) academics on stage. (Photo by: Edinburgh iGEM)

What happens when synthetic biology and hardware meet?

The hardware and robot session showcased, among others, Biorealize, who are constructing a tabletop device to transform, incubate and lyse cells; Synthace, who just released the open-source data management platform Antha; and Bento Lab (currently running a very successful Kickstarter campaign), highlighting their mobile PCR workstation. An interesting question was posed at the end as to how much responsibility Bento Lab is putting on the DNA oligo synthesis companies by democratizing PCR and making it available to the general public. Bento Lab responded that they supply an extensive ethical guide with their product and that they don’t supply any reagents. Unfortunately this very interesting discussion was cut short due to a tight conference schedule.


Tabletop transformations, incubations and lysis in one go using Biorealize

A healthy microbiome using GMOs?

In the final session of SynBioBeta, a few examples of synbio applied to the microbiome came up. Boston-based Synlogic is planning to start the IND (Investigational New Drug) process for their E. coli equipped with ammonia-degrading capabilities to combat urea cycle disorders. Xavier Duportet showed an example from Eligo Bioscience of CRISPR systems delivered by phages that selectively kill pathogens such as Staphylococcus aureus; part of this exciting work was published in 2014 in Nature Biotechnology using mouse models.


Eligo Bioscience and their CRISPR-delivered-by-phage technology (Photo by: Edinburgh iGEM)

After all these dazzling applications of synthetic biology, captain John Cumbers wrapped up SynBioBeta by announcing the next events: San Francisco on 4-6 October and London again next year around April.

Personally I think the conference did a great job of gathering together the industrial synthetic biology community, from early start-ups to big pharma. Although the sentiment is that we are not as far along as we would like to be, there have been considerable advancements over the last 15 years. From an investor’s perspective there is still a lot of uncertainty surrounding the run-time (and the inherently coupled rate of return) of synbio projects; however, the recent numbers on VC funding indicate there is an eagerness to take the leap. Taken together, it was a jam-packed two days of exciting, high-end synthetic biology applications, and it will be very interesting to see if Moore’s law also applies to synbio.

Disclaimer: The above write-up is strongly biased by my own interests, so refer to the Twitter hashtag #SBBUK16 for a more colorful overview of the past two days.



Wrap-up of Visualizing Biological Data ’15

From the 24th until the 27th of March I visited the Broad Institute of Harvard and MIT in Boston to attend the VizBi 2015 conference. The scope of this conference is to advance knowledge in the visualization of biological data; the 2015 iteration was the 6th international meeting. Below is a long-overdue recap of two talks that I found particularly interesting.

On Wednesday John Stasko kicked off as a keynote speaker with some very interesting notions about the different applications of visualization: it should be used either for presentation (explanatory) or for analysis (exploratory). This difference is important since each has its own goals. When presenting results the goals are to clarify, focus, highlight, simplify and persuade; when analyzing data the goals are to explore, make decisions and derive statistical descriptors.

A good quote also passed by here: “If you know what you are looking for, you probably don’t need visualizations”.

So when you do decide you need a visualization, it is most useful for analysis (exploratory); in this case it can help when you:

  • Don’t know what you are looking for
  • Don’t have a priori questions
  • Want to know what questions to ask

Typically these kinds of visualizations show all variables, provide both overview and detail, and facilitate comparison. A consequence of this setup is that “analysis visualizations” are often difficult to understand: the underlying data is complex, so the visualization probably is too. This is not a bad thing, but the user needs to invest time to decode the visualization.

A perfect example of an exploratory visualization is the Attribute Explorer from 1998 [1]. Here the authors used the notion of compromise to analyze a dataset. For example, when searching for a new house you might look at the price, the commuting time and the number of bedrooms. However, when setting a hard limit on each of these attributes you might miss the house that has a perfect price and number of bedrooms but is just a 5-minute longer commute. The paper shows that by implementing coupled histograms the user is still able to see these “compromise solutions” (a small illustrative sketch follows below the figure). The PDF of the article is available here, showing some old-school histograms.


The concepts of the Attribute Explorer from 1998 are nowadays still relevant
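To make the coupled-histogram idea concrete, here is a minimal sketch of my own (not code from the Attribute Explorer paper), using made-up house data in Python. Items that satisfy all limits and items that miss exactly one limit are stacked in every histogram, so a house with a slightly too long commute remains visible in the price and bedroom views:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n = 200
    houses = {
        "price": rng.normal(300_000, 60_000, n),   # hypothetical attributes
        "commute": rng.normal(35, 10, n),          # minutes
        "bedrooms": rng.integers(1, 6, n),
    }
    # price and commute have upper limits, bedrooms a lower limit;
    # count how many limits each house fails
    fails = (
        (houses["price"] > 320_000).astype(int)
        + (houses["commute"] > 30).astype(int)
        + (houses["bedrooms"] < 3).astype(int)
    )

    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    for ax, (name, values) in zip(axes, houses.items()):
        # full matches plus one-limit 'compromise' items, stacked;
        # houses failing two or more limits are omitted in this simplified view
        ax.hist([values[fails == 0], values[fails == 1]], bins=15, stacked=True,
                label=["all limits met", "misses one limit"])
        ax.set_title(name)
    axes[0].legend()
    plt.tight_layout()
    plt.show()

The real Attribute Explorer adds interactive brushing, so tightening a limit in one histogram immediately recolours the items in all the others; the sketch only captures the static encoding.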

The takeaway: a visualization is radically different depending on whether one is presenting the data or analyzing it.

An often-encountered problem with visualization is data complexity that is too high to visualize in one go. There are a few options to tackle this:

  • pack all the data in one complex representation
  • spread the data into multiple coordinated views (pixels are John’s friend)
  • use interaction to reveal different subsets of the data

When interacting with data, users have different intents; a 2007 InfoVis paper by Stasko and colleagues [2] describes 7 of them:

  1. Select
  2. Explore
  3. Reconfigure
  4. Encode
  5. Abstract/Elaborate
  6. Filter
  7. Connect

However, 95% of the interactions are made up of tooltip & selection (to get details), navigation, and brushing & linking. This gives rise to a chicken-and-egg problem: why are only those 4 intents used so extensively, and how can one make a visualization more effective?

An example Stasko showed was the use of a tablet [3], where a whole wealth of new gestures is available, as best illustrated in this video:

In conclusion, Stasko gave his own formula that captures the value of visualization.

Value of Visualization = Time + Insight + Essence + Confidence:

  • T: Ability to minimize the total time needed to answer a wide variety of questions about the data
  • I: Ability to spur and discover insights or insightful questions about the data
  • E: Ability to convey an overall essence or take-away sense of the data
  • C: Ability to generate confidence and trust about the data, its domain and context


On Friday Daniel Evanko (@devanko) from Nature Publishing Group spoke about the future of visualizations in publications. There is currently a big gap between all the rich data sets that people publish and the way these are incorporated into scientific articles. Evanko made some interesting points from a publisher’s perspective.

The current “rich” standards such as PDF are probably good for a dozen years to come; however, newer formats such as D3, Java and R can break or become unsupported at any time in the future. On the other hand, basic print formats such as paper or microfilm can be kept for 100 years. Although this is a conservative standpoint, in my opinion it indeed makes sense to keep the long-term perspective in mind when releasing new publication formats, because who says Java will still be supported in 20 years? However, I think that with thorough design the community should be able to come up with some defined standards that have the lifetime of microfilm.

Another argument Evanko used was that the few papers published with interactive visualizations do not generate a lot of traffic, from which the conclusion was drawn that the audience doesn’t want these kinds of visualizations, so publishers will not offer them. Again, I feel we may be dealing with a chicken-and-egg problem here.

I’m grateful to the Otto Mønsteds Fond for providing support to attend VizBi ’15.

 

References

  1. Spence R, Tweedie L: The Attribute Explorer: information synthesis via exploration. Interact Comput 1998, 11:137–146.
  2. Yi JS, Kang YA, Stasko JT, Jacko JA: Toward a Deeper Understanding of the Role of Interaction in Information Visualization. IEEE Trans Vis Comput Graph 2007, 13:1224–1231.
  3. Sadana R, Stasko J: Designing and implementing an interactive scatterplot visualization for a tablet computer. Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI ’14) 2014:265–272.

 



Recap of the Nanopore sequencing conference ‘London Calling’ by ONT

Last Thursday and Friday Oxford Nanopore Technologies (ONT) hosted its first conference, ‘London Calling’, where participants of the MinION Access Program (MAP) presented their results and experiences after 11 months of the program. The CTO of ONT also delivered a session in which future directions were outlined. Below is a quick recap of the two days of London Calling.

There were about 20 talks (agenda) by a broad range of scientists, from microbiologists to bioinformaticians. A few observations I found interesting to share:

  • John Tyson (University of British Columbia) wrote a script that slightly alters the voltage along the run to keep the yield curve linear; he now uses this method as standard for each of his runs
  • The majority of the presenters only use the 2D reads
  • A nice month-by-month overview of the MAP program can be found in Nick Loman’s talk here
  • Miles Carroll (Public Health England), Josh Quick (University of Birmingham) and Thomas Hoenen (NIH/NIAID) went to Africa last year to sequence the Ebola virus outbreak and were able to map the outbreak on a phylogenetic timescale; they used RT-PCR to generate the input material. The main conclusions were that field sequencing with the MinION works, the Ebola mutation rate is not higher than that of other viruses, and key drug targets are not mutating.
  • People are exploring a lot of options to use it in a clinical setting, for example for rapid identification of bacterial infections (Justin O’Grady, University of East Anglia) or for pharmacogenomics (Ron Ammar, University of Toronto); in short, which drugs not to prescribe to patients because their liver cannot metabolise them due to a genetic variant; read the paper here.
  • A detailed account of how to assemble a bacterial genome with only nanopore data by Jared Simpson can be found on SlideShare; it’s an interactive version of this pre-print
  • Currently MinION + MiSeq data is the way to go for genome assembly in the short term (according to Mick Watson). Alistair Darby (University of Liverpool) argued for just using one sequencing technology to perform the whole genome assembly, because too much time can be (and is) wasted integrating all the different sequencing methods with different algorithms.

DNA sequencing becomes really personal now

During the talks some requests were put forward:

  • More automation for library prep / a faster library prep protocol (this will be tackled with VolTRAX and/or a bead protocol for low-input material, and a 10-minute protocol for 1D reads announced by CTO Clive Brown)
  • More stable performance between individual flow cells
  • Offline basecalling, so no need to connect to the cloud
  • Tweaking the basecaller for base modifications (for example methylation)

On Thursday afternoon Clive Brown, the CTO of ONT, gave his talk. On Twitter it was compared to a “Steve Jobs style” reveal of new products.

A few points he presented:

  • At the end of this year / early next year there will be a new MinION release that has the ASIC electronics in the MinION itself rather than in the flow cell; this would drastically cut the price of the flow cells (from $1000 to $25). Another big change is that the chip will contain 3000 channels instead of 512. Furthermore, the runtime of these devices will be around 2 weeks.
  • All shipments should be at room temperature soon
  • A “fast mode” will be available within the next 3 months, in which a typical run will generate not 2 Gbases of data but 40 Gbases.
  • VolTRAX is being developed, a unit that clicks onto a flow cell and automates the full library prep process; they imagine users loading a mL of blood sample onto the VolTRAX and having it prepped automatically.
  • At the same time ONT will implement a different price structure where you pay per hour of sequencing instead of per flow cell, so you can just run a MinION for 3 hours, pay, say, $270, and not pay anything else.
  • The PromethION (roughly 48 MinIONs in one machine, with more channels per chip) will be launched with sequencing core facilities as the main customer in mind; however, they will create an access program for it (PEAP) as well. The PromethION will include the above improvements too, making it potentially more productive than a HiSeq.

Oxford Nanopore Technologies CTO Clive Brown showcasing the automatic sample preparation device VolTRAX.

In conclusion, the conference atmosphere was very upbeat, with a lot of enthusiasm for the future of nanopore sequencing. I can’t wait to get this MinION started.

 

 



Highlights of the International Synthetic and Systems Biology Summer School 2014

Last week I joined the International Synthetic and Systems Biology Summer School in Taormina, Italy, and as the title suggests it was all about synthetic and systems biology, with some pretty cool speakers. Weiss talked about the general principles of genetic circuits and their current limitations (the record is currently 12 different synthetic promoters in one designed network). Sarpeshkar focused on the stochastic nature of cells and the associated noise, showing how they can be simulated or mirrored using analog circuits. Paul Freemont took Ron Weiss’ design principles and showed how to apply them in different examples; he also elaborated on an efficient way of characterizing new circuits and parts. Tanja Kortemme, a former postdoc from the Baker lab, gave an introduction to the capabilities of computational protein design and, using some neat examples, showed the power (and limitations) of computational design. Below are some highlights and relevant links to the literature that was discussed.

  Continue reading



Papers of VizBi ’14 keynote by Jeff Heer

Last week I attended the Visualizing Biological Data (VizBi) conference in Heidelberg. According to the website, the mission of the conference is to “bring together scientists, illustrators, and designers actively using or developing computational visualization to study a diverse range of biological data.” I can only say the organisers more than succeeded in this mission; it was indeed a very interdisciplinary, creative and interactive crowd. The first keynote of VizBi ’14 was presented by Jeffrey Heer from the University of Washington; since the papers he referred to are mostly published in non-PubMed venues, I have tried to collect links to the PDFs here. Update: added two more references supplied by Heer.

Continue reading



Untangling the Hairball by O’Donoghue (ICSB13)

Today Sean O’Donoghue talked at the 14th International Conference on Systems Biology (ICSB) in Copenhagen. O’Donoghue is affiliated with Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the Garvan Institute of Medical Research, both located in Sydney. His talk was officially titled “Visual Analytics: A New Approach for Systems Biology”, but right after the start he admitted it could better be named “Untangling the Hairball”. Using 6 guidelines, he quickly showed the basic principles of data visualization for scientists. Since his talk contained quite a few references to journal articles, web servers and online tools, I thought it would be useful to put everything together in a post.

Continue reading

