Tag Archives: conference

Highlights of a two-day nanopore conference

This week Oxford Nanopore Technologies organized the third London Calling conference, gathering around 400 attendees (200 more than last year) in the Old Billingsgate Market directly on the Thames. This year there was no MinION in the goodie bag (I thought because everyone already had one, but there were a lot of new users as well); instead the bag contained a voucher for a flowcell and a 1D^2 sequencing kit*.

I’ll not cover each individual talk, as James Hadfield did a great job of posting a detailed writeup on enseqlopedia (day 1, day 2). Furthermore, David Eccles has a very thorough transcript of Clive Brown’s (CTO of Oxford Nanopore) talk, and I’m expecting a blog from Keith Robison at OmicsOmics soon. Videos of all the talks are supposed to be online later this month.

Technology

  • Read length, or more specifically long reads, was an often-mentioned topic the past days. Whereas 100 kb reads were previously classified as ‘long’, these days the record is 950 kb. Long reads all hinge on the DNA extraction method, which has been described on Nick Loman’s blog as well as in the human genome sequencing paper. The latter paper (Fig 5a) also nicely forecasts how long reads can tremendously aid (human) genome assembly, reaching a predicted N50 of >80 Mb (basically a full chromosome).
  • Clive announced (although I don’t have the exact wording) that ONT would not discontinue pore chemistries any more, something previously flagged by quite a few attendees as limiting the implementation of nanopore sequencing in the ‘production’ environment.
  • Most users get stable results with R9 compared to the more variable R7.x chemistry of last year (but apparently not everyone, so ONT is trying to help individual users and also organizes hands-on workshops etc.).
  • Direct RNA-seq is available, although the throughput is not as high as that of the cDNA version (“which is just very great”). However, direct RNA-seq does allow users to map base modifications, as showcased by this cool direct 16S preprint from Smith et al.
  • The dCas9 enrichment looks really promising, although it is not publicly available yet. Slides presented by Andy Heron from ONT included a few old ones from last year in New York, but spiced up with more recent data: for example, work on increasing the local concentration of DNA at the pore using beads, which on an E. coli sample makes a 300x target enrichment possible.
  • Mick Watson showed it is possible to do complete genome assembly from a metagenomic sample.
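As an aside for readers new to the metric: the N50 quoted above can be computed from a list of read (or contig) lengths in a few lines. A minimal sketch in Python (mine, not ONT's; the read lengths are made up for illustration):

```python
def n50(read_lengths):
    """Return the N50: the length L such that reads of length >= L
    together contain at least half of all sequenced bases."""
    total = sum(read_lengths)
    running = 0
    for length in sorted(read_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0  # empty input

# Hypothetical read lengths (bases), including one ultra-long 950 kb read
reads = [950_000, 100_000, 80_000, 50_000, 20_000]
print(n50(reads))  # 950000: the longest read alone holds over half the bases
```

The same calculation underlies the predicted assembly N50 of >80 Mb: once reads span sizeable fractions of a chromosome, the assembly N50 approaches the chromosome length itself.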

Devices

ONT now has a whole portfolio of products at different stages of the development process. I’ll segment them by their availability:

  • In use
    • MinION, currently R9.4, will switch later this month to the R9.5 pore to support 1D^2; the 1D kits will still run on the R9.5 pore. I assume just a few modifications were made to the pore protein that attract/guide the tether from the 1D^2 complement strand to the pore. Currently users routinely get out between 5 and 10 Gbases; 15-20 Gbases is possible in-house.
    • The first PromethION flowcells are running in the field, but users are asked for patience as all the hardware is new (flowcell, chips, box) compared to the MinION. (This is not the case for the Flongle, which just ‘reuses’ MinION hardware, see below.) A full running setup with 48 PromethION flowcells is supposed to generate far more data than Illumina’s NovaSeq flagship.
  • First shipment later this month:
    • GridION is marketed as a device for users who want to be a service provider. Basically it is five MinIONs in one box plus a basecaller, so no hassle with updating five computers. The GridION will in the future be compatible with the high-performance PromethION flowcells.
    • VolTRAX (the automated sample prep) is already deployed in the field, but not yet with the reagents to actually carry out a library prep; however, the release of the reagents is imminent. It will be very exciting to see the first results from this, also as a way for the community to share and standardize DNA extraction protocols. The next stage is lyophilized reagents, which are scheduled for the end of 2017 and will be most welcome for users doing in-field experiments.
  • Somewhere in the pipeline
    • The Flongle is an adapter that allows a down-scaled version of the MinION flowcell to be used, thereby lowering the flowcell costs significantly. The device is in the process of regulatory approval and is thus ONT’s main entrance into the healthcare market, which Gordon Sanghera (CEO) described as much harder to get a hold of than the R&D market.
    • SmidgION uses the same lower-pore-density flowcell as the Flongle but allows direct connection to a phone.
    • An unnamed basecall dongle. Basecalling will in the future be done on dedicated hardware, a field-programmable gate array (FPGA), which should be able to basecall 1M bases per second. This will initially make users without access to compute clusters or remote basecalling pretty happy.

What will the coming year bring?

Compared to two years ago I saw a lot of cool applications and trials: Zamin Iqbal’s tuberculosis sequencing, Justin O’Grady’s urinary tract infection sequencing, Nick Loman and Josh Quick’s Zika Brazil project and Richard Leggett’s pre-term infant microbiome sequencing. It is clear the ONT platform is starting to mature and the initial hiccups are over. From a healthcare perspective these technologies are just waiting to be tried in the clinic; as Nick also asked, “Why has nobody sequenced in an NHS (National Health Service) lab yet?” So I expect presentations in this clinical direction at the 2018 conference. I also believe we will see large (nanopore-only) genome assemblies of plants, funky eukaryotes, phased human genomes as well as metagenome assemblies being produced by the platform, thanks to the increased throughput and read length. Finally, I expect base modifications (both on RNA and DNA) to receive quite some coverage because of the improvements in the basecallers and kit chemistries.

In conclusion, I very much look forward to the coming developments, as it’s clear that ONT is very passionate about R&D and continues to crank out improvements.

Disclaimer: I was an invited speaker at LC17 and received travel and accommodation subsidy.

*Update 05-09: Apparently new users did receive a MinION

2 Comments

Filed under Talk

SynBioBeta ’16 packed with innovation

Last Wednesday the SynBioBeta conference kicked off at Imperial College. The central topic was the current state of synthetic biology and how (commercial) value can be gained by supplying tools and platforms. In the keynote by Tom Knight from Ginkgo Bioworks, and in the chat afterwards with his former PhD student Ron Weiss (now professor at MIT), a few interesting points came by that illustrate the path synbio has taken over the last two decades.

Ginkgo Bioworks founder Tom Knight (Photo courtesy of Twist Bioscience)

Tom started off with a quote from Douglas Adams, “if you try and take a cat apart to see how it works, the first thing you have on your hands is a non-working cat”, to illustrate the current (or not so far in the past) state of biology in general. He used the old systems-engineering example of the Boeing 777 to highlight where synbio should be going in his opinion. As in: 1. design using CAD, 2. build, 3. it works. So no more tinkering and endless design-build-test cycles. To get there he argued for an extra loop in the cycle, the simulate component, which would allow the end user to design and simulate a layout before actually building and testing it. However, he was quick to note that we are currently lacking a lot of insight into the biology of even a single simple cell, for example Mycoplasma mycoides, in which 149 of the 473 genes are essential for cell survival but remain of unknown function.

An improved version of the design cycle proposed by Tom Knight

The VLSI analogy was also brought up, and the panel noted that Voigt’s group came a step closer to this paradigm last week by rationally designing circuits and building them.

On the question of whether synbio is progressing fast enough, Ron Weiss replied that it is not “as fast as we want”; he recalled the last chapter of his thesis describing a synthetic biology programming language, which he laughingly categorized as “completely useless back then”. However, the state of mind back in the 2000s was “that within a year or 5” we would be able to build circuits with at least 30 gates (Voigt’s paper from last week showed a ‘consensus circuit’ containing 12 regulated promoters). Tom was a bit more optimistic, saying “You overestimate what is going to happen in 5 and underestimate what happens in 10 years”. The bottom line was the central need to be able to make robust systems that can work in the real world, and in order to do so more information is needed, such as whole-cell models. The session ended with a spot-on question from riboswitch pioneer Justin Gallivan, now at DARPA: “who is going to fund this research to gain basic knowledge?”. For example, who is going to elucidate the function of the 149 proteins of unknown function? One suggestion was that Venter should just pull out his checkbook again…

The investors’ perspective

Next on the program was the investors’ round table, geared towards the commercialization aspect of synthetic biology. It was debated whether the use of the term ‘synbio’ would negatively affect your final product or whether it would boost sales; Veronique de Bruijn from IcosCapital argued that the “uneducated audience will definitely judge you”, so she suggested using the term ‘synbio’ cautiously. Business models, an ever-debated topic, struck more consensus among the investors: they all agreed that it is difficult for a platform technology to go out on its own, as it can be extremely difficult to apply the technology to the optimal specific product. Karl Handelsman from Codon Capital noted that when you do have a product company it is important to engage with customers early, so you build something they really want. Related to this, he recalled that a product company on the West Coast typically exits for 60-80 million USD, so you should be aware that you can never raise more than ~9 million USD throughout the lifetime of a company. When it came to engaging with corporate venture capital, the panel unanimously praised their expertise, but care should be taken that your exit strategies are not limited by partnering up with them. The session was rounded off with a yes/no on the positive impact of Trump as president on synbio; only Karl was positive, because this would definitely direct lots and lots of funding towards life-on-Mars projects.

Applications of synbio by the industry

In the ‘Application Stack’ session five companies pitched their take on synbio and how it can be used as a value creator, ranging from bacterial vitamin production by Biosyntia to harnessing the power of Deinococcus. Particularly interesting was Darren Platts’ talk, showing one of Amyris’ in-house developed tools for language specification in synthetic biology. The actual challenge here was not writing the software (“pretty straightforward”); it was more difficult to get the users engaged in the project and adopting the tool. Their paper was published recently in ACS Synthetic Biology and the code will soon be released on GitHub.

Is there place for synbio in big pharma?

The final session of the first day was titled ‘Synthetic Biology for Biopharmaceuticals’, and here I found the talks of Marcelo Kern from GSK and Mark Wigglesworth from AstraZeneca especially interesting; they gave their ‘big pharma’ view on how to incorporate synthetic biology into established workflows. GSK for example focused on reducing the carbon footprint by replacing chemical synthesis with enzyme catalysis. Another great example was the use of CRISPR to generate drug-resistant cell lines for direct use by the in-house screening department.

The first day was rounded off by Emily Leproust from Twist Bioscience, announcing that they would be happy to take new orders from June (!) onwards.

The future of synbio

The second day started off with a discussion on ‘Futures: New Technologies and Applications’ by Gen9 CEO Kevin Munnelly and Sean Sutcliffe from Green Biologics. Both showed examples of their company partnering with academic institutions to get freedom to operate (FTO) into place. Sean also made an interesting comment that it took them about 4 years to commercialize “technology from the ’70s”, so he estimated it would take around 12 years before the CRISPR technology, now trickling into the labs, can be used on production scale in the fermenters.

A fun and fast-paced ‘Lightning Talks’ session gave industry and non-profit captains a platform of exactly 5 minutes to pitch their vision. Randy Rettberg gave a fabulous speech about the impact of iGEM on the synbio sector and concluded that iGEM helps cultivate the future leaders of the field. Gernot Abel from Novozymes highlighted a ‘citizen science’ project where the ‘corporate’ Novozymes worked together with biohacker space Biologigaragen in Copenhagen to successfully construct an ethanol assay. Along these lines, Ellen Jorgensen from the non-profit Genspace pitched “why a new generation of bio-entrepreneurs are choosing community labs over incubators/accelerators”, at a price point of $100/month versus $1,000/month. Dek Woolfson (known for his computationally designed peptides and cages) gave an academically flavoured talk about BrisSynBio, but finished his pitch by noting they are looking for a seasoned business person to help make their tools available to a broader public.

Dek Woolfson was one of the few (still) academics on stage. (Photo by: Edinburgh iGEM)

What happens when synthetic biology and hardware meet?

The hardware and robot session showcased, among others, Biorealize, who are constructing a tabletop device to transform, incubate and lyse cells; Synthace, who just released the open-source data management platform Antha; and Bento Lab (currently running a very successful Kickstarter campaign), highlighting their mobile PCR workstation. An interesting question was posed at the end as to how much responsibility Bento Lab is putting on the DNA oligo synthesis companies by democratizing PCR and making it available to the general public. Bento Lab responded that they supply an extensive ethical guide with their product and that they don’t supply any reagents. Unfortunately this very interesting discussion was cut short due to a tight conference schedule.

Tabletop transformations, incubations and lysis in one go using Biorealize

A healthy microbiome using GMO’s?

In the final session of SynBioBeta a few examples of synbio applied to the microbiome were presented. Boston-based Synlogic is planning to start the IND (Investigational New Drug) process for their E. coli equipped with ammonia-degrading capabilities to combat urea cycle disorders. Xavier Duportet showed an example from Eligo Bioscience using CRISPR systems delivered by phages that selectively kill pathogens such as Staphylococcus aureus; part of this exciting work was published in 2014 in Nature Biotech using mouse models.

Eligo Bioscience and their CRISPR-delivered-by-phage technology (Photo by: Edinburgh iGEM)

After all these dazzling applications of synthetic biology, captain John Cumbers wrapped up SynBioBeta by announcing the next events: in San Francisco on the 4th-6th of October, and in London again around April next year.

Personally I think the conference did a great job of gathering together the industrial synthetic biology community, from early start-ups to big pharma. Although the sentiment is that we are not as far as we want to be, there have been considerable advancements over the last 15 years. From an investor’s perspective there is still a lot of uncertainty surrounding the run-time (and the inherently coupled rate of return) of synbio projects; however, the recent numbers on VC funding indicate there is an eagerness to take the leap. Taken together, it was a jam-packed two days of high-end, exciting synthetic biology applications, and it will be very interesting to see if Moore’s law also applies to synbio.

Disclaimer: The above write-up is strongly biased by my own interests, so refer to the Twitter hashtag #SBBUK16 to get a more colorful overview of the past two days.

3 Comments

Filed under Talk

Wrapup of Visualizing Biological Data ’15

From the 24th till the 27th of March I visited the Broad Institute of MIT and Harvard in Boston to attend the VizBi 2015 conference. The scope of this conference is to advance knowledge in the visualization of biological data; the 2015 iteration was the 6th international meeting. Here is a long-overdue recap of two talks that I thought were particularly interesting.

On Wednesday John Stasko kicked off as a keynote speaker with some very interesting notions about the different applications of visualization: it should either be for presentation (= explanatory) or for analysis (= exploratory). This difference is important since they both have their own goals. For example, when presenting results the goals are to clarify, focus, highlight, simplify and persuade; when analyzing data the goals are to explore, make decisions and derive statistical descriptors.

A good quote also passed by here: “If you know what you are looking for, you probably don’t need visualizations”.

So when you do decide you need a visualization, it is most useful for analysis (= exploratory); in this case it can help when you:

  • If you don’t know what you are looking for
  • Don’t have a priori questions
  • Want to know what questions to ask

Typically these kinds of visualizations show all variables, provide overview and detail, and facilitate comparison. A result of this setup is that “analysis visualizations” are difficult to understand: the underlying data is complex, so the visualization is probably also difficult to understand. This is not a bad thing, but the user needs to invest time to decode the visualization.

A perfect example of an exploratory visualization is the Attribute Explorer from 1998 [1]. Here the authors used the notion of compromise to analyze a dataset. For example, when searching for a new house you might look at the price, the commuting time and the number of bedrooms. However, when setting a hard limit on each of these attributes you might miss the house that has a perfect price and number of bedrooms but is just a 5-minute-longer commute away. The paper shows that by implementing coupled histograms the user is still able to see these “compromise solutions”. The PDF of the article is available here, showing some old-school histograms.
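To make the compromise idea concrete, here is a small sketch (mine, not from the paper; the house data and limits are invented) that flags items failing exactly one attribute limit, i.e. the “compromise solutions” a hard filter would silently discard but the Attribute Explorer’s coupled histograms keep visible:

```python
# Hypothetical houses: (price in kUSD, commute in minutes, bedrooms)
houses = {
    "A": (300, 30, 3),
    "B": (280, 35, 4),  # great price and bedrooms, slightly long commute
    "C": (450, 20, 2),
}

# Hard limits on each attribute
limits = {
    "price":    lambda h: h[0] <= 320,
    "commute":  lambda h: h[1] <= 30,
    "bedrooms": lambda h: h[2] >= 3,
}

for name, house in houses.items():
    failed = [attr for attr, ok in limits.items() if not ok(house)]
    if not failed:
        print(name, "matches all limits")
    elif len(failed) == 1:
        # A strict filter would hide this house entirely; coupled
        # histograms keep it visible as a near-miss.
        print(name, "is a compromise solution, only misses:", failed[0])
```

House B never shows up under strict filtering, even though relaxing the commute limit by five minutes would make it the best match; surfacing such near-misses is exactly the point of the Attribute Explorer.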

The concepts of the Attribute Explorer from 1998 are nowadays still relevant

The takeaway: a visualization is radically different when one presents the data versus when one analyzes the data.

An often-encountered problem with visualization is high data complexity: too high to visualize in one go. There are a few options to tackle this:

  • pack all the data in one complex representation
  • spread the data into multiple coordinated views (pixels are John’s friend)
  • use interaction to reveal different subsets of the data

When interacting with data, users have different intents; a 2007 InfoVis paper by Stasko [2] describes seven of them:

  1. Select
  2. Explore
  3. Reconfigure
  4. Encode
  5. Abstract/Elaborate
  6. Filter
  7. Connect

However, 95% of the interactions consist of tooltip & selection (to get details), navigation, and brushing & linking. This gives rise to a chicken-and-egg problem: why are only those intents used so extensively, and how can one make a visualization more effective?

An example Stasko showed was the use of a tablet[3] where there is a whole wealth of new gestures available, as is best illustrated in this video:

As a conclusion Stasko gives his own formula that captures the value of visualization.

Value of Visualization = Time + Insight + Essence + Confidence:

  • T: Ability to minimize the total time needed to answer a wide variety of questions about the data
  • I: Ability to spur and discover insights or insightful questions about the data
  • E: Ability to convey an overall essence or take-away sense of the data
  • C: Ability to generate confidence and trust about the data, its domain and context

On Friday Daniel Evanko (@devanko) from Nature Publishing Group spoke about the future of visualizations in publications. There is currently a big gap between all the rich data sets that people publish and the way these are incorporated into scientific articles. Evanko made some interesting points from a publisher’s perspective.

The current “rich” standards such as PDF are probably good for a dozen years to come; however, newer formats such as D3, Java and R can break or could become unsupported at any time in the future. On the other hand, basic print formats such as paper or microfilm can be kept for 100 years. Although this is a conservative standpoint, in my opinion it indeed makes sense to keep the long-term perspective in mind when releasing new publication formats, because who says Java will be supported in 20 years? However, I think that with thorough design the community should be able to come up with defined standards that have the lifetime of a microfilm.

Another argument Evanko used was the fact that the few papers published with interactive visualizations do not generate a lot of traffic, from which the conclusion was drawn that the audience doesn’t want these kinds of visualizations, so publishers will not offer them. Again, I feel we may be dealing with a chicken-and-egg problem here.

I’m grateful to the Otto Mønsteds Fond for providing support to attend VizBi ’15.

 

References

  1. Spence R, Tweedie L: The Attribute Explorer: information synthesis via exploration. Interact Comput 1998, 11:137–146.
  2. Yi JS, Kang YA, Stasko JT, Jacko JA: Toward a Deeper Understanding of the Role of Interaction in Information Visualization. IEEE Trans Vis Comput Graph 2007, 13:1224–1231.
  3. Sadana R, Stasko J: Designing and implementing an interactive scatterplot visualization for a tablet computer. Proc 2014 International Working Conference on Advanced Visual Interfaces (AVI) 2014:265–272.

 

Leave a Comment

Filed under Talk

Papers of VizBi ’14 keynote by Jeff Heer

Last week I attended the Visualizing Biological Data (VizBi) conference in Heidelberg. According to the website, the mission of the conference is to “bring together scientists, illustrators, and designers actively using or developing computational visualization to study a diverse range of biological data”. I can only say the organisers more than succeeded in this mission; it was indeed a very interdisciplinary, creative and interactive crowd. The first keynote of VizBi ’14 was presented by Jeffrey Heer from the University of Washington; since the papers he referred to are mostly published in non-PubMed journals, I tried to collect links to the PDFs here. Update: Added two more references supplied by Heer.

Continue reading

Leave a Comment

Filed under Talk