Biotechnology is one field that has profited immensely from the advent of ‘-omics’ technologies. For example, fluxomics brought a better understanding of the flow of metabolites through the cellular metabolic network, transcriptomics and proteomics helped us appreciate the multiple consequences of gene overexpression, and genomics allowed us to catalogue mutations in high-performer strains and transfer them to new strains. But, while our understanding of the system-wide effects of our current rather subtle strain modifications continuously grows, this change in scope does not yet extend to the manipulation of bio(techno)logical systems. Broadly speaking, we still paste a few genes into a plasmid, insert it into our pet-strain, and hope for the best.
However, if it is system-wide consequences that we need to take into consideration, it is most probably system-wide action that we need to take to design truly effective biotechnological systems. I argue that a major line of research in the coming years and decades will be to enable the engineering of biological systems and to provide the corresponding arsenal of tools. I predict enabling on three levels: technical, theoretical and organizational.
The first step in this transformation to system-level manipulation is easy to spot: de novo DNA synthesis. The technology as such is not new, but it has now become so cheap that it is about to make the crucial step out of industrial laboratories into the world of academic research as a routine tool. Moreover, the success of de novo DNA synthesis by assembling entire genes from oligonucleotides has re-ignited the search for novel DNA synthesis technologies that might in the future drive costs down further and make directly accessible DNA sequences longer. Currently, the price of a bp in a synthesized gene halves every 2–3 years, and it is only a question of time before the full force of this technology drives the art of cloning out of our laboratories.
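The consequence of such a halving time can be made concrete with a back-of-the-envelope projection. The sketch below assumes a hypothetical starting price and a fixed 2.5-year halving time purely for illustration; actual prices and trends vary by provider and technology.

```python
# Illustrative only: project synthesis cost per bp under an assumed
# constant halving time. Starting price and halving time are hypothetical.
def projected_cost_per_bp(start_cost, years, halving_time=2.5):
    """Cost per base pair after `years`, if it halves every `halving_time` years."""
    return start_cost * 0.5 ** (years / halving_time)

# A (hypothetical) 1.00 currency-unit/bp price today falls to
# 0.0625/bp after 10 years at a 2.5-year halving time.
print(projected_cost_per_bp(1.00, 10))  # 0.0625
```

A two-orders-of-magnitude price drop thus takes roughly 15–20 years at this pace, which is the timescale on which routine cloning would become uncompetitive.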
Of course, the next step will then be to go from single genes to entire novel pathways, novel genomic sections, or even entire genomes. The required methods are not yet routinely available, but improvements in DNA synthesis technology and the recruitment of the proper biological tools – such as homologous DNA recombination to assemble DNA fragments in vivo and in vitro – suggest that the corresponding problems will be solved rather quickly (Gibson et al., 2008).
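The logic of such recombination-based assembly can be caricatured in a few lines of code: fragments that share terminal homology are joined at their overlaps, each overlap counted once. This is a toy string-level sketch only; the fragment sequences and the fixed overlap length are invented for illustration, and real assembly is of course an enzymatic process, not a string operation.

```python
# Toy sketch of overlap-based assembly: greedily join fragments whose
# last `overlap` bases match the first `overlap` bases of another fragment.
# Sequences and overlap length are hypothetical examples.
def assemble(fragments, overlap=6):
    parts = list(fragments)
    contig = parts.pop(0)
    while parts:
        for i, frag in enumerate(parts):
            if contig[-overlap:] == frag[:overlap]:
                contig += frag[overlap:]   # merge, counting the overlap once
                parts.pop(i)
                break
        else:
            break  # no remaining fragment extends the current contig
    return contig

frags = ["ATGGCTCCGGTT", "CCGGTTTAAGCA", "TAAGCAGGGTTT"]
print(assemble(frags))  # ATGGCTCCGGTTTAAGCAGGGTTT
```

The point of the caricature is that once fragments carry designed homologous ends, assembly order is implicit in the sequence itself, which is what makes scaling from genes to genome-sized constructs plausible.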
But where a slow assembly process used to allow the step-by-step verification of underlying scientific assumptions, a 50 kbp sequence that will be delivered 4 weeks from now no longer allows such luxury. It will become exceedingly important (i) to integrate all available information into the sequence from the start; (ii) to use predictive tools to substitute for the missing information; (iii) to develop the corresponding experimental technology to obtain the remaining indispensable data rapidly; and (iv) to make sure that the host that is to receive the DNA sequence can read out the information in a predictable and reliable fashion.
The first point is at its heart an organizational challenge. The design-relevant information for one promoter, one ribosome binding site, one RNase site or one transcriptional terminator sequence might be available in the literature, but locating and exploiting it is currently an achievement in itself, and it is even more so for the 50 genes on the ordered DNA sequence. To make this information available for engineering – that is, for the rational selection of a standard element based on quantitative criteria – it needs to be collected centrally, as is the goal of the Registry of Standard Biological Parts (http://partsregistry.org). Of course, the 'standard' would not only encompass requirements for the presentation and completeness of data and information, but would extend to the data's generation, preferably as part of the operations of such a facility.
Reliable standardization will also be of crucial importance for the dependable use of computational tools to predict the behaviour of the functions encoded on our artificial DNA sequence (Marchisio and Stelling, 2008). But even more, just as CAD technology helps design anything from houses to mechanical engineering artifacts by hiding a huge body of knowledge behind the interface, a biotech-CAD will help to recruit the system design knowledge available from, for example, electrical engineering into a best practice for biological systems design: What is the most effective way to engineer an oscillation? Or a regulatory circuit that makes signal output dependent on the concomitant availability of two signals (an AND gate)?
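To illustrate what such a biotech-CAD might hide behind its interface, one can model a genetic AND gate minimally: output expression is appreciable only when both input signals activate their respective regulators. The Hill-function form and all parameter values below are illustrative assumptions, not measured data for any real part.

```python
# Minimal sketch of a genetic AND gate: output rate is the product of
# two Hill-type activation terms, so both inputs must be present.
# Hill coefficients, thresholds and rates are hypothetical.
def hill(signal, k=1.0, n=2.0):
    """Fractional promoter activation by one input signal (Hill kinetics)."""
    return signal**n / (k**n + signal**n)

def and_gate_output(signal_a, signal_b, max_rate=1.0):
    """High output only when both inputs exceed their thresholds."""
    return max_rate * hill(signal_a) * hill(signal_b)

for a, b in [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]:
    print(a, b, round(and_gate_output(a, b), 3))
```

A designer using a CAD layer would never write such equations by hand; the tool would select parts whose measured transfer functions approximate this truth table and compose them automatically.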
Clearly, much of the work required for the predicted transformation to the systems level is in a sense repetitive: for example, results on ribosome binding site strength need to be verified for many sites under various growth conditions and with sufficient redundancy to be statistically relevant, and long DNA sequences need to be assembled step-by-step from shorter DNA elements. This work is in principle excellently suited to automation. Its reduction to micrometre dimensions and its integration into microfluidic systems is then the crucial step that will make it affordable and allow the required parallelization.
While the first three points above are already well underway today, the future of point (iv) is much less clear. From an engineering point of view, the notion that every designed DNA sequence requires a tailor-made host to interact with acts as a real deterrent. It will be much more attractive to have hosts available that provide the required resources (e.g. protein synthesis) but otherwise behave orthogonally to (are not, or hardly, influenced by) the introduced DNA sequence. We are currently far from understanding the central rules of orthogonal design in biotechnology, but it seems safe to say that it will depend on our ability to manipulate chemical interfaces to remove and introduce interactions at will. A range of techniques that point the way to orthogonal engineering is already available, either removing unwanted interactions through genome reduction (Posfai et al., 2006) or working with in vitro systems (Jewett et al., 2008), or engineering novel orthogonal interactions by designing smart selection schemes and then recruiting evolution (Rackham and Chin, 2005).
In my view, to truly flourish, systems biotechnology will need the future toolbox of synthetic biology. The corresponding changes will turn biotechnology into a true engineering discipline and finally produce in full the industry we have been dreaming of for the last 30 years.