Research center: University of California Santa Cruz
Laboratory: Center for Biomolecular Science and Engineering
Authors: Brian J Raney
Primary citation: PMID 21037257
The project involves a worldwide consortium of research groups, and data generated from this project can be accessed through public databases.
Humans are estimated to have approximately 20,000 protein-coding genes (collectively known as the exome), which account for only about 1.5% of DNA in the human genome. The primary goal of the ENCODE project is to determine the role of the remaining component of the genome, much of which was traditionally regarded as "junk" (i.e. DNA that is not transcribed).
The activity and expression of protein-coding genes can be modulated by the regulome - a variety of DNA elements such as promoters, transcriptional regulatory sequences, and regions of chromatin structure and histone modification. It is thought that changes in the regulation of gene activity can disrupt protein production and cell processes and result in disease (ENCODE Project Background). Determining the location of these regulatory elements and how they influence gene transcription could reveal links between variations in the expression of certain genes and the development of disease.
ENCODE is intended as a comprehensive resource to allow the scientific community to better understand how the genome can affect human health, and to "stimulate the development of new therapies to prevent and treat these diseases".
To date, the project has facilitated the identification of novel DNA regulatory elements, providing new insights into the organization and regulation of our genes and genome, and into how differences in DNA sequence could influence disease. One main accomplishment described by the Consortium is that 80% of the human genome is now "associated with at least one biochemical function". Much of this functional non-coding DNA is involved in regulating the expression of coding genes. Furthermore, the expression of each coding gene is controlled by multiple regulatory sites located both near to and distant from the gene. These results demonstrate that gene regulation is far more complex than was previously believed.
ENCODE is implemented in three phases: the pilot phase, the technology development phase and the production phase.
During the pilot phase, the ENCODE Consortium evaluated strategies for identifying various types of genomic elements. The goal of the pilot phase was to identify a set of procedures that, in combination, could be applied cost-effectively and at high throughput to accurately and comprehensively characterize large regions of the human genome. The pilot phase was also expected to reveal gaps in the existing set of tools for detecting functional sequences, and to show whether some methods in use at the time were inefficient or unsuitable for large-scale application. Some of these problems were to be addressed in the ENCODE technology development phase (executed concurrently with the pilot phase), which aimed to devise new laboratory and computational methods that would improve the ability to identify known functional sequences or to discover new functional genomic elements. The results of the first two phases determined the best path forward for analysing the remaining 99% of the human genome in a cost-effective and comprehensive production phase.
The pilot phase tested and compared existing methods to rigorously analyze a defined portion of the human genome sequence. It was organized as an open consortium and brought together investigators with diverse backgrounds and expertise to evaluate the relative merits of each of a diverse set of techniques, technologies and strategies. The concurrent technology development phase of the project aimed to develop new high-throughput methods to identify functional elements. The goal of these efforts was to identify a suite of approaches that would allow the comprehensive identification of all the functional elements in the human genome. Through the ENCODE pilot project, the National Human Genome Research Institute (NHGRI) assessed the ability of different approaches to be scaled up for an effort to analyse the entire human genome and to find gaps in the ability to identify functional elements in genomic sequence.
The ENCODE pilot project process involved close interactions between computational and experimental scientists to evaluate a number of methods for annotating the human genome. A set of regions representing approximately 1% (30 Mb) of the human genome was selected as the target for the pilot project and was analyzed by all ENCODE pilot project investigators. All data generated by ENCODE participants on these regions was rapidly released into public databases.
For use in the ENCODE pilot project, defined regions of the human genome - corresponding to 30Mb, roughly 1% of the total human genome - were selected. These regions served as the foundation on which to test and evaluate the effectiveness and efficiency of a diverse set of methods and technologies for finding various functional elements in human DNA.
Prior to embarking upon the target selection, it was decided that 50% of the 30 Mb of sequence would be selected manually while the remaining sequence would be selected randomly. The two main criteria for manually selected regions were: 1) the presence of well-studied genes or other known sequence elements, and 2) the existence of a substantial amount of comparative sequence data. A total of 14.82 Mb of sequence was manually selected using this approach, consisting of 14 targets ranging in size from 500 kb to 2 Mb.
The remaining 50% of the 30 Mb consisted of thirty 500 kb regions selected according to a stratified random-sampling strategy based on gene density and level of non-exonic conservation. These criteria were chosen to ensure a good sampling of genomic regions varying widely in their content of genes and other functional elements. The human genome was divided into three parts - top 20%, middle 30%, and bottom 50% - along each of two axes: 1) gene density and 2) level of non-exonic conservation with respect to the orthologous mouse genomic sequence (see below), for a total of nine strata. From each stratum, three random regions were chosen for the pilot project. For strata underrepresented by the manual picks, a fourth region was chosen, resulting in a total of 30 regions. For all strata, a "backup" region was designated for use in the event of unforeseen technical problems.
In greater detail, the stratification criteria were as follows:
The above scores were computed within non-overlapping 500 kb windows of finished sequence across the genome, and used to assign each window to a stratum.
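The stratified design described above can be sketched in code. The following is an illustrative sketch, not the Consortium's actual selection pipeline: each non-overlapping 500 kb window is ranked on gene density and on non-exonic conservation, each rank is binned into the top 20% / middle 30% / bottom 50%, and the pair of bins assigns the window to one of nine strata, from which regions are then drawn at random.

```python
# Illustrative sketch of the pilot project's stratified random sampling.
# Window names, scores, and the function names are assumptions for the
# example; only the 20/30/50 binning and the nine strata come from the text.
import random

def bin_rank(rank_fraction):
    """Map a percentile rank (0.0 = highest-scoring window) to a bin:
    0 = top 20%, 1 = middle 30%, 2 = bottom 50%."""
    if rank_fraction < 0.20:
        return 0
    if rank_fraction < 0.50:
        return 1
    return 2

def stratify(windows):
    """windows: list of (name, gene_density, conservation) tuples.
    Returns a dict mapping (density_bin, conservation_bin) -> [window names],
    i.e. up to nine strata."""
    n = len(windows)
    by_density = sorted(windows, key=lambda w: -w[1])
    by_cons = sorted(windows, key=lambda w: -w[2])
    d_rank = {w[0]: i / n for i, w in enumerate(by_density)}
    c_rank = {w[0]: i / n for i, w in enumerate(by_cons)}
    strata = {}
    for name, _, _ in windows:
        key = (bin_rank(d_rank[name]), bin_rank(c_rank[name]))
        strata.setdefault(key, []).append(name)
    return strata

def pick_regions(strata, n_per_stratum=3, seed=0):
    """Draw up to n random windows from each stratum, mirroring the
    three-regions-per-stratum choice in the pilot design."""
    rng = random.Random(seed)
    return {k: rng.sample(v, min(n_per_stratum, len(v)))
            for k, v in strata.items()}
```

The two axes are ranked independently, so a window can sit in the top density bin but the bottom conservation bin; that independence is what yields the nine strata.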
The pilot phase was successfully completed, and the results were published in June 2007 in Nature and in a special issue of Genome Research. The results published in the Nature paper advanced the collective knowledge about human genome function in several major areas, including the following highlights:
In September 2007, the National Human Genome Research Institute (NHGRI) began funding the production phase of the ENCODE project. In this phase, the goal was to analyze the entire genome and to conduct "additional pilot-scale studies".
As in the pilot project, the production effort is organized as an open consortium. In October 2007, NHGRI awarded grants totaling more than $80 million over four years. The production phase also includes a Data Coordination Center, a Data Analysis Center, and a Technology Development Effort. At that time the project evolved into a truly global enterprise, involving 440 scientists from 32 laboratories worldwide. Once the pilot phase was completed, the project "scaled up" in 2007, profiting immensely from new-generation sequencing machines. And the data was, indeed, big: researchers generated around 15 terabytes of raw data.
By 2010, over 1,000 genome-wide data sets had been produced by the ENCODE project. Taken together, these data sets show which regions are transcribed into RNA, which regions are likely to control the genes that are used in a particular type of cell, and which regions are associated with a wide variety of proteins. The primary assays used in ENCODE are ChIP-seq, DNase I Hypersensitivity, RNA-seq, and assays of DNA methylation.
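Peak calls from assays such as ChIP-seq and DNase I hypersensitivity are distributed by ENCODE at UCSC in the "narrowPeak" (BED6+4) tab-separated format. As a sketch of how such data sets can be consumed, the following minimal parser follows the published narrowPeak column layout; the function and record names are this example's own.

```python
# Minimal narrowPeak reader: 10 tab-separated columns per data line
# (chrom, start, end, name, score, strand, signalValue, pValue, qValue, peak).
from collections import namedtuple

Peak = namedtuple(
    "Peak",
    "chrom start end name score strand signal p_value q_value summit")

def parse_narrowpeak(lines):
    """Yield one Peak per data line, skipping blanks, comments, and
    track/browser header lines."""
    for line in lines:
        if not line.strip() or line.startswith(("#", "track", "browser")):
            continue
        f = line.rstrip("\n").split("\t")
        yield Peak(f[0], int(f[1]), int(f[2]), f[3], int(f[4]), f[5],
                   float(f[6]), float(f[7]), float(f[8]), int(f[9]))
```

Per the format specification, `summit` is the offset of the point-source call from `start`, and the p- and q-values are -log10-transformed, with -1 meaning the value was not assigned.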
In September 2012, the project released a much more extensive set of results in 30 papers published simultaneously in several journals, including six in Nature, six in Genome Biology, and 18 in a special issue of Genome Research.
The authors described the production and the initial analysis of 1,640 data sets designed to annotate functional elements in the entire human genome, integrating results from diverse experiments within cell types, related experiments involving 147 different cell types, and all ENCODE data with other resources, such as candidate regions from genome-wide association studies (GWAS) and evolutionary constrained regions. Together, these efforts revealed important features about the organization and function of the human genome, which were summarized in an overview paper as follows:
The most striking finding was that the fraction of human DNA that is biologically active is considerably higher than even the most optimistic previous estimates. In an overview paper, the ENCODE Consortium reported that its members were able to assign biochemical functions to over 80% of the genome. Much of this was found to be involved in controlling the expression levels of coding DNA, which makes up less than 1% of the genome.
The most important new elements of the "encyclopedia" include:
Capturing, storing, integrating, and displaying the diverse data generated is challenging. The ENCODE Data Coordination Center (DCC) organizes and displays the data generated by the labs in the consortium, and ensures that the data meets specific quality standards when it is released to the public. Before a lab submits any data, the DCC and the lab draft a data agreement that defines the experimental parameters and associated metadata. The DCC validates incoming data to ensure consistency with the agreement. It then loads the data onto a test server for preliminary inspection, and coordinates with the labs to organize the data into a consistent set of tracks. When the tracks are ready, the DCC Quality Assurance team performs a series of integrity checks, verifies that the data is presented in a manner consistent with other browser data, and perhaps most importantly, verifies that the metadata and accompanying descriptive text are presented in a way that is useful to our users. The data is released on the public UCSC Genome Browser website only after all of these checks have been satisfied. In parallel, data is analyzed by the ENCODE Data Analysis Center, a consortium of analysis teams from the various production labs plus other researchers. These teams develop standardized protocols to analyze data from novel assays, determine best practices, and produce a consistent set of analytic methods such as standardized peak callers and signal generation from alignment pile-ups.
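The first step of that pipeline, checking a submission against its data agreement, can be illustrated with a small sketch. This is a hypothetical example of such a consistency check; the `DataAgreement` structure and the specific field names are assumptions for illustration, not the DCC's actual schema.

```python
# Hypothetical sketch of a DCC-style submission check: the agreement
# fixes the lab, assay, and required metadata fields, and a submission
# is validated against it before further processing.
from dataclasses import dataclass, field

@dataclass
class DataAgreement:
    lab: str
    assay: str          # e.g. "ChIP-seq"
    required_fields: set = field(
        default_factory=lambda: {"cell_type", "antibody", "replicate"})

def validate_submission(agreement, metadata):
    """Return a list of problems; an empty list means the submission is
    consistent with its data agreement and could move on to the test
    server for preliminary inspection."""
    problems = []
    if metadata.get("lab") != agreement.lab:
        problems.append("lab %r does not match the agreement" % metadata.get("lab"))
    if metadata.get("assay") != agreement.assay:
        problems.append("assay %r is not covered by the agreement" % metadata.get("assay"))
    for missing in sorted(agreement.required_fields - set(metadata)):
        problems.append("missing required metadata field: %s" % missing)
    return problems
```

Returning a list of problems rather than raising on the first error mirrors the coordination loop described above, where the DCC reports all inconsistencies back to the submitting lab at once.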
The National Human Genome Research Institute (NHGRI) has identified ENCODE as a "community resource project". This important concept was defined at an international meeting held in Ft. Lauderdale in January 2003 as a research project specifically devised and implemented to create a set of data, reagents, or other material whose primary utility will be as a resource for the broad scientific community. Accordingly, the ENCODE data release policy stipulates that data, once verified, will be deposited into public databases and made available for all to use without restriction.
To date, ENCODE has sampled 119 of 1,800 known TFs and general components of the transcriptional machinery on a limited number of cell types, and 13 of more than 60 currently known histone or DNA modifications across 147 cell types. DNaseI, FAIRE and extensive RNA assays across subcellular fractionations have been undertaken on many cell types, but overall these data reflect a minor fraction of the potential functional information encoded in the human genome. An important future goal will be to enlarge this dataset to additional factors, modifications and cell types, complementing the other related projects in this area (e.g., the Roadmap Epigenomics Project and the International Human Epigenome Consortium (IHEC)). These projects will constitute foundational resources for human genomics, allowing a deeper interpretation of the organization of gene and regulatory information and of the mechanisms of regulation, and thereby providing important insights into human health and disease.
The ENCODE Consortium is composed primarily of scientists funded by the US National Human Genome Research Institute (NHGRI). Other participants contributing to the project are brought into the Consortium or the Analysis Working Group.
The pilot phase consisted of eight research groups, with twelve groups participating in the ENCODE Technology Development Phase (ENCODE Pilot Project: Participants and Projects). After 2007, once the pilot phase was officially over, the number of participants grew to 440 scientists from 32 laboratories worldwide. At the moment the consortium consists of different centers performing different tasks (ENCODE Participants and Projects):
Although the consortium claims they are far from finished with the ENCODE project, many reactions to the published papers and the news coverage that accompanied the release were favorable. The Nature editors and ENCODE authors "... collaborated over many months to make the biggest splash possible and capture the attention of not only the research community but also of the public at large". The ENCODE project's claim that 80% of the human genome has biochemical function was rapidly picked up by the popular press who described the results of the project as leading to the death of junk DNA.
However, the conclusion that most of the genome is "functional" has been criticized on the grounds that the ENCODE project used far too liberal a definition of "functional": namely, that anything that is transcribed must be functional. This conclusion was reached despite the widely accepted view that many DNA elements that are transcribed, such as pseudogenes, are nevertheless non-functional. Furthermore, the ENCODE project emphasized sensitivity over specificity, leading to the detection of many false positives. The somewhat arbitrary choice of cell lines and transcription factors, as well as the lack of appropriate control experiments, were additional major criticisms of ENCODE, as random DNA mimics ENCODE-like "functional" behavior.

In response to these criticisms, it has been argued that the widespread transcription and splicing observed in the human genome is a more accurate indicator of genetic function than sequence conservation, and that much of the apparent junk DNA is involved in epigenetic regulation and was a necessary prerequisite for the development of complex organisms. In response to the complaints about the definition of the word "function", some have observed that this particular issue is more about definitional differences than about the strength of the project, which lay in providing data for further research on the biochemical activity of non-protein-coding parts of DNA. Though definitions are important and science is bounded by the limits of language, ENCODE seems to have been well received for its purpose: as of March 2013, there were more research papers using ENCODE data than papers arguing over the definition of function. Ewan Birney, one of the ENCODE researchers, commented on some of the reactions to the project.
He commented that "function" was used pragmatically to mean "specific biochemical activity", which covered different classes of assays: RNA, "broad" histone modifications, "narrow" histone modifications, DNaseI hypersensitive sites, transcription factor ChIP-seq peaks, DNaseI footprints, transcription-factor-bound motifs, and exons.
The project has also been criticized for its high cost (~$400 million in total) and for favoring big science, which takes money away from highly productive investigator-initiated research. The pilot ENCODE project cost an estimated $55 million; the scale-up was about $130 million, and the US National Human Genome Research Institute (NHGRI) could award up to $123 million for the next phase. Some researchers argue that a solid return on that investment has yet to be seen. There have been attempts to scour the literature for papers in which ENCODE plays a significant part; since 2012 there have been 300 such papers, 110 of which come from labs without ENCODE funding. An additional problem is that 'ENCODE' is not a name dedicated exclusively to the ENCODE project, so the word 'encode' comes up frequently throughout the genetics and genomics literature.
Another major critique is that the results do not justify the amount of time spent on the project and that the project itself is essentially unfinishable. Although often compared to the Human Genome Project (HGP), and even termed the HGP's next step, the HGP had a clear endpoint, which ENCODE currently lacks.
The authors seem to sympathize with the scientific concerns and at the same time try to justify their efforts by giving interviews and explaining ENCODE details not just to the scientific public, but also to mass media. They also claim that it took more than half a century from the realization that DNA is the hereditary material of life to the human genome sequence, so that their plan for the next century would be to really understand the sequence itself.
The Model Organism ENCyclopedia Of DNA Elements (modENCODE) project is a continuation of the original ENCODE project targeting the identification of functional elements in selected model organism genomes, specifically, Drosophila melanogaster and Caenorhabditis elegans. The extension to model organisms permits biological validation of the computational and experimental findings of the ENCODE project, something that is difficult or impossible to do in humans.
In late 2010, the modENCODE consortium unveiled its first set of results with publications on annotation and integrative analysis of the worm and fly genomes in Science. Data from these publications is available from the modENCODE web site.
At the moment, modENCODE is run as a Research Network, and the consortium is formed by 11 primary projects divided between worm and fly. The projects span the following:
The analysis of transcription factor binding data generated by the ENCODE project is currently available in the web-accessible repository Factorbook. Essentially, Factorbook.org is a wiki-based database of transcription factor-binding data generated by the ENCODE Consortium. In the first release, Factorbook contains: