Transcriptomics technologies
from Wikipedia

Transcriptomics technologies are the techniques used to study an organism's transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst non-coding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell. Transcriptomics technologies provide a broad account of which cellular processes are active and which are dormant. A major challenge in molecular biology is to understand how a single genome gives rise to a variety of cells. Another is how gene expression is regulated.

The first attempts to study whole transcriptomes began in the early 1990s. Subsequent technological advances since the late 1990s have repeatedly transformed the field and made transcriptomics a widespread discipline in biological sciences. There are two key contemporary techniques in the field: microarrays, which quantify a set of predetermined sequences, and RNA-Seq, which uses high-throughput sequencing to record all transcripts. As the technology improved, the volume of data produced by each transcriptome experiment increased. As a result, data analysis methods have steadily been adapted to more accurately and efficiently analyse increasingly large volumes of data. Transcriptome databases have consequently been growing bigger and more useful as transcriptomes continue to be collected and shared by researchers. It would be almost impossible to interpret the information contained in a transcriptome without the knowledge of previous experiments.

Measuring the expression of an organism's genes in different tissues or conditions, or at different times, gives information on how genes are regulated and reveals details of an organism's biology. It can also be used to infer the functions of previously unannotated genes. Transcriptome analysis has enabled greater insight into gene expression changes in different organisms and has been instrumental in the understanding of human disease. An analysis of gene expression in its entirety allows detection of broad coordinated trends which cannot be discerned by more targeted assays.

History

Transcriptomics method use over time. Published papers referring to RNA-Seq (black), RNA microarray (red), expressed sequence tag (blue), digital differential display (green), and serial/cap analysis of gene expression (yellow) since 1990.[1]

Transcriptomics has been characterised by the development of new techniques which have redefined what is possible every decade or so and rendered previous technologies obsolete. The first attempt at capturing a partial human transcriptome was published in 1991 and reported 609 mRNA sequences from the human brain.[2] In 2008, two human transcriptomes, composed of millions of transcript-derived sequences covering 16,000 genes, were published,[3][4] and by 2015 transcriptomes had been published for hundreds of individuals.[5][6] Transcriptomes of different disease states, tissues, or even single cells are now routinely generated.[6][7][8] This explosion in transcriptomics has been driven by the rapid development of new technologies with improved sensitivity and economy.[9][10][11][12]

Before transcriptomics


Studies of individual transcripts were being performed several decades before any transcriptomics approaches were available. Libraries of silkmoth mRNA transcripts were collected and converted to complementary DNA (cDNA) for storage using reverse transcriptase in the late 1970s.[13] In the 1980s, low-throughput sequencing using the Sanger method was used to sequence random transcripts, producing expressed sequence tags (ESTs).[2][14][15][16] The Sanger method of sequencing was predominant until the advent of high-throughput methods such as sequencing by synthesis (Solexa/Illumina). ESTs came to prominence during the 1990s as an efficient method to determine the gene content of an organism without sequencing the entire genome.[16] Amounts of individual transcripts were quantified using Northern blotting, nylon membrane arrays, and later reverse transcriptase quantitative PCR (RT-qPCR) methods,[17][18] but these methods are laborious and can only capture a tiny subsection of a transcriptome.[12] Consequently, the manner in which a transcriptome as a whole is expressed and regulated remained unknown until higher-throughput techniques were developed.

Early attempts


The word "transcriptome" was first used in the 1990s.[19][20] In 1995, one of the earliest sequencing-based transcriptomic methods was developed, serial analysis of gene expression (SAGE), which worked by Sanger sequencing of concatenated random transcript fragments.[21] Transcripts were quantified by matching the fragments to known genes. A variant of SAGE using high-throughput sequencing techniques, called digital gene expression analysis, was also briefly used.[9][22] However, these methods were largely overtaken by high throughput sequencing of entire transcripts, which provided additional information on transcript structure such as splice variants.[9]

Development of contemporary techniques

Comparison of contemporary methods[23][24][10]

| Property | RNA-Seq | Microarray |
| Throughput | 1 day to 1 week per experiment[10] | 1–2 days per experiment[10] |
| Input RNA amount | Low (~1 ng total RNA)[25] | High (~1 μg mRNA)[26] |
| Labour intensity | High (sample preparation and data analysis)[10][23] | Low[10][23] |
| Prior knowledge | None required, although a reference genome/transcriptome sequence is useful[23] | Reference genome/transcriptome required for probe design[23] |
| Quantitation accuracy | ~90% (limited by sequence coverage)[27] | >90% (limited by fluorescence detection accuracy)[27] |
| Sequence resolution | Can detect SNPs and splice variants (limited by sequencing accuracy of ~99%)[27] | Specialised arrays can detect mRNA splice variants (limited by probe design and cross-hybridisation)[27] |
| Sensitivity | ~1 transcript per million (limited by sequence coverage)[27] | ~1 transcript per thousand (limited by fluorescence detection)[27] |
| Dynamic range | 100,000:1 (limited by sequence coverage)[28] | 1,000:1 (limited by fluorescence saturation)[28] |
| Technical reproducibility | >99%[29][30] | >99%[31][32] |

The dominant contemporary techniques, microarrays and RNA-Seq, were developed in the mid-1990s and 2000s.[9][33] Microarrays, which measure the abundances of a defined set of transcripts via their hybridisation to an array of complementary probes, were first published in 1995.[34][35] Microarray technology allowed thousands of transcripts to be assayed simultaneously, at a greatly reduced cost per gene and with considerable labour savings.[36] Both spotted oligonucleotide arrays and Affymetrix high-density arrays were the methods of choice for transcriptional profiling until the late 2000s.[12][33] Over this period, a range of microarrays were produced to cover known genes in model or economically important organisms. Advances in array design and manufacture improved the specificity of probes and allowed more genes to be tested on a single array. Advances in fluorescence detection increased the sensitivity and measurement accuracy for low-abundance transcripts.[35][37]

RNA-Seq is accomplished by reverse transcribing RNA in vitro and sequencing the resulting cDNAs.[10] Transcript abundance is derived from the number of counts from each transcript. The technique has therefore been heavily influenced by the development of high-throughput sequencing technologies.[9][11] Massively parallel signature sequencing (MPSS) was an early example based on generating 16–20 bp sequences via a complex series of hybridisations,[38][note 1] and was used in 2004 to validate the expression of ten thousand genes in Arabidopsis thaliana.[39] The earliest RNA-Seq work was published in 2006 with one hundred thousand transcripts sequenced using 454 technology.[40] This was sufficient coverage to quantify relative transcript abundance. RNA-Seq began to increase in popularity after 2008 when new Solexa/Illumina technologies allowed one billion transcript sequences to be recorded.[4][10][41][42] This yield now allows for the quantification and comparison of human transcriptomes.[43]

Data gathering


Generating data on RNA transcripts can be achieved via either of two main principles: sequencing of individual transcripts (ESTs, or RNA-Seq) or hybridisation of transcripts to an ordered array of nucleotide probes (microarrays).[23]

Isolation of RNA


All transcriptomic methods require RNA to first be isolated from the experimental organism before transcripts can be recorded. Although biological systems are incredibly diverse, RNA extraction techniques are broadly similar and involve mechanical disruption of cells or tissues, disruption of RNase with chaotropic salts,[44] disruption of macromolecules and nucleotide complexes, separation of RNA from undesired biomolecules including DNA, and concentration of the RNA via precipitation from solution or elution from a solid matrix.[44][45] Isolated RNA may additionally be treated with DNase to digest any traces of DNA.[46] It is necessary to enrich messenger RNA as total RNA extracts are typically 98% ribosomal RNA.[47] Enrichment for transcripts can be performed by poly-A affinity methods or by depletion of ribosomal RNA using sequence-specific probes.[48] Degraded RNA may affect downstream results; for example, mRNA enrichment from degraded samples will result in the depletion of 5' mRNA ends and an uneven signal across the length of a transcript. Snap-freezing of tissue prior to RNA isolation is typical, and care is taken to reduce exposure to RNase enzymes once isolation is complete.[45]

Expressed sequence tags


An expressed sequence tag (EST) is a short nucleotide sequence generated from a single RNA transcript. RNA is first copied as complementary DNA (cDNA) by a reverse transcriptase enzyme before the resultant cDNA is sequenced.[16] Because ESTs can be collected without prior knowledge of the organism from which they come, they can be made from mixtures of organisms or environmental samples.[49][16] Although higher-throughput methods are now used, EST libraries commonly provided sequence information for early microarray designs; for example, a barley microarray was designed from 350,000 previously sequenced ESTs.[50]

Serial and cap analysis of gene expression (SAGE/CAGE)

Summary of SAGE. Within the organisms, genes are transcribed and spliced (in eukaryotes) to produce mature mRNA transcripts (red). The mRNA is extracted from the organism, and reverse transcriptase is used to copy the mRNA into stable double-stranded cDNA (ds-cDNA; blue). In SAGE, the ds-cDNA is digested by restriction enzymes (at location 'X' and 'X'+11) to produce 11-nucleotide "tag" fragments. These tags are concatenated and sequenced using long-read Sanger sequencing (different shades of blue indicate tags from different genes). The sequences are deconvoluted to find the frequency of each tag. The tag frequency can be used to report on transcription of the gene that the tag came from.[51]

Serial analysis of gene expression (SAGE) was a development of EST methodology to increase the throughput of the tags generated and allow some quantitation of transcript abundance.[21] cDNA is generated from the RNA but is then digested into 11 bp "tag" fragments using restriction enzymes that cut DNA at a specific sequence, and 11 base pairs along from that sequence. These cDNA tags are then joined head-to-tail into long strands (>500 bp) and sequenced using low-throughput, but long read-length methods such as Sanger sequencing. The sequences are then divided back into their original 11 bp tags using computer software in a process called deconvolution.[21] If a high-quality reference genome is available, these tags may be matched to their corresponding gene in the genome. If a reference genome is unavailable, the tags can be directly used as diagnostic markers if found to be differentially expressed in a disease state.[21]
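
The deconvolution step amounts to splitting each long Sanger read into fixed-length tags and tallying their frequencies. The sketch below illustrates the idea; the tag sequences and the tag-to-gene lookup table are invented for the example rather than taken from any real SAGE library.

```python
from collections import Counter

TAG_LENGTH = 11  # SAGE tags are 11 bp in this description

# Hypothetical lookup table mapping tags to genes; in practice this would be
# derived from a reference genome or transcript database.
TAG_TO_GENE = {
    "CATGAAATTCG": "geneA",
    "CATGGGCCTTA": "geneB",
}

def deconvolute(concatemer: str, tag_length: int = TAG_LENGTH) -> Counter:
    """Split a concatenated SAGE read into fixed-length tags and count them."""
    tags = [concatemer[i:i + tag_length]
            for i in range(0, len(concatemer) - tag_length + 1, tag_length)]
    return Counter(tags)

# Toy concatemer made of two copies of the geneA tag and one geneB tag.
read = "CATGAAATTCG" "CATGGGCCTTA" "CATGAAATTCG"
for tag, n in deconvolute(read).items():
    print(TAG_TO_GENE.get(tag, "unknown"), tag, n)
```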

The cap analysis gene expression (CAGE) method is a variant of SAGE that sequences tags from the 5' end of an mRNA transcript only.[52] Therefore, the transcriptional start site of genes can be identified when the tags are aligned to a reference genome. Identifying gene start sites is of use for promoter analysis and for the cloning of full-length cDNAs.

SAGE and CAGE methods produce information on more genes than was possible when sequencing single ESTs, but sample preparation and data analysis are typically more labour-intensive.[52]

Microarrays

Summary of DNA Microarrays. Within the organisms, genes are transcribed and spliced (in eukaryotes) to produce mature mRNA transcripts (red). The mRNA is extracted from the organism and reverse transcriptase is used to copy the mRNA into stable ds-cDNA (blue). In microarrays, the ds-cDNA is fragmented and fluorescently labelled (orange). The labelled fragments bind to an ordered array of complementary oligonucleotides, and measurement of fluorescent intensity across the array indicates the abundance of a predetermined set of sequences. These sequences are typically specifically chosen to report on genes of interest within the organism's genome.[51]

Principles and advances


Microarrays usually consist of a grid of short nucleotide oligomers, known as "probes", typically arranged on a glass slide.[53] Transcript abundance is determined by hybridisation of fluorescently labelled transcripts to these probes.[54] The fluorescence intensity at each probe location on the array indicates the transcript abundance for that probe sequence.[54] Groups of probes designed to measure the same transcript (i.e., probes that hybridise to different regions of the same transcript) are usually referred to as "probesets".

Microarrays require some genomic knowledge from the organism of interest, for example, in the form of an annotated genome sequence, or a library of ESTs that can be used to generate the probes for the array.[36]

Methods


Microarrays for transcriptomics typically fall into one of two broad categories: low-density spotted arrays or high-density short probe arrays. Transcript abundance is inferred from the intensity of fluorescence derived from fluorophore-tagged transcripts that bind to the array.[36]

Spotted low-density arrays typically feature picolitre[note 2] drops of a range of purified cDNAs arrayed on the surface of a glass slide.[55] These probes are longer than those of high-density arrays and cannot identify alternative splicing events. Spotted arrays use two different fluorophores to label the test and control samples, and the ratio of fluorescence is used to calculate a relative measure of abundance.[56] High-density arrays use a single fluorescent label, and each sample is hybridised and detected individually.[57] High-density arrays were popularised by the Affymetrix GeneChip array, where each transcript is quantified by several short 25-mer probes that together assay one gene.[58]
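
For two-colour spotted arrays, the relative measure of abundance is typically the log2 ratio of the test and control channel intensities, centred to remove overall dye bias. The sketch below illustrates this calculation; the spot names and intensity values are invented for the example.

```python
import math
from statistics import median

# Hypothetical background-corrected intensities for a handful of spots:
# Cy5 = test sample, Cy3 = control sample.
spots = {
    "probe_1": (1200.0, 400.0),
    "probe_2": (350.0, 330.0),
    "probe_3": (90.0, 880.0),
}

# Log2 ratio per spot; positive values mean higher abundance in the test sample.
log_ratios = {name: math.log2(cy5 / cy3) for name, (cy5, cy3) in spots.items()}

# Simple global normalisation: subtract the median log ratio to correct for
# overall dye or loading differences between the two channels.
centre = median(log_ratios.values())
normalised = {name: value - centre for name, value in log_ratios.items()}

for name, value in normalised.items():
    print(f"{name}: log2(test/control) = {value:+.2f}")
```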

NimbleGen arrays were high-density arrays produced by a maskless photochemistry method, which permitted flexible manufacture of arrays in small or large numbers. These arrays had hundreds of thousands of 45- to 85-mer probes and were hybridised with a one-colour labelled sample for expression analysis.[59] Some designs incorporated up to 12 independent arrays per slide.

RNA-Seq

Summary of RNA-Seq. Within the organisms, genes are transcribed and spliced (in eukaryotes) to produce mature mRNA transcripts (red). The mRNA is extracted from the organism, fragmented, and copied into stable ds-cDNA (blue). The ds-cDNA is sequenced using high-throughput, short-read sequencing methods. These sequences can then be aligned to a reference genome sequence to reconstruct which genome regions were being transcribed. This data can be used to annotate where expressed genes are, their relative expression levels, and any alternative splice variants.[51]

Principles and advances


RNA-Seq refers to the combination of a high-throughput sequencing methodology with computational methods to capture and quantify the transcripts present in an RNA extract.[10] The nucleotide sequences generated are typically around 100 bp in length, but can range from 30 bp to over 10,000 bp depending on the sequencing method used. RNA-Seq relies on deep sampling of the transcriptome with many short fragments, allowing computational reconstruction of the original RNA transcripts by aligning reads to a reference genome or to each other (de novo assembly).[9] Both low-abundance and high-abundance RNAs can be quantified in an RNA-Seq experiment (dynamic range of 5 orders of magnitude), a key advantage over microarray transcriptomes. In addition, input RNA amounts are much lower for RNA-Seq (nanogram quantities) than for microarrays (microgram quantities), which allows examination of the transcriptome even at single-cell resolution when combined with amplification of cDNA.[25][60] Theoretically, there is no upper limit of quantification in RNA-Seq, and background noise is very low for 100 bp reads in non-repetitive regions.[10]

RNA-Seq may be used to identify genes within a genome, or identify which genes are active at a particular point in time, and read counts can be used to accurately model the relative gene expression level. RNA-Seq methodology has constantly improved, primarily through the development of DNA sequencing technologies to increase throughput, accuracy, and read length.[61] Since the first descriptions in 2006 and 2008,[40][62] RNA-Seq has been rapidly adopted and overtook microarrays as the dominant transcriptomics technique in 2015.[63]

The quest for transcriptome data at the level of individual cells has driven advances in RNA-Seq library preparation methods, resulting in dramatic advances in sensitivity. Single-cell transcriptomes are now well described and have even been extended to in situ RNA-Seq where transcriptomes of individual cells are directly interrogated in fixed tissues.[64]

Methods


RNA-Seq was established in concert with the rapid development of a range of high-throughput DNA sequencing technologies.[65] However, before the extracted RNA transcripts are sequenced, several key processing steps are performed. Methods differ in the use of transcript enrichment, fragmentation, amplification, single or paired-end sequencing, and whether to preserve strand information.[65]

The sensitivity of an RNA-Seq experiment can be increased by enriching classes of RNA that are of interest and depleting known abundant RNAs. The mRNA molecules can be separated using oligonucleotide probes which bind their poly-A tails. Alternatively, ribo-depletion can be used to specifically remove abundant but uninformative ribosomal RNAs (rRNAs) by hybridisation to probes tailored to the taxon's specific rRNA sequences (e.g. mammal rRNA, plant rRNA). However, ribo-depletion can also introduce some bias via non-specific depletion of off-target transcripts.[66] Small RNAs, such as microRNAs, can be purified based on their size by gel electrophoresis and extraction.

Since mRNAs are longer than the read-lengths of typical high-throughput sequencing methods, transcripts are usually fragmented prior to sequencing.[67] The fragmentation method is a key aspect of sequencing library construction. Fragmentation may be achieved by chemical hydrolysis, nebulisation, sonication, or reverse transcription with chain-terminating nucleotides.[67] Alternatively, fragmentation and cDNA tagging may be done simultaneously by using transposase enzymes.[68]

During preparation for sequencing, cDNA copies of transcripts may be amplified by PCR to enrich for fragments that contain the expected 5' and 3' adapter sequences.[69] Amplification is also used to allow sequencing of very low input amounts of RNA, down to as little as 50 pg in extreme applications.[70] Spike-in controls of known RNAs can be used for quality control assessment, to check library preparation and sequencing in terms of GC-content, fragment length, and the bias due to fragment position within a transcript.[71] Unique molecular identifiers (UMIs) are short random sequences that are used to individually tag sequence fragments during library preparation so that every tagged fragment is unique.[72] UMIs provide an absolute scale for quantification, allow correction for amplification bias introduced during library construction, and enable accurate estimation of the initial sample size. UMIs are particularly well suited to single-cell RNA-Seq transcriptomics, where the amount of input RNA is restricted and extended amplification of the sample is required.[73][74][75]
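
A minimal sketch of UMI-based counting is shown below; the gene assignments and UMI sequences are invented for illustration. Reads that share both a gene assignment and a UMI are collapsed to a single molecule before counting.

```python
from collections import defaultdict

# Each tuple is (gene the read was assigned to, UMI attached during library prep).
# PCR duplicates share both fields and are collapsed into one molecule.
reads = [
    ("GAPDH", "AACGT"),
    ("GAPDH", "AACGT"),   # PCR duplicate of the read above
    ("GAPDH", "TTGCA"),
    ("ACTB", "GGATC"),
]

molecules = defaultdict(set)
for gene, umi in reads:
    molecules[gene].add(umi)

# The molecule count per gene is the number of distinct UMIs observed.
for gene, umis in molecules.items():
    print(gene, len(umis))
```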

Once the transcript molecules have been prepared they can be sequenced in just one direction (single-end) or both directions (paired-end). Single-end sequencing is usually quicker and cheaper than paired-end sequencing and is sufficient for quantification of gene expression levels. Paired-end sequencing produces more robust alignments and assemblies, which is beneficial for gene annotation and transcript isoform discovery.[10] Strand-specific RNA-Seq methods preserve the strand information of a sequenced transcript.[76] Without strand information, reads can be aligned to a gene locus but do not indicate in which direction the gene is transcribed. Stranded RNA-Seq is useful for deciphering the transcription of genes that overlap in different directions and for making more robust gene predictions in non-model organisms.[76]

Sequencing technology platforms commonly used for RNA-Seq[77][78]

| Platform | Commercial release | Typical read length | Maximum throughput per run | Single-read accuracy | RNA-Seq runs deposited in the NCBI SRA (Oct 2016)[79] |
| 454 Life Sciences | 2005 | 700 bp | 0.7 Gbp | 99.9% | 3,548 |
| Illumina | 2006 | 50–300 bp | 900 Gbp | 99.9% | 362,903 |
| SOLiD | 2008 | 50 bp | 320 Gbp | 99.9% | 7,032 |
| Ion Torrent | 2010 | 400 bp | 30 Gbp | 98% | 1,953 |
| PacBio | 2011 | 10,000 bp | 2 Gbp | 87% | 160 |

Legend: NCBI SRA – National Center for Biotechnology Information Sequence Read Archive.

RNA-Seq currently relies on copying RNA molecules into cDNA prior to sequencing; the downstream sequencing platforms are therefore the same for transcriptomic and genomic data, and the development of DNA sequencing technologies has been a defining feature of RNA-Seq.[78][80][81] Direct sequencing of RNA using nanopore sequencing represents a current state-of-the-art RNA-Seq technique.[82][83] Nanopore sequencing of RNA can detect modified bases that would otherwise be masked when sequencing cDNA, and it also eliminates amplification steps that can introduce bias.[11][84]

The sensitivity and accuracy of an RNA-Seq experiment are dependent on the number of reads obtained from each sample.[85][86] A large number of reads are needed to ensure sufficient coverage of the transcriptome, enabling detection of low abundance transcripts. Experimental design is further complicated by sequencing technologies with a limited output range, the variable efficiency of sequence creation, and variable sequence quality. Added to those considerations is that every species has a different number of genes and therefore requires a tailored sequence yield for an effective transcriptome. Early studies determined suitable thresholds empirically, but as the technology matured suitable coverage was predicted computationally by transcriptome saturation. Somewhat counter-intuitively, the most effective way to improve detection of differential expression in low expression genes is to add more biological replicates rather than adding more reads.[87] The current benchmarks recommended by the Encyclopedia of DNA Elements (ENCODE) Project are for 70-fold exome coverage for standard RNA-Seq and up to 500-fold exome coverage to detect rare transcripts and isoforms.[88][89][90]
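
These coverage targets translate into read numbers via the relation coverage = (number of reads × read length) / target size. The sketch below works this through; the 50 Mb target size is an assumption used only for illustration.

```python
def reads_required(coverage: float, target_size_bp: float, read_length_bp: float) -> float:
    """Estimate the number of reads needed to reach a given fold coverage."""
    return coverage * target_size_bp / read_length_bp

# Assumed values for illustration: ~50 Mb of expressed exonic sequence and 100 bp reads.
target = 50e6
read_len = 100

print(f"70x:  {reads_required(70, target, read_len):,.0f} reads")   # ~35 million
print(f"500x: {reads_required(500, target, read_len):,.0f} reads")  # ~250 million
```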

Data analysis


Transcriptomics methods are highly parallel and require significant computation to produce meaningful data for both microarray and RNA-Seq experiments.[91][92][93][94][95] Microarray data is recorded as high-resolution images, requiring feature detection and spectral analysis.[96] Microarray raw image files are each about 750 MB in size, while the processed intensities are around 60 MB in size. Multiple short probes matching a single transcript can reveal details about the intron-exon structure, requiring statistical models to determine the authenticity of the resulting signal. RNA-Seq studies produce billions of short DNA sequences, which must be aligned to reference genomes composed of millions to billions of base pairs. De novo assembly of reads within a dataset requires the construction of highly complex sequence graphs.[97] RNA-Seq operations are highly repetitious and benefit from parallelised computation but modern algorithms mean consumer computing hardware is sufficient for simple transcriptomics experiments that do not require de novo assembly of reads.[98] A human transcriptome could be accurately captured using RNA-Seq with 30 million 100 bp sequences per sample.[85][86] This example would require approximately 1.8 gigabytes of disk space per sample when stored in a compressed fastq format. Processed count data for each gene would be much smaller, equivalent to processed microarray intensities. Sequence data may be stored in public repositories, such as the Sequence Read Archive (SRA).[99] RNA-Seq datasets can be uploaded via the Gene Expression Omnibus.[100]

Image processing

Microarray and sequencing flow cell. Microarrays and RNA-Seq rely on image analysis in different ways. In a microarray chip, each spot is a defined oligonucleotide probe, and fluorescence intensity directly detects the abundance of a specific sequence (Affymetrix). In a high-throughput sequencing flow cell, spots are sequenced one nucleotide at a time, with the colour at each round indicating the next nucleotide in the sequence (Illumina HiSeq). Other variations of these techniques use more or fewer colour channels.[51][101]

Microarray image processing must correctly identify the regular grid of features within an image and independently quantify the fluorescence intensity for each feature. Image artefacts must be additionally identified and removed from the overall analysis. Fluorescence intensities directly indicate the abundance of each sequence, since the sequence of each probe on the array is already known.[102]

The first steps of RNA-seq also include similar image processing; however, conversion of images to sequence data is typically handled automatically by the instrument software. The Illumina sequencing-by-synthesis method results in an array of clusters distributed over the surface of a flow cell.[103] The flow cell is imaged up to four times during each sequencing cycle, with tens to hundreds of cycles in total. Flow cell clusters are analogous to microarray spots and must be correctly identified during the early stages of the sequencing process. In Roche's pyrosequencing method, the intensity of emitted light determines the number of consecutive nucleotides in a homopolymer repeat. There are many variants on these methods, each with a different error profile for the resulting data.[104]

RNA-Seq data analysis


RNA-Seq experiments generate a large volume of raw sequence reads which have to be processed to yield useful information. Data analysis usually requires a combination of bioinformatics software tools (see also List of RNA-Seq bioinformatics tools) that vary according to the experimental design and goals. The process can be broken down into four stages: quality control, alignment, quantification, and differential expression.[105] Most popular RNA-Seq programs are run from a command-line interface, either in a Unix environment or within the R/Bioconductor statistical environment.[94]

Quality control


Sequence reads are not perfect, so the accuracy of each base in the sequence needs to be estimated for downstream analyses. Raw data is examined to ensure: quality scores for base calls are high, the GC content matches the expected distribution, short sequence motifs (k-mers) are not over-represented, and the read duplication rate is acceptably low.[86] Several software options exist for sequence quality analysis, including FastQC and FaQCs.[106][107] Abnormalities may be removed (trimming) or tagged for special treatment during later processes.
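
A stripped-down version of such checks is sketched below: it computes the mean Phred quality and GC content of each read and flags reads below a quality threshold. The FASTQ-style records are hard-coded for illustration; real data would be streamed from a file, and dedicated tools such as FastQC report many more metrics.

```python
# Minimal per-read quality checks on FASTQ-style records.
# Phred+33 encoding: quality score = ASCII code of the character minus 33.
records = [
    ("read1", "ACGTACGTGG", "IIIIIIIIII"),   # high quality (Q40)
    ("read2", "ACGTNNGTAA", "!!!!!!!!!!"),   # very low quality (Q0)
]

def mean_quality(qual: str) -> float:
    return sum(ord(c) - 33 for c in qual) / len(qual)

def gc_content(seq: str) -> float:
    return sum(seq.count(base) for base in "GC") / len(seq)

for name, seq, qual in records:
    q = mean_quality(qual)
    gc = gc_content(seq)
    flag = "PASS" if q >= 20 else "FAIL"   # threshold chosen arbitrarily for the sketch
    print(f"{name}: meanQ={q:.1f} GC={gc:.0%} {flag}")
```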

Alignment


In order to link sequence read abundance to the expression of a particular gene, transcript sequences are aligned to a reference genome or de novo aligned to one another if no reference is available.[108][109][110] The key challenges for alignment software include sufficient speed to permit billions of short sequences to be aligned in a meaningful timeframe, flexibility to recognise and deal with intron splicing of eukaryotic mRNA, and correct assignment of reads that map to multiple locations. Software advances have greatly addressed these issues, and increases in sequencing read length reduce the chance of ambiguous read alignments. A list of currently available high-throughput sequence aligners is maintained by the EBI.[111][112]

Alignment of primary transcript mRNA sequences derived from eukaryotes to a reference genome requires specialised handling of intron sequences, which are absent from mature mRNA.[113] Short read aligners perform an additional round of alignments specifically designed to identify splice junctions, informed by canonical splice site sequences and known intron splice site information. Identification of intron splice junctions prevents reads from being misaligned across splice junctions or erroneously discarded, allowing more reads to be aligned to the reference genome and improving the accuracy of gene expression estimates. Since gene regulation may occur at the mRNA isoform level, splice-aware alignments also permit detection of isoform abundance changes that would otherwise be lost in a bulked analysis.[114]
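
In SAM/BAM output from splice-aware aligners, a read spanning a junction is typically recorded with an 'N' operation in its CIGAR string marking the skipped intron. The sketch below, with an invented CIGAR string and coordinates, parses such a string to recover the exonic blocks a read covers; it is a simplification that handles only the operations needed for the example.

```python
import re

def exonic_blocks(start: int, cigar: str):
    """Return (start, end) genome intervals covered by a read, skipping introns.

    'M' consumes the reference and represents aligned bases; 'N' consumes the
    reference but marks a skipped intron. Insertions ('I') and soft clips ('S')
    do not consume the reference. Simplified: 'D', '=' and 'X' are ignored here.
    """
    blocks, pos, block_start = [], start, start
    for length, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar):
        length = int(length)
        if op == "M":
            pos += length
        elif op == "N":                 # intron: close the current exonic block
            blocks.append((block_start, pos))
            pos += length
            block_start = pos
        # I, S, H, P do not advance the reference position in this sketch
    blocks.append((block_start, pos))
    return blocks

# A 100 bp read split across a hypothetical 500 bp intron.
print(exonic_blocks(10_000, "60M500N40M"))  # [(10000, 10060), (10560, 10600)]
```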

De novo assembly can be used to align reads to one another to construct full-length transcript sequences without use of a reference genome.[115] Challenges particular to de novo assembly include larger computational requirements compared to a reference-based transcriptome, additional validation of gene variants or fragments, and additional annotation of assembled transcripts. The first metrics used to describe transcriptome assemblies, such as N50, have been shown to be misleading[116] and improved evaluation methods are now available.[117][118] Annotation-based metrics, such as the contig reciprocal best-hit count, are better assessments of assembly completeness. Once assembled de novo, the assembly can be used as a reference for subsequent sequence alignment methods and quantitative gene expression analysis.

RNA-Seq de novo assembly software

| Software | Released | Last updated | Computational efficiency | Strengths and weaknesses |
| Velvet-Oases[119][120] | 2008 | 2011 | Low, single-threaded, high RAM requirement | The original short-read assembler; now largely superseded. |
| SOAPdenovo-Trans[109] | 2011 | 2014 | Moderate, multi-threaded, medium RAM requirement | An early short-read assembler, updated for transcriptome assembly. |
| Trans-ABySS[121] | 2010 | 2016 | Moderate, multi-threaded, medium RAM requirement | Suited to short reads, can handle complex transcriptomes; an MPI-parallel version is available for computing clusters. |
| Trinity[122][97] | 2011 | 2017 | Moderate, multi-threaded, medium RAM requirement | Suited to short reads; can handle complex transcriptomes but is memory intensive. |
| miraEST[123] | 1999 | 2016 | Moderate, multi-threaded, medium RAM requirement | Can process repetitive sequences, combine different sequencing formats, and accepts a wide range of sequencing platforms. |
| Newbler[124] | 2004 | 2012 | Low, single-threaded, high RAM requirement | Specialised to accommodate the homopolymer sequencing errors typical of Roche 454 sequencers. |
| CLC Genomics Workbench[125] | 2008 | 2014 | High, multi-threaded, low RAM requirement | Graphical user interface; can combine diverse sequencing technologies; no transcriptome-specific features; a licence must be purchased before use. |
| SPAdes[126] | 2012 | 2017 | High, multi-threaded, low RAM requirement | Used for transcriptomics experiments on single cells. |
| RSEM[127] | 2011 | 2017 | High, multi-threaded, low RAM requirement | Can estimate the frequency of alternatively spliced transcripts; user friendly. |
| StringTie[98][128] | 2015 | 2019 | High, multi-threaded, low RAM requirement | Can combine reference-guided and de novo assembly methods to identify transcripts. |

Legend: RAM – random access memory; MPI – message passing interface; EST – expressed sequence tag.

Quantification

Heatmap identification of gene co-expression patterns across different samples. Each column contains the measurements for gene expression change for a single sample. Relative gene expression is indicated by colour: high-expression (red), median-expression (white) and low-expression (blue). Genes and samples with similar expression profiles can be automatically grouped (left and top trees). Samples may be different individuals, tissues, environments or health conditions. In this example, expression of gene set 1 is high and expression of gene set 2 is low in samples 1, 2, and 3.[51][129]

Quantification of sequence alignments may be performed at the gene, exon, or transcript level.[91][87] Typical outputs include a table of read counts for each feature supplied to the software, for example for the genes listed in a general feature format (GFF) file. Gene and exon read counts can be calculated quite easily using HTSeq, for example.[130] Quantitation at the transcript level is more complicated and requires probabilistic methods to estimate transcript isoform abundance from short-read information, for example using the Cufflinks software.[114] Reads that align equally well to multiple locations must be identified and either removed, aligned to one of the possible locations, or aligned to the most probable location.
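
At its core, gene-level quantification is a tally of uniquely assigned reads per feature, as in the simplified sketch below. The read-to-gene assignments are invented for illustration; tools such as HTSeq additionally resolve overlapping features and ambiguous alignments.

```python
from collections import Counter

# Hypothetical per-read gene assignments produced by an aligner plus annotation;
# None marks reads that aligned ambiguously and are excluded from the tally.
assignments = ["geneA", "geneA", "geneB", None, "geneC", "geneA", None]

counts = Counter(gene for gene in assignments if gene is not None)
for gene, n in sorted(counts.items()):
    print(gene, n)
```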

Some quantification methods can circumvent the need for an exact alignment of a read to a reference sequence altogether. The kallisto software combines pseudoalignment and quantification into a single step that runs two orders of magnitude faster than contemporary approaches such as the TopHat/Cufflinks workflow, with a much lower computational burden.[131]

Differential expression


Once quantitative counts of each transcript are available, differential gene expression is measured by normalising, modelling, and statistically analysing the data.[108] Most tools read a table of genes and read counts as their input, but some programs, such as Cuffdiff, accept read alignments in binary alignment map (BAM) format. The final outputs of these analyses are gene lists with associated pair-wise tests for differential expression between treatments and the probability estimates of those differences.[132]
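
The normalisation step can be illustrated with counts per million (CPM), which rescales raw counts so that samples of different sequencing depth become comparable; the sketch below then reports a simple fold change per gene. The counts are invented for illustration, and real analyses use dedicated statistical models (for example the negative-binomial models in edgeR or DESeq2) rather than raw fold changes.

```python
import math
from statistics import mean

# Hypothetical raw read counts: rows are genes, columns are samples
# (two control libraries followed by two treated libraries).
counts = {
    "geneA": [100, 120, 400, 380],
    "geneB": [50, 55, 48, 52],
}
# Library size per sample (here summed over the toy gene set only).
library_sizes = [sum(column) for column in zip(*counts.values())]

# Counts per million: rescales each sample so sequencing depth is comparable.
cpm = {gene: [1e6 * c / size for c, size in zip(row, library_sizes)]
       for gene, row in counts.items()}

for gene, values in cpm.items():
    control, treated = values[:2], values[2:]
    log2_fc = math.log2((mean(treated) + 1) / (mean(control) + 1))
    print(f"{gene}: log2 fold change (treated vs control) = {log2_fc:+.2f}")
```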

RNA-Seq differential gene expression software

| Software | Environment | Specialisation |
| Cuffdiff2[108] | Unix-based | Transcript analysis that tracks alternative splicing of mRNA |
| edgeR[93] | R/Bioconductor | Any count-based genomic data |
| DESeq2[133] | R/Bioconductor | Flexible data types, low replication |
| limma/voom[92] | R/Bioconductor | Microarray or RNA-Seq data, flexible experiment design |
| Ballgown[134] | R/Bioconductor | Efficient and sensitive transcript discovery, flexible |

Legend: mRNA – messenger RNA.

Validation


Transcriptomic analyses may be validated using an independent technique, for example, quantitative PCR (qPCR), which is recognisable and statistically assessable.[135] Gene expression is measured against defined standards both for the gene of interest and control genes. The measurement by qPCR is similar to that obtained by RNA-Seq wherein a value can be calculated for the concentration of a target region in a given sample. qPCR is, however, restricted to amplicons smaller than 300 bp, usually toward the 3' end of the coding region, avoiding the 3'UTR.[136] If validation of transcript isoforms is required, an inspection of RNA-Seq read alignments should indicate where qPCR primers might be placed for maximum discrimination. The measurement of multiple control genes along with the genes of interest produces a stable reference within a biological context.[137] qPCR validation of RNA-Seq data has generally shown that different RNA-Seq methods are highly correlated.[62][138][139]
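
Relative quantification from qPCR is commonly reported with the 2^-ΔΔCt method, which can be written out as a short calculation. The Ct values below are invented for illustration and a single reference gene is assumed, whereas the text above recommends measuring multiple control genes.

```python
# 2^-ΔΔCt relative quantification with a single reference gene (assumed).
# Ct values: lower Ct means more starting template.
ct = {
    "target_treated": 22.0, "reference_treated": 18.0,
    "target_control": 24.5, "reference_control": 18.2,
}

delta_ct_treated = ct["target_treated"] - ct["reference_treated"]   # 4.0
delta_ct_control = ct["target_control"] - ct["reference_control"]   # 6.3
delta_delta_ct = delta_ct_treated - delta_ct_control                # -2.3

fold_change = 2 ** (-delta_delta_ct)
print(f"Fold change (treated vs control) = {fold_change:.1f}")       # ~4.9
```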

Functional validation of key genes is an important consideration when planning follow-up work after a transcriptome study. Observed gene expression patterns may be functionally linked to a phenotype by an independent knock-down/rescue study in the organism of interest.[140]

Applications


Diagnostics and disease profiling


Transcriptomic strategies have seen broad application across diverse areas of biomedical research, including disease diagnosis and profiling.[10][141] RNA-Seq approaches have allowed large-scale identification of transcriptional start sites and have uncovered alternative promoter usage and novel splicing alterations. These regulatory elements are important in human disease and, therefore, defining such variants is crucial to the interpretation of disease-association studies.[142] RNA-Seq can also identify disease-associated single nucleotide polymorphisms (SNPs), allele-specific expression, and gene fusions, which contributes to the understanding of disease-causing variants.[143]

Retrotransposons are transposable elements which proliferate within eukaryotic genomes through a process involving reverse transcription. RNA-Seq can provide information about the transcription of endogenous retrotransposons that may influence the transcription of neighboring genes by various epigenetic mechanisms that lead to disease.[144] Similarly, the potential for using RNA-Seq to understand immune-related disease is expanding rapidly due to the ability to dissect immune cell populations and to sequence T cell and B cell receptor repertoires from patients.[145][146]

Human and pathogen transcriptomes


RNA-Seq of human pathogens has become an established method for quantifying gene expression changes, identifying novel virulence factors, predicting antibiotic resistance, and unveiling host-pathogen immune interactions.[147][148] A primary aim of this technology is to develop optimised infection control measures and targeted individualised treatment.[146]

Transcriptomic analysis has predominantly focused on either the host or the pathogen. Dual RNA-Seq has been applied to simultaneously profile RNA expression in both the pathogen and host throughout the infection process. This technique enables the study of the dynamic response and interspecies gene regulatory networks in both interaction partners from initial contact through to invasion and the final persistence of the pathogen or clearance by the host immune system.[149][150]

Responses to environment


Transcriptomics allows identification of genes and pathways that respond to and counteract biotic and abiotic environmental stresses.[151][140] The non-targeted nature of transcriptomics allows the identification of novel transcriptional networks in complex systems. For example, comparative analysis of a range of chickpea lines at different developmental stages identified distinct transcriptional profiles associated with drought and salinity stresses, including identifying the role of transcript isoforms of AP2-EREBP.[151] Investigation of gene expression during biofilm formation by the fungal pathogen Candida albicans revealed a co-regulated set of genes critical for biofilm establishment and maintenance.[152]

Transcriptomic profiling also provides crucial information on mechanisms of drug resistance. Analysis of over 1000 isolates of Plasmodium falciparum, a virulent parasite responsible for malaria in humans,[153] identified that upregulation of the unfolded protein response and slower progression through the early stages of the asexual intraerythrocytic developmental cycle were associated with artemisinin resistance in isolates from Southeast Asia.[154]

The use of transcriptomics is also important for investigating responses in the marine environment.[155] In marine ecology, "stress" and "adaptation" have been among the most common research topics, especially in relation to anthropogenic stress such as global change and pollution.[155] Most of the studies in this area have been carried out in animals, although invertebrates remain underrepresented.[155] One remaining issue is the shortage of functional genetic studies, which hampers gene annotation, especially for non-model species, and can lead to vague conclusions about the effects of the responses studied.[155]

Gene function annotation


All transcriptomic techniques have been particularly useful in identifying the functions of genes and identifying those responsible for particular phenotypes. Transcriptomics of Arabidopsis ecotypes that hyperaccumulate metals correlated genes involved in metal uptake, tolerance, and homeostasis with the phenotype.[156] Integration of RNA-Seq datasets across different tissues has been used to improve annotation of gene functions in commercially important organisms (e.g. cucumber)[157] or threatened species (e.g. koala).[158]

Assembly of RNA-Seq reads is not dependent on a reference genome[122] and so is ideal for gene expression studies of non-model organisms with absent or poorly developed genomic resources. For example, a database of SNPs used in Douglas fir breeding programs was created by de novo transcriptome analysis in the absence of a sequenced genome.[159] Similarly, genes that function in the development of cardiac, muscle, and nervous tissue in lobsters were identified by comparing the transcriptomes of the various tissue types without use of a genome sequence.[160] RNA-Seq can also be used to identify previously unknown protein-coding regions in existing sequenced genomes.

Non-coding RNA


Transcriptomics is most commonly applied to the mRNA content of the cell. However, the same techniques are equally applicable to non-coding RNAs (ncRNAs) that are not translated into a protein, but instead have direct functions (e.g. roles in protein translation, DNA replication, RNA splicing, and transcriptional regulation).[161][162][163][164] Many of these ncRNAs affect disease states, including cancer, cardiovascular, and neurological diseases.[165]

Transcriptome databases


Transcriptomics studies generate large amounts of data that have potential applications far beyond the original aims of an experiment. As such, raw or processed data may be deposited in public databases to ensure their utility for the broader scientific community. For example, as of 2018, the Gene Expression Omnibus contained millions of experiments.[166]

Transcriptomic databases

| Name | Host | Data | Description |
| Gene Expression Omnibus[100] | NCBI | Microarray, RNA-Seq | First transcriptomics database to accept data from any source. Introduced the MIAME and MINSEQE community standards, which define the experiment metadata needed to ensure effective interpretation and repeatability.[167][168] |
| ArrayExpress[169] | ENA | Microarray | Imports datasets from the Gene Expression Omnibus and accepts direct submissions. Processed data and experiment metadata are stored at ArrayExpress, while the raw sequence reads are held at the ENA. Complies with the MIAME and MINSEQE standards.[167][168] |
| Expression Atlas[170] | EBI | Microarray, RNA-Seq | Tissue-specific gene expression database for animals and plants. Displays secondary analyses and visualisations, such as functional enrichment of Gene Ontology terms, InterPro domains, or pathways. Links to protein abundance data where available. |
| Genevestigator[171] | Privately curated | Microarray, RNA-Seq | Contains manual curations of public transcriptome datasets, focusing on medical and plant biology data. Individual experiments are normalised across the full database to allow comparison of gene expression across diverse experiments. Full functionality requires a licence, with free access to limited functionality. |
| RefEx[172] | DDBJ | All | Human, mouse, and rat transcriptomes from 40 different organs. Gene expression is visualised as heatmaps projected onto 3D representations of anatomical structures. |
| NONCODE[173] | noncode.org | RNA-Seq | Non-coding RNAs (ncRNAs), excluding tRNA and rRNA. |

Legend: NCBI – National Center for Biotechnology Information; EBI – European Bioinformatics Institute; DDBJ – DNA Data Bank of Japan; ENA – European Nucleotide Archive; MIAME – Minimum Information About a Microarray Experiment; MINSEQE – Minimum Information about a high-throughput nucleotide SEQuencing Experiment.

from Grokipedia
Transcriptomics technologies refer to the suite of methods and tools designed to study the transcriptome, defined as the complete set of transcripts—primarily messenger RNAs (mRNAs), but also non-coding RNAs—produced by the genome in a cell, tissue, or organism at a specific point in time, offering a dynamic snapshot of gene expression and its regulation. These technologies have evolved to enable high-throughput quantification of transcript abundance, transcript variants, and spatial distributions, transforming many areas of biology and medicine by revealing how genes are activated or repressed in response to developmental cues, environmental stresses, or diseases.

The origins of transcriptomics trace back to the early 1990s, with initial efforts focusing on sequencing small sets of mRNAs, such as the first partial transcriptome comprising 609 mRNA sequences from human brain tissue in 1991, which laid the groundwork for understanding transcript diversity. By the late 1990s, advancements in hybridization-based techniques like microarrays allowed simultaneous quantification of thousands of predefined sequences through probe hybridization, marking a shift toward genome-wide profiling and enabling early applications in gene annotation and disease research. The advent of next-generation sequencing (NGS) in the mid-2000s introduced RNA sequencing (RNA-seq), which uses high-throughput sequencing to capture and quantify all transcripts without prior knowledge of their sequences, providing unprecedented resolution for detecting splice variants, low-abundance transcripts, and novel isoforms.

Since 2017, transcriptomics has seen rapid innovation, particularly with single-cell RNA sequencing (scRNA-seq), which profiles transcriptomes from individual cells to uncover cellular heterogeneity, such as in tumor microenvironments or developmental processes, using droplet-based platforms for scalable, unique molecular identifier (UMI)-tagged measurements. Long-read sequencing technologies, including Oxford Nanopore and PacBio platforms, have further advanced the field by enabling full-length transcript assembly with error rates below 0.02%, improving de novo transcriptome reconstruction and isoform resolution in complex genomes. Spatial transcriptomics emerged as a pivotal extension around 2020, integrating sequencing or imaging with positional barcoding to map transcripts in their native tissue context, facilitating studies of organ development, neurological disorders, and tissue architecture without dissociating cells.

Applications of these technologies span diagnostics, where RNA-seq signatures aid precision oncology by identifying therapeutic targets; functional genomics, through co-expression network analysis for inferring gene functions; and multi-omics integration, combining transcript data with proteomic or metabolomic measurements to model biological responses in health and disease. By 2008, human transcriptome studies had sequenced over 16,000 genes across millions of reads, and by 2015, datasets encompassed hundreds of individuals, underscoring a growth in data that continues to drive discoveries. Computational tools like DESeq2 for differential expression analysis and Seurat for scRNA-seq clustering have become essential for handling the vast data volumes, ensuring reproducibility through standards like MINSEQE and public repositories such as GEO and ENA.

History

Pre-transcriptomics era

The concept of messenger RNA (mRNA) as an intermediary carrier of genetic information from DNA to proteins was first proposed in 1961 by François Jacob and Jacques Monod in their seminal work on genetic regulation in bacteria, where they described mRNA as a short-lived template directing protein synthesis. This theoretical framework was experimentally validated later that year through pulse-labeling experiments by Sydney Brenner, Jacob, and Matthew Meselson, who identified an unstable RNA species in Escherichia coli that carried genetic information from genes to ribosomes. These discoveries established the role of mRNA in the flow of genetic information and shifted focus toward studying RNA as a key readout of gene expression, though initial methods were confined to microbial systems and indirect measurements like enzyme activity assays.

In the 1970s, Northern blotting emerged as the first direct technique for detecting and quantifying specific mRNA transcripts in eukaryotic cells. Developed by James Alwine, David Kemp, and George Stark in 1977, the method involves separating RNA by size via gel electrophoresis, transferring it to a membrane, and hybridizing it with radioactively labeled DNA or RNA probes complementary to the target transcript. This hybridization-based approach allowed researchers to assess mRNA abundance and size for individual genes, providing qualitative and semi-quantitative insights into expression patterns under different conditions. However, Northern blotting was inherently low-throughput, requiring separate experiments for each transcript and relying on hazardous radioactive materials, which limited its scalability.

Parallel advancements in the 1970s enabled the creation of complementary DNA (cDNA) libraries, which facilitated the cloning and sequencing of expressed genes from mRNA templates. Building on the discovery of reverse transcriptase by Howard Temin and David Baltimore in 1970, which allowed synthesis of DNA from RNA templates, researchers such as Norman Davidson developed methods to insert cDNA into bacterial plasmids for propagation and analysis. Early cDNA libraries, such as those constructed from globin mRNA in rabbit reticulocytes, permitted the isolation of full-length coding sequences and initial studies of gene structure and expression, marking a transition from biochemical assays to molecular approaches. These libraries were instrumental in characterizing eukaryotic mRNAs but suffered from biases toward highly abundant transcripts and incomplete reverse transcription.

Pre-high-throughput methods like Northern blotting and cDNA libraries shared critical limitations that precluded comprehensive transcriptome profiling: they were labor-intensive, requiring manual handling and weeks of processing per experiment; offered low throughput, analyzing only a handful of genes at a time; and lacked the sensitivity to detect low-abundance or rare transcripts across an entire genome. These constraints highlighted the need for systematic approaches to understand dynamic gene expression on a global scale. The initiation of the Human Genome Project in 1990 further underscored this gap, as sequencing the human genome revealed the static DNA blueprint but emphasized the necessity of expression data to elucidate gene function, regulation, and disease mechanisms. This realization paved the way for early transcriptomics methods focused on genome-wide profiling.

Early transcriptomics methods

The early transcriptomics methods of the 1990s marked a transition from gene-specific analyses to genome-scale profiling of expressed sequences, enabling the systematic identification of transcripts without complete genome knowledge. A foundational technique was the expressed sequence tag (EST) approach, introduced in 1991 by J. Craig Venter's team at the National Institutes of Health. This method involved constructing cDNA libraries from mRNA isolated from human brain tissue, followed by automated partial sequencing (typically 200–500 base pairs from the 5' end) of randomly selected clones to generate short, unique tags representing transcribed genes. ESTs facilitated rapid gene discovery, with the initial effort yielding over 600 tags that identified 337 novel human genes, including homologs of known proteins like RNA polymerase subunits. By 1995, expanded EST projects had sequenced over 174,000 tags from diverse tissues, encompassing 83 million nucleotides of cDNA and revealing patterns of gene expression across cell types, with estimates suggesting representation of approximately 50,000–100,000 genes. These efforts, reliant on prior RNA isolation and reverse transcription to cDNA, provided early snapshots of the transcriptome and were extended to model organisms for comparative studies. However, ESTs suffered from challenges including high redundancy, where abundant transcripts dominated libraries, and inconsistent quantitative accuracy due to biases in cloning efficiency and sequencing depth.

Complementing ESTs, serial analysis of gene expression (SAGE) was developed in 1995 by Victor Velculescu and colleagues at Johns Hopkins University. SAGE refined tag-based profiling by digesting cDNA with restriction enzymes to produce 10–14 bp tags from a defined position near the 3' end, ligating multiple tags into concatemers for efficient sequencing, and counting tag frequencies to quantify transcript abundance without needing gene-specific probes or a reference genome. The technique was first applied to human cells, identifying differentially expressed genes, and soon extended to yeast, where it cataloged nearly the entire transcriptome of Saccharomyces cerevisiae under standard growth conditions, detecting over 6,000 transcripts. While SAGE offered improved quantification over ESTs, it still grappled with tag ambiguity (where short sequences might match multiple genes) and redundancy from highly expressed transcripts, limiting resolution for low-abundance or isoform-specific expression. Nonetheless, both methods excelled in uncovering novel genes pre-genome assembly, with ESTs alone contributing to the annotation of thousands of previously unknown human sequences and accelerating the Human Genome Project's gene discovery phase.

Evolution of high-throughput techniques

The development of DNA microarrays in the late 1990s represented a foundational shift toward high-throughput transcriptomics, enabling the parallel quantification of thousands of gene expression levels. At Stanford University, Patrick Brown and colleagues pioneered spotted cDNA microarrays, which used robotic printing to array PCR-amplified DNA fragments on glass slides, allowing expression patterns to be monitored across entire genomes. Independently, Affymetrix introduced the GeneChip as the first commercial high-density oligonucleotide array platform in 1996, employing photolithography to synthesize on the order of 10^4 probes per chip for expression monitoring in human and model organisms. These innovations surpassed earlier qualitative methods like SAGE by providing quantitative, genome-scale data, albeit limited by probe design biases and static gene coverage.

The transition to next-generation sequencing (NGS) in the mid-2000s propelled transcriptomics into an era of unbiased, high-resolution analysis, with RNA-seq emerging as the dominant paradigm by 2008. Mortazavi et al. demonstrated the approach's potential by deeply sequencing mouse liver and brain transcriptomes, achieving quantitative mapping of exons, introns, and novel transcripts with a greater dynamic range than microarrays, detecting expression levels spanning over five orders of magnitude. This approach revolutionized accuracy by eliminating hybridization artifacts and enabling discovery of novel transcripts and low-abundance RNAs, while Illumina's reversible terminator chemistry was soon adapted for shorter-read, higher-throughput sequencing, scaling from millions to hundreds of millions of reads per run.

Milestones such as the ENCODE project, launched in 2003, further integrated transcriptomics into comprehensive genomic annotation by combining microarray-based RNA profiling with chromatin and protein-binding data across 1% of the human genome, later expanding to whole-genome analyses for functional element identification. The 1000 Genomes Project, initiated in 2008, enhanced variant-aware expression studies by cataloging over 88 million human genetic variants, which, when paired with expression data from diverse populations, revealed how common and rare variants influence allele-specific expression and splicing. Throughput advancements underscored this evolution, progressing from around 10^4 probes on early arrays to billions of reads on contemporary NGS platforms, dramatically improving coverage and cost-efficiency for complex transcriptomes. Illumina's HiSeq 2000, released in 2010, exemplified this leap, delivering up to 600 gigabases per run and enabling routine deep sequencing of polyadenylated and total RNA.

Fundamentals of Transcriptome Analysis

RNA isolation and preparation

RNA isolation is a critical initial step in transcriptomics workflows, aimed at extracting high-quality RNA from biological samples such as cells, tissues, or biofluids while minimizing degradation and contamination. This process ensures that downstream analyses, including sequencing or hybridization-based assays, yield reliable transcriptome profiles. The primary goal is to obtain pure RNA free from DNA, proteins, and other contaminants, as impurities can interfere with enzymatic reactions and bias results. One of the most widely adopted methods for RNA isolation is acid guanidinium thiocyanate-phenol-chloroform extraction, originally developed by Chomczynski and Sacchi in 1987 and commercialized as the TRIzol reagent in 1993. This single-step protocol involves lysing cells or tissues in a denaturing solution containing guanidinium thiocyanate to inactivate RNases, followed by phase separation using phenol and chloroform; RNA partitions into the aqueous phase, while DNA and proteins remain in the interphase and organic phase, respectively. The method is versatile, applicable to diverse sample types including mammalian tissues and bacteria, and typically yields 1–10 μg of total RNA per mg of tissue, varying by sample type such as liver (6–10 μg/mg) or muscle (1–5 μg/mg).

Key considerations during RNA isolation include maintaining an RNase-free environment, as these ubiquitous enzymes can rapidly degrade RNA. All reagents, equipment, and workspaces must be treated with RNase-inactivating agents such as diethyl pyrocarbonate (DEPC) or certified RNase-free, and procedures are performed on ice or at 4 °C to slow enzymatic activity. Sample-specific challenges also arise; for instance, fibrous samples such as plant material or muscle require mechanical homogenization by bead beating or grinding to disrupt cell walls and release RNA effectively. Improper handling can lead to incomplete lysis or shear-induced fragmentation, compromising yield and integrity.

Following isolation, RNA quality must be rigorously assessed to ensure suitability for transcriptomics applications. The RNA integrity number (RIN), introduced in 2006 as an automated metric derived from electrophoretic profiles on the Agilent Bioanalyzer, quantifies degradation on a scale of 1 to 10, with higher values indicating intact RNA based on the ratios of ribosomal RNA bands and baseline noise. A RIN value greater than 7 is generally recommended for RNA sequencing, as lower integrity correlates with fragmented transcripts and reduced detection of full-length mRNAs. Purity is evaluated by UV spectrophotometry, targeting A260/A280 ratios of 1.8–2.0 to confirm minimal protein contamination and A260/A230 ratios above 2.0 to exclude salts or organic carryover.

In transcriptomics, isolated total RNA often requires enrichment for specific fractions to focus on coding or regulatory transcripts. Poly(A) selection, using oligo(dT) beads to capture mRNA via its polyadenylated tail, isolates the mRNA fraction comprising about 1–5% of total RNA and is standard for eukaryotic poly(A)+ transcript profiling. Alternatively, for non-polyadenylated RNAs such as bacterial transcripts or certain long non-coding RNAs, total RNA is used directly, but ribosomal RNA (rRNA), which constitutes 80–90% of cellular RNA, must be depleted to enhance sequencing depth for low-abundance targets. The Ribo-Zero kit, introduced by Epicentre (now Illumina) around 2010, employs biotinylated probes that hybridize to rRNA for magnetic bead-based removal, achieving over 99% depletion efficiency in human, mouse, and bacterial samples.
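The purity and integrity criteria described above can be applied programmatically as a simple pre-sequencing quality gate. The following Python sketch is illustrative only; the function name and default thresholds are assumptions mirroring the commonly cited guidelines, not part of any instrument's software:

```python
def passes_rna_qc(rin, a260_a280, a260_a230,
                  min_rin=7.0, min_260_280=1.8, max_260_280=2.0, min_260_230=2.0):
    """Check an RNA sample against commonly recommended pre-sequencing thresholds.

    rin        -- RNA integrity number from an electrophoretic trace (scale 1-10)
    a260_a280  -- absorbance ratio flagging protein contamination
    a260_a230  -- absorbance ratio flagging salt or organic carryover
    """
    failures = []
    if rin < min_rin:
        failures.append(f"RIN {rin:.1f} below {min_rin}")
    if not (min_260_280 <= a260_a280 <= max_260_280):
        failures.append(f"A260/A280 {a260_a280:.2f} outside {min_260_280}-{max_260_280}")
    if a260_a230 < min_260_230:
        failures.append(f"A260/A230 {a260_a230:.2f} below {min_260_230}")
    return len(failures) == 0, failures


# Example: a slightly degraded sample with organic carryover fails two checks
ok, reasons = passes_rna_qc(rin=6.8, a260_a280=1.95, a260_a230=1.7)
print(ok, reasons)
```

In practice, laboratories tune such cutoffs to the sample type and library preparation method rather than applying them rigidly.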

Library preparation basics

Library preparation in transcriptomics involves converting purified RNA into a sequencing-compatible library, typically double-stranded cDNA with adapters for high-throughput sequencing platforms. This process is essential for all RNA-Seq-based methods, starting from isolated total RNA or mRNA as the input material. A well-designed workflow minimizes biases introduced during enzymatic reactions and ensures sufficient yield for sequencing, with typical input requirements ranging from 10–100 ng of high-quality RNA (RIN > 7).

The core steps begin with reverse transcription of RNA to complementary DNA (cDNA). This is achieved using reverse transcriptase enzymes, with primers such as oligo(dT) to target polyadenylated mRNA or random hexamers for broader coverage including non-poly(A) transcripts. Following first-strand synthesis, second-strand synthesis generates double-stranded cDNA, often via RNase H-mediated nick translation or template-switching mechanisms. The cDNA then undergoes end repair to create blunt ends suitable for ligation, in which platform-specific adapters (e.g., Y-adapters for Illumina sequencing) are attached to enable cluster amplification and sequencing. Fragmentation is performed to produce fragments of optimal size, typically 200–500 base pairs for short-read sequencing, using methods such as enzymatic digestion (e.g., RNase III for RNA or DNase for cDNA), heat- and cation-mediated chemical fragmentation, or physical shearing (e.g., sonication). Size selection follows, often via gel electrophoresis or magnetic bead-based purification, to isolate the desired fragment lengths and remove adapter dimers or primers. Amplification via PCR increases library yield, but excessive cycles can introduce duplication bias; to mitigate this, unique molecular identifiers (UMIs), short random barcodes added during reverse transcription, enable deduplication and accurate quantification, a strategy widely adopted since the 2010s. Common commercial kits streamline these steps, with Illumina's TruSeq RNA Library Prep Kit serving as a standard for stranded or unstranded libraries. The TruSeq protocol integrates fragmentation, end repair, and adapter ligation in a streamlined workflow, supporting inputs as low as 10 ng of total RNA in optimized variants while requiring high RNA integrity to avoid degradation artifacts.
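As a minimal illustration of how UMIs correct for PCR duplication during quantification, the sketch below collapses reads that share the same gene and UMI into a single molecule count; the read tuples are hypothetical and no particular file format or kit is assumed:

```python
from collections import defaultdict

def count_molecules(reads):
    """Collapse PCR duplicates using unique molecular identifiers (UMIs).

    reads -- iterable of (gene, umi) tuples, one per sequenced read.
    Returns gene -> number of distinct UMIs, approximating the number
    of original cDNA molecules before amplification.
    """
    umis_per_gene = defaultdict(set)
    for gene, umi in reads:
        umis_per_gene[gene].add(umi)
    return {gene: len(umis) for gene, umis in umis_per_gene.items()}


# GeneA yielded three reads but only two distinct UMIs, so two molecules are counted
reads = [("GeneA", "ACGT"), ("GeneA", "ACGT"), ("GeneA", "TTAG"), ("GeneB", "GGCA")]
print(count_molecules(reads))   # {'GeneA': 2, 'GeneB': 1}
```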

Core Transcriptomics Technologies

Expressed Sequence Tags (ESTs)

Expressed sequence tags (ESTs) represent one of the earliest high-throughput methods for transcriptomics, enabling the identification of expressed genes through partial sequencing of complementary DNA (cDNA) clones. Developed in the early 1990s, ESTs provided a cost-effective approach to catalog gene transcripts without requiring a prior genome sequence, playing a pivotal role in the Human Genome Project by facilitating gene discovery and annotation. By generating short sequence reads from the ends of cDNA inserts, EST projects amassed vast datasets that revealed the complexity of eukaryotic transcriptomes, including alternative splicing and tissue-specific expression patterns.

The workflow for EST generation begins with the isolation of messenger RNA (mRNA) from a biological sample, followed by reverse transcription to synthesize double-stranded cDNA. This cDNA is then cloned into bacterial vectors, such as phage or plasmid vectors, to create a library of expressed genes. Randomly selected clones undergo single-pass sequencing, typically targeting the 5' and 3' ends to produce tags of 200–800 base pairs in length; 5' ends are prioritized for capturing coding regions near the start of the transcript, while 3' ends often include poly(A) tail sequences useful for normalization. These single-pass sequences are deposited into public repositories such as dbEST, the EST division of GenBank, which was established in 1993 to organize and annotate the growing collection. By 2000, dbEST had accumulated nearly 2 million human ESTs, significantly aiding the annotation of the human genome by identifying approximately 80–90% of protein-coding genes.

ESTs offer key advantages for transcriptomics in resource-limited settings, particularly their independence from a sequenced genome, allowing rapid discovery in novel or non-model organisms. They also provide insights into gene structure, such as exon-intron boundaries and alternative isoforms, at a fraction of the cost of full-length sequencing. However, limitations include the inability to quantify transcript abundance reliably, as EST frequency reflects cloning biases as much as expression levels, and high redundancy, with 30–50% of tags often duplicating highly expressed genes. Additionally, sequencing errors from single-pass reads (up to 5–10% inaccuracy) and chimeric clones can introduce artifacts, necessitating bioinformatic filtering.

In contemporary applications, ESTs remain valuable for de novo transcriptome assembly in non-model organisms lacking reference genomes, where they support initial gene cataloging before more advanced sequencing. Tools like TGICL (TIGR Gene Indices Clustering Tools) facilitate this by clustering redundant ESTs based on pairwise similarity and assembling consensus sequences, reducing dataset size while preserving diversity; for instance, TGICL has been applied to assemble EST libraries from plants and other organisms, yielding unigene sets for functional annotation. This approach evolved into methods like serial analysis of gene expression (SAGE), which addressed quantification shortcomings by concatenating short tags.
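To illustrate the idea behind EST clustering tools such as TGICL, the toy Python sketch below groups ESTs whose k-mer content overlaps strongly. It is a simplified stand-in that uses Jaccard similarity of k-mer sets rather than the pairwise alignments those tools actually perform, and the sequences and thresholds are invented for demonstration:

```python
def kmer_set(seq, k=8):
    """Return the set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def cluster_ests(ests, k=8, min_jaccard=0.3):
    """Greedy clustering of ESTs by k-mer similarity (toy version).

    ests -- dict mapping EST id -> sequence string.
    An EST joins an existing cluster when the Jaccard similarity of its
    k-mer set against the cluster's accumulated k-mers exceeds min_jaccard.
    """
    clusters = []  # each cluster: (accumulated k-mer set, list of member ids)
    for est_id, seq in ests.items():
        kmers = kmer_set(seq, k)
        for cluster_kmers, members in clusters:
            jaccard = len(kmers & cluster_kmers) / len(kmers | cluster_kmers)
            if jaccard >= min_jaccard:
                members.append(est_id)
                cluster_kmers |= kmers   # grow the cluster's k-mer profile
                break
        else:
            clusters.append((set(kmers), [est_id]))
    return [members for _, members in clusters]


ests = {"est1": "ATGGCGTACGTTAGCATCG",
        "est2": "GCGTACGTTAGCATCGAAT",   # overlaps est1
        "est3": "TTTTACCCGGGTATATATA"}   # unrelated
print(cluster_ests(ests))                # [['est1', 'est2'], ['est3']]
```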

Serial Analysis of Gene Expression (SAGE) and variants

Serial analysis of gene expression (SAGE) is a tag-based transcriptomics method developed in 1995 that enables quantitative profiling of gene expression by generating and sequencing short tags from transcripts. The protocol begins with the synthesis of double-stranded cDNA from poly(A)+ RNA, followed by digestion with an anchoring enzyme such as NlaIII, which cuts at the CATG site closest to the 3' end of each transcript. This produces cDNA fragments that are then ligated to adapters containing recognition sites for a tagging enzyme, resulting in ditags consisting of two 10–14 base pair (bp) tags joined together and flanked by linker sequences. These ditags are subsequently amplified by PCR, cleaved to release the tags, and concatenated into longer chains for efficient cloning and sequencing. The frequency of each unique tag in the sequenced concatemers directly reflects the abundance of the corresponding transcript, allowing relative quantification without the need for prior knowledge of transcript sequences.

Variants of SAGE were introduced to enhance tag specificity and sensitivity for low-abundance transcripts. Long SAGE (LongSAGE), developed in 2002, extends tag length to 21 bp by using the type IIS tagging enzyme MmeI, which cuts further from its recognition site, improving unique mapping to transcripts and reducing ambiguity in assignment. SuperSAGE, introduced in the early 2000s, further refines the approach by employing the type III restriction enzyme EcoP15I to generate 26 bp tags, enabling detection of rare transcripts and facilitating discovery of novel transcripts through higher resolution. These modifications maintain the core ditagging and concatenation steps but increase the informational content per tag, making the method more suitable for complex transcriptomes.

In SAGE and its variants, quantification involves mapping tags to genes using the known position of the anchoring enzyme site, typically verified against reference genomes or transcript databases. For instance, each NlaIII-derived tag comprises the CATG anchoring site plus the immediately adjacent downstream sequence, allowing reliable gene assignment. Expression levels are normalized by dividing tag counts by the total number of tags sequenced in the library, providing relative abundance metrics that can be compared across samples. This approach offers semi-quantitative data superior to earlier methods like expressed sequence tags (ESTs), which primarily catalog transcripts qualitatively. SAGE has been particularly impactful in cancer research, exemplified by its early application in profiling colorectal tumors: SAGE libraries from normal colonic mucosa and colon cancer cells captured the expression patterns of over 45,000 transcripts, identifying more than 500 differentially expressed genes, including previously uncharacterized transcripts, and highlighting potential oncogenic mechanisms.
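The tag-counting and library-size normalization described above can be sketched in a few lines of Python; the tag sequences and tag-to-gene mapping here are purely illustrative, and real analyses map tags against curated reference databases:

```python
from collections import Counter

def sage_expression(tags, tag_to_gene):
    """Count SAGE tags and report abundance in tags per million.

    tags        -- list of short tag sequences extracted from concatemers
    tag_to_gene -- dict mapping tag sequence -> gene name (reference-derived)
    """
    counts = Counter(tags)
    total = sum(counts.values())
    per_gene = Counter()
    for tag, n in counts.items():
        per_gene[tag_to_gene.get(tag, "unmapped")] += n
    # normalize by library size so abundances are comparable across samples
    return {gene: round(1e6 * n / total, 1) for gene, n in per_gene.items()}


tags = ["CATGAAACCTTGGC"] * 6 + ["CATGTTTGGACCAA"] * 2               # toy tags
tag_to_gene = {"CATGAAACCTTGGC": "GeneX", "CATGTTTGGACCAA": "GeneY"}  # hypothetical mapping
print(sage_expression(tags, tag_to_gene))
```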

Microarrays

Microarrays are hybridization-based platforms that enable the simultaneous measurement of thousands of transcripts by detecting the binding of fluorescently labeled, RNA-derived targets to immobilized DNA probes on a solid substrate. The core principle involves the immobilization of probes, such as oligonucleotides or cDNA fragments, onto a glass slide or silicon chip, followed by hybridization with labeled complementary material derived from the sample RNA. After washing to remove unbound material, the bound targets are visualized through fluorescence detection, typically using laser scanning to quantify signal intensity, which correlates with transcript abundance.

The technology originated in the mid-1990s with the development of spotted cDNA microarrays, where robotic spotting deposited picoliter volumes of DNA probes onto slides, allowing the monitoring of expression patterns for many genes in parallel. Concurrently, high-density arrays emerged using photolithographic synthesis to create arrays with hundreds of thousands to millions of features, as pioneered by Affymetrix in its GeneChip system, which facilitated genome-wide expression analysis. These early platforms revolutionized transcriptomics by providing a high-throughput alternative to low-throughput methods like Northern blotting, enabling the study of differential expression across conditions such as disease states or treatments.

Key methods in microarray experiments include probe design, where short oligonucleotides (typically 25–60 nucleotides long) are selected for specificity to target transcripts, often incorporating multiple probes per gene to account for variability. Sample RNA is reverse-transcribed into cDNA, labeled with fluorescent dyes such as Cy3 (green) or Cy5 (red) for two-color designs, and hybridized to the array overnight. In two-color formats, like those from Agilent introduced in the early 2000s, two samples are co-hybridized on the same array, yielding intensity ratios that directly indicate relative expression levels between them. One-color designs, such as Affymetrix GeneChips, label each sample independently and compare absolute intensities across separate arrays, reducing dye bias but requiring more replicates. Post-hybridization, data are extracted as fluorescence intensities, background-subtracted, and normalized to enable differential expression analysis.

Advances in the 2000s included increased probe density, with arrays reaching over 1 million features and whole-genome coverage, and improved manufacturing via inkjet or maskless photolithographic synthesis for custom designs. These enhancements boosted sensitivity for detecting low-abundance transcripts and expanded applications to non-model organisms using expressed sequence tag (EST)-based probes. Despite these improvements, microarrays face inherent limitations, including cross-hybridization between similar sequences, which can lead to false positives, and reliance on predefined probe sets that miss novel transcripts or isoforms. Additionally, their dynamic range (typically 10^3 to 10^4-fold) is narrower than that of sequencing-based methods, limiting detection of highly variable expression levels. The widespread adoption of RNA sequencing after 2010 has led to a decline in microarray use for discovery transcriptomics, as sequencing offers unbiased, higher-resolution profiling without prior sequence knowledge.
However, microarrays persist in targeted applications, such as validation panels or fixed-probe sets for clinical diagnostics, exemplified by the NanoString nCounter system introduced in 2008, which uses color-coded barcodes for direct digital counting of up to 800 transcripts without enzymatic amplification. This hybrid approach combines hybridization specificity with amplification-free quantification, maintaining relevance in resource-limited settings.
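For two-color arrays, the basic quantification step reduces to a background-corrected log ratio per probe. The snippet below is a simplified sketch of that calculation; probe identifiers and intensity values are invented, and production pipelines add normalization steps (such as loess correction) that are omitted here:

```python
import math

def log2_expression_ratios(spots):
    """Compute background-corrected log2(Cy5/Cy3) ratios for a two-color array.

    spots -- dict: probe id -> (cy5_signal, cy5_background, cy3_signal, cy3_background)
    A positive value indicates higher expression in the Cy5-labelled sample
    relative to the Cy3-labelled reference.
    """
    ratios = {}
    for probe, (cy5, cy5_bg, cy3, cy3_bg) in spots.items():
        red = max(cy5 - cy5_bg, 1.0)      # floor at 1 to avoid log of zero or negatives
        green = max(cy3 - cy3_bg, 1.0)
        ratios[probe] = math.log2(red / green)
    return ratios


spots = {"probe_001": (5200, 200, 1300, 180),   # up in the Cy5 sample
         "probe_002": (900, 150, 3600, 160)}    # down in the Cy5 sample
print(log2_expression_ratios(spots))
```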

Bulk RNA-Seq

Bulk RNA-Seq, or bulk RNA sequencing, is the foundational next-generation sequencing (NGS) approach for profiling the transcriptome of a population of cells, providing a comprehensive snapshot of expression levels across thousands of transcripts simultaneously. Unlike earlier hybridization-based methods such as microarrays, which rely on predefined probes and offer relative quantification, bulk RNA-Seq enables unbiased discovery of novel transcripts and delivers expression measures based on sequencing read counts, making it the standard for population-averaged transcriptomic analysis. Introduced in 2008 using Illumina platforms, it generates short reads typically 50–150 base pairs in length, allowing high-throughput interrogation of complex eukaryotic transcriptomes at single-nucleotide resolution.

The core workflow of bulk RNA-Seq begins with RNA extraction from bulk tissue or cell populations, followed by conversion to complementary DNA (cDNA) via reverse transcription. The cDNA is then randomly fragmented into short pieces, adapters are ligated to the ends for sequencing compatibility, and the library is amplified via PCR before sequencing on platforms like Illumina, which produce millions to billions of reads per sample. This random fragmentation ensures even coverage across transcripts, while the resulting paired-end or single-end reads facilitate the detection of splicing events through reads that span exon-exon junctions. For polyadenylated mRNA enrichment, poly(A) selection is common, but to capture non-coding RNAs and precursors, rRNA depletion methods, such as hybridization-based subtraction kits, enable broader total-RNA coverage by removing the abundant ribosomal transcripts that comprise up to 80–90% of cellular RNA.

Advances in the late 2000s and early 2010s enhanced the fidelity of bulk RNA-Seq, particularly through stranded library preparation protocols that preserve information on the original strand of origin, crucial for distinguishing sense and antisense transcription. The dUTP method, incorporating deoxyuridine triphosphate during second-strand cDNA synthesis followed by enzymatic degradation of that strand, achieves high strand specificity with minimal bias and has become widely adopted in commercial kits. Post-sequencing, reads are aligned to a reference genome using spliced aligners such as STAR, developed in 2012, which efficiently handles large datasets and accurately maps reads across splice junctions by indexing the genome together with transcript annotations. Transcript abundance is commonly quantified as reads per kilobase of transcript per million mapped reads (RPKM), normalizing for gene length and total read count to enable comparison of expression levels across samples and experiments:

$$\text{RPKM} = \frac{\text{reads mapped to gene}}{\text{gene length in kb} \times \text{total aligned reads in millions}}$$

Since its inception, bulk RNA-Seq costs have plummeted due to improvements in sequencing throughput and chemistry, dropping from several thousand dollars per sample in 2008, when early experiments required substantial computational and reagent resources, to under $100 per sample in the 2020s for standard 50–100 million read depths, democratizing access for routine transcriptomic studies. This cost trajectory mirrors broader NGS economies of scale, with innovations like multiplexed barcoding further reducing per-sample expenses while maintaining sensitivity for detecting differentially expressed genes in biological contexts.
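The RPKM formula above translates directly into code. The following sketch uses toy read counts and gene lengths (real libraries contain tens of millions of mapped reads, and many modern pipelines prefer TPM or model-based estimates, but the arithmetic is the same):

```python
def rpkm(read_counts, gene_lengths_bp):
    """Compute RPKM for each gene.

    read_counts     -- dict: gene -> reads mapped to that gene
    gene_lengths_bp -- dict: gene -> exonic length in base pairs
    RPKM = reads / (gene length in kb * total mapped reads in millions)
    """
    total_reads_millions = sum(read_counts.values()) / 1e6
    return {
        gene: count / ((gene_lengths_bp[gene] / 1e3) * total_reads_millions)
        for gene, count in read_counts.items()
    }


counts = {"GeneA": 900_000, "GeneB": 100_000}   # toy counts; 1 million mapped reads total
lengths = {"GeneA": 3_000, "GeneB": 1_000}      # exonic lengths in bp
# values are inflated only because this toy transcriptome contains two genes
print(rpkm(counts, lengths))                    # GeneA: 300000.0, GeneB: 100000.0
```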

Advanced Transcriptomics Techniques

Single-cell RNA sequencing (scRNA-seq)

Single-cell RNA sequencing (scRNA-seq) enables the profiling of the transcriptome at the resolution of individual cells, revealing cellular heterogeneity that bulk RNA-Seq averages out across cell populations. By capturing expression profiles from thousands to millions of cells simultaneously, scRNA-seq identifies rare cell types, tracks developmental trajectories, and uncovers regulatory mechanisms driving cellular diversity in tissues. This technology builds on the foundational principles of bulk RNA-Seq but incorporates unique barcoding strategies to distinguish transcripts from individual cells.

Key developments in scRNA-seq include droplet-based methods, which emerged in the mid-2010s to achieve high-throughput profiling. Drop-seq, introduced by the McCarroll laboratory in 2015, uses microfluidic droplets to encapsulate single cells with barcoded mRNA-capture beads, allowing parallel analysis of thousands of cells in a single run. Building on this, 10x Genomics commercialized the Chromium platform in 2016, refining droplet encapsulation for scalable, reproducible scRNA-seq with gel bead-in-emulsion (GEM) technology that incorporates cell-specific and unique molecular identifier (UMI) barcodes during reverse transcription.

The typical workflow for droplet-based scRNA-seq begins with tissue dissociation into single-cell suspensions, followed by encapsulation in oil-emulsion droplets alongside barcoded beads. Within each droplet, cell lysis releases mRNA, which is captured by poly(dT) oligos on the beads linked to cell barcodes and UMIs; reverse transcription then generates cDNA barcoded at both the cellular and molecular level to mitigate PCR amplification biases. Droplets are broken, cDNA is pooled and amplified, libraries are prepared for next-generation sequencing, and reads are demultiplexed to assign transcripts to specific cells for downstream analysis.

Advances in scRNA-seq have dramatically increased throughput, with droplet platforms enabling profiling of up to 10^5 cells per run by the early 2020s through optimized microfluidics and deeper sequencing. More recent innovations, such as the GEM-X technology introduced by 10x Genomics in 2024, have further increased throughput to up to 160,000 cells per run while maintaining high sensitivity and reducing multiplet rates. Computational tools have addressed technical challenges such as dropout events, where low-abundance transcripts are missed; for instance, scImpute uses statistical modeling to impute these zeros by leveraging similar cells and dropout probability patterns, improving data accuracy without over-imputation. Despite these gains, mRNA capture efficiency remains limited at approximately 10–20%, leading to sparse data that requires careful normalization.

Applications of scRNA-seq span large-scale initiatives like the Human Cell Atlas, launched in 2016 as a global effort to map all human cell types using single-cell profiling to create reference atlases of tissues and organs. These efforts have facilitated discoveries in development, immunology, and disease, such as identifying novel cell states in tumors and immune responses. However, challenges persist in handling ambient RNA contamination and batch effects across experiments, underscoring the need for integrated computational pipelines.
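The barcode-driven demultiplexing at the heart of droplet scRNA-seq can be illustrated with a small sketch that builds a cell-by-gene count matrix from parsed reads. The read tuples, barcodes, and gene names below are hypothetical, and real pipelines (e.g., Cell Ranger or STARsolo) additionally perform barcode error correction and filtering of empty droplets:

```python
from collections import defaultdict

def build_count_matrix(reads):
    """Build a cell x gene UMI count matrix from parsed droplet reads.

    reads -- iterable of (cell_barcode, umi, gene) tuples obtained after
             alignment and barcode extraction.
    Reads sharing the same cell barcode, UMI, and gene are counted once,
    mirroring UMI-based deduplication.
    """
    seen = set()
    matrix = defaultdict(lambda: defaultdict(int))
    for cell, umi, gene in reads:
        key = (cell, umi, gene)
        if key in seen:
            continue                     # PCR duplicate of an already counted molecule
        seen.add(key)
        matrix[cell][gene] += 1
    return {cell: dict(genes) for cell, genes in matrix.items()}


reads = [
    ("AAACGG", "TTAC", "GeneA"), ("AAACGG", "TTAC", "GeneA"),   # duplicate molecule
    ("AAACGG", "GGCA", "GeneA"), ("TTTGCA", "ACAC", "GeneB"),
]
print(build_count_matrix(reads))   # {'AAACGG': {'GeneA': 2}, 'TTTGCA': {'GeneB': 1}}
```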

Spatial transcriptomics

Spatial transcriptomics encompasses technologies that profile gene expression while retaining spatial information from intact tissue sections, allowing researchers to link transcriptomic data to morphological and cellular contexts for studying tissue architecture and intercellular interactions. These methods address limitations of dissociated single-cell approaches by preserving native tissue organization, enabling the identification of spatially restricted expression patterns that drive biological processes. Pioneering work in this field demonstrated the feasibility of untargeted, genome-wide spatial profiling, laying the foundation for subsequent commercial and high-resolution advancements.

Core principles of spatial transcriptomics involve either imaging-based detection or sequencing-based capture of transcripts directly from tissue. In imaging-based approaches, such as single-molecule fluorescence in situ hybridization (smFISH), oligonucleotide probes hybridize to target RNAs in fixed tissue, with fluorescent signals imaged to determine transcript locations at single-molecule resolution. Capture-based methods, conversely, use arrays of spatially barcoded probes on slides to bind and reverse-transcribe mRNAs from permeabilized tissue sections; sequencing of the barcoded cDNAs then maps reads back to their tissue coordinates, aided by imaging of the stained section. A seminal capture-based technique, introduced by Ståhl et al. in 2016, employed an array of ~100 μm barcoded spots to capture polyadenylated transcripts, enabling the first unbiased spatial transcriptomes of mouse brain and human breast cancer sections with ~1,000 genes detected per spot. This approach was advanced and commercialized by 10x Genomics as Visium in 2019, featuring hexagonal arrays of 55 μm spots for broader tissue compatibility and higher throughput.

Subsequent innovations have pushed toward subcellular resolution and scalability. MERFISH, developed by Chen et al. in 2015, uses error-robust binary encoding in multiplexed single-molecule FISH to simultaneously image up to ~1,000 RNA species in single cells, achieving ~140 nm resolution through combinatorial probe design that minimizes detection errors. Complementing sequencing-based methods, Slide-seq, reported by Rodriques et al. in 2019, transfers mRNAs from fresh-frozen tissue onto a densely packed array of ~10 μm barcoded beads, yielding near-single-cell resolution and detecting ~1,000–5,000 genes per location in mouse cerebellum and hippocampus. In 2024, 10x Genomics introduced Visium HD, offering 2 μm bins and whole-transcriptome coverage, enabling fine-scale mapping of gene expression in tissues. These high-resolution variants have expanded applications to dynamic processes such as neuronal circuit mapping.

Spatial transcriptomics data often aggregate transcripts from multiple cells per capture spot, necessitating computational deconvolution to resolve cell-type compositions; algorithms such as SPOTlight employ non-negative matrix factorization seeded with single-cell references to estimate proportions and impute cell-type-level expression, enhancing resolution in heterogeneous tissues. In tumor microenvironments, such analyses have illuminated spatially patterned immune-tumor interactions, for example fibroblast-driven immune-exclusion zones that correlate with poor response to therapy, informing targeted treatment strategies. For instance, Visium profiling of colorectal tumors has mapped ligand-receptor pairs between malignant cells and stromal niches, revealing mechanisms of immune evasion at ~50 μm scale.
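Spot deconvolution can be illustrated with a deliberately simplified scheme: instead of SPOTlight's seeded NMF, the sketch below solves a non-negative least-squares regression of a spot's expression against cell-type reference profiles, which captures the same intuition of expressing each spot as a mixture of cell types. The reference matrix and spot values are toy numbers:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_spot(spot_expression, reference_profiles):
    """Estimate cell-type proportions for one capture spot.

    spot_expression    -- 1D array of gene expression measured at the spot
    reference_profiles -- 2D array (genes x cell types) of average expression
                          per cell type, e.g. derived from scRNA-seq data.
    Solves non-negative least squares and rescales the weights to proportions.
    """
    weights, _ = nnls(reference_profiles, spot_expression)
    total = weights.sum()
    return weights / total if total > 0 else weights


# Toy reference: 4 genes x 2 cell types (tumour-like and immune-like signatures)
reference = np.array([[10.0, 0.5],
                      [8.0,  1.0],
                      [0.5,  9.0],
                      [1.0, 12.0]])
spot = 0.7 * reference[:, 0] + 0.3 * reference[:, 1]   # spot mixing the two types 70/30
print(deconvolve_spot(spot, reference))                 # approximately [0.7, 0.3]
```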

Long-read and direct RNA sequencing

Long-read and direct RNA sequencing technologies enable the capture of full-length transcripts, providing comprehensive resolution of isoforms and structural features that short-read methods often fragment and misassemble. These platforms generate reads spanning thousands of nucleotides, facilitating accurate detection of splice isoforms, novel transcripts, and regulatory features such as alternative polyadenylation sites, which influence mRNA stability, localization, and translational efficiency. By overcoming the ambiguity in isoform reconstruction inherent to short-read bulk RNA-Seq, long-read approaches reveal the true complexity of eukaryotic and prokaryotic transcriptomes.

Pacific Biosciences' Single Molecule Real-Time (SMRT) sequencing, commercially introduced in 2011, employs circular consensus sequencing of full-length cDNA to produce highly accurate long reads. In this system, a DNA polymerase repeatedly synthesizes the same circularized cDNA molecule within zero-mode waveguides, incorporating fluorescently labeled nucleotides observed in real time to generate multiple passes per molecule, yielding consensus sequences with read lengths often exceeding 10 kb. The Iso-Seq protocol, developed in the early 2010s, supports transcriptomic applications by synthesizing double-stranded full-length cDNA from poly(A)-selected RNA, performing gel-based size selection to enrich isoforms longer than 1 kb, and preparing SMRTbell libraries for circularization and sequencing. Consensus accuracy in SMRT sequencing improves with pass coverage and can be roughly approximated as

$$1 - \frac{\epsilon}{\sqrt{N}},$$

where $\epsilon$ is the single-pass error rate and $N$ is the number of passes over the molecule.
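As a quick numerical illustration of this scaling, and only under the rough approximation given above rather than any vendor's actual consensus-calling model, the snippet below tabulates approximate consensus accuracy for an assumed 10% single-pass error rate:

```python
def approx_consensus_accuracy(per_pass_error, n_passes):
    """Approximate circular-consensus accuracy as 1 - epsilon / sqrt(N).

    per_pass_error -- assumed single-pass error rate epsilon
    n_passes       -- number of polymerase passes N over the same molecule
    This follows the rough approximation in the text; real consensus error
    models are considerably more detailed.
    """
    return 1.0 - per_pass_error / (n_passes ** 0.5)


for n in (1, 4, 10, 25):
    print(n, round(approx_consensus_accuracy(0.10, n), 4))   # accuracy rises with passes
```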