Third-generation sequencing

from Wikipedia

Third-generation sequencing (also known as long-read sequencing) is a class of DNA sequencing methods that produce substantially longer reads (ranging from 10 kb to >1 Mb in length)[1] than second-generation sequencing, also known as next-generation sequencing.[2] These methods emerged in 2008, characterized by technologies such as nanopore sequencing and single-molecule real-time sequencing, and continue to be developed.[2] The ability to sequence longer reads has critical implications for both genome science and the study of biology in general. In structural variant calling, third-generation sequencing has been found to outperform existing methods, even at low depth of sequencing coverage.[3] However, third-generation sequencing data have much higher error rates than previous technologies, which can complicate downstream genome assembly and analysis.[4] These technologies are under active development, and the high error rates are expected to improve.[1]

Current technologies

Sequencing technologies with a different approach than second-generation platforms were first described as "third-generation" in 2008–2009.[5]

Several companies are currently at the heart of third-generation sequencing technology development, namely Pacific Biosciences, Oxford Nanopore Technologies, Quantapore (California, USA), and Stratos Genomics (Washington, USA). These companies take fundamentally different approaches to sequencing single DNA molecules.

PacBio developed single-molecule real-time sequencing (SMRT), a platform based on the properties of zero-mode waveguides. Signals take the form of fluorescent light emitted as each nucleotide is incorporated by a DNA polymerase bound to the bottom of a zeptoliter-scale well.
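
Conceptually, the downstream readout is a pulse-to-base decoding: each of the four labeled nucleotides emits in its own detection channel, so the time-ordered pulse channels spell the read. A schematic sketch (the channel-to-base mapping here is invented, not PacBio's actual dye assignment):

```python
# Schematic pulse-to-base decoding for one ZMW. The channel numbering and
# dye assignment below are hypothetical, for illustration only.
CHANNEL_TO_BASE = {0: "A", 1: "C", 2: "G", 3: "T"}

def pulses_to_read(pulse_channels: list[int]) -> str:
    """Spell the read from the time-ordered channels of detected pulses."""
    return "".join(CHANNEL_TO_BASE[c] for c in pulse_channels)

print(pulses_to_read([2, 0, 3, 3, 0, 1, 0]))  # -> "GATTACA"
```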

Oxford Nanopore's technology involves passing a DNA molecule through a nanoscale pore and measuring the changes in ionic current across the pore as the molecule translocates; Quantapore has a different, proprietary nanopore approach. Stratos Genomics spaces out the DNA bases with polymeric inserts, "Xpandomers", to circumvent the signal-to-noise challenge of nanopore ssDNA reading.

Also notable is Helicos BioSciences' single-molecule fluorescence approach, but the company entered bankruptcy in the fall of 2015.

Advantages

Longer reads

In comparison to the second generation of sequencing technologies, third-generation sequencing has the obvious advantage of producing much longer reads. These longer read lengths are expected to alleviate numerous computational challenges in genome assembly, transcript reconstruction, and metagenomics, among other important areas of modern biology and medicine.[2]

Eukaryotic genomes, including those of primates and humans, are complex and contain large numbers of long repeated regions. Analyses based on the short reads of second-generation sequencing must resort to approximate strategies to infer sequence over long ranges for assembly and genetic variant calling. Paired-end reads have been leveraged by second-generation sequencing to mitigate these limitations; however, the exact fragment lengths of paired ends are often unknown and must themselves be approximated. By making long read lengths possible, third-generation sequencing technologies hold clear advantages.
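
This limitation is easy to reproduce in code. The toy sketch below (synthetic sequences and illustrative lengths) shows that when reads are no longer than a repeat unit, a genome with two tandem copies of the repeat and a genome with three produce identical read sets, whereas reads spanning the whole repeat region tell them apart:

```python
# Toy demonstration of repeat ambiguity with idealized, error-free reads.
def read_set(genome: str, read_len: int) -> set[str]:
    """All substrings of length read_len: an idealized read set."""
    return {genome[i:i + read_len] for i in range(len(genome) - read_len + 1)}

FLANK_A = "ACTGCA" * 5                 # 30 bp left flank (synthetic)
FLANK_B = "TGGATC" * 5                 # 30 bp right flank (synthetic)
REPEAT = "GATTACAGATCCTGAGGTCA"        # 20 bp repeat unit (synthetic)

genome_2x = FLANK_A + REPEAT * 2 + FLANK_B
genome_3x = FLANK_A + REPEAT * 3 + FLANK_B

# Short reads (20 bp, the repeat unit length): copy number is invisible.
print(read_set(genome_2x, 20) == read_set(genome_3x, 20))   # True
# Long reads (70 bp, spanning the whole repeat region): resolved.
print(read_set(genome_2x, 70) == read_set(genome_3x, 70))   # False
```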

Epigenetics

Epigenetic markers are stable and potentially heritable modifications to the DNA molecule that do not change its sequence. An example is DNA methylation at CpG sites, which has been found to influence gene expression; histone modifications are another example. The current generation of sequencing technologies relies on laboratory techniques such as ChIP-sequencing for the detection of epigenetic markers: the DNA strand is tagged, fragments that contain markers are isolated, and those fragments are then sequenced. Third-generation sequencing may enable direct detection of these markers, because modified bases produce signals distinct from those of the four unmodified nucleotide bases.[6]

Portability and speed

[Figure: MinION portable gene sequencer, Oxford Nanopore Technologies]

Other important advantages of third-generation sequencing technologies are portability and sequencing speed.[7] Since minimal sample preprocessing is required in comparison to second-generation sequencing, smaller equipment can be designed. Oxford Nanopore Technologies has commercialized the MinION sequencer: roughly the size of a regular USB flash drive, it can be used readily by connecting it to a laptop. In addition, because reads stream off the device as they are sequenced, data can be collected and analyzed in real time. These advantages make third-generation sequencing well-suited to hospital settings, where quick, on-site data collection and analysis are demanded.

Challenges

As of 2008, third-generation sequencing faced important challenges, mainly surrounding accurate identification of nucleotide bases; error rates were still much higher than in second-generation sequencing.[4] This is generally due to instability of the molecular machinery involved. For example, in PacBio's single-molecule real-time sequencing technology, the DNA polymerase molecule becomes increasingly damaged as the sequencing process proceeds.[4] Additionally, since the process happens quickly, the signals given off by individual bases may be blurred by signals from neighbouring bases. This poses a new computational challenge for deciphering the signals and, consequently, inferring the sequence. Methods such as hidden Markov models have been leveraged for this purpose with some success.[6]
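
As a minimal illustration of the HMM idea, the sketch below uses the standard Viterbi algorithm to recover a base sequence from noisy per-base signal levels. All levels, noise, and transition probabilities are invented; production basecallers use far richer models, typically over k-mer states:

```python
# Toy Viterbi decoding of a base path from noisy signal levels.
import math

BASES = "ACGT"
LEVEL = {"A": 1.0, "C": 1.6, "G": 2.2, "T": 2.8}   # hypothetical mean levels
SIGMA = 0.25                                        # assumed Gaussian noise

def log_emit(x: float, base: str) -> float:
    """Log-likelihood of observing level x while sequencing `base`."""
    return -0.5 * ((x - LEVEL[base]) / SIGMA) ** 2 - math.log(SIGMA * math.sqrt(2 * math.pi))

def log_trans(prev: str, cur: str) -> float:
    # Mild stickiness: blurred neighbouring signals can look like repeats.
    return math.log(0.4 if prev == cur else 0.2)

def viterbi(signal: list[float]) -> str:
    dp = {b: math.log(0.25) + log_emit(signal[0], b) for b in BASES}
    backptrs = []
    for x in signal[1:]:
        new_dp, ptr = {}, {}
        for b in BASES:
            best_prev = max(BASES, key=lambda p: dp[p] + log_trans(p, b))
            new_dp[b] = dp[best_prev] + log_trans(best_prev, b) + log_emit(x, b)
            ptr[b] = best_prev
        dp, backptrs = new_dp, backptrs + [ptr]
    path = [max(dp, key=dp.get)]          # best final state, then trace back
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return "".join(reversed(path))

print(viterbi([1.1, 1.0, 2.7, 2.2, 1.7]))   # -> "AATGC" under this toy model
```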

On average, different individuals of the human population share about 99.9% of their genome sequence; in other words, only approximately one base out of every thousand differs between any two people. The high error rates of third-generation sequencing are inevitably problematic for characterizing such small individual differences between members of the same species.[citation needed]
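
A back-of-the-envelope calculation makes the problem concrete (the variant rate comes from the text; the read length and raw error rate are assumed for illustration):

```python
# Expected true variants vs. raw sequencing errors in a single long read.
read_len = 20_000        # assumed 20 kb read
variant_rate = 0.001     # ~1 true difference per 1,000 bases between individuals
error_rate = 0.10        # assumed raw long-read error rate of ~10%

print(f"expected true variants per read:     {read_len * variant_rate:.0f}")  # ~20
print(f"expected sequencing errors per read: {read_len * error_rate:.0f}")    # ~2000
```

Raw errors thus outnumber true variants by roughly two orders of magnitude, which is why consensus or correction steps are essential before variant calling.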

Genome assembly

Genome assembly is the reconstruction of whole genome DNA sequences. This is generally done with two fundamentally different approaches.

Reference alignment

When a reference genome is available, as in the case of humans, newly sequenced reads can simply be aligned to the reference genome in order to characterize their properties. Such reference-based assembly is quick and easy, but it has the disadvantage of "hiding" novel sequences and large copy number variants. In addition, reference genomes do not yet exist for most organisms.
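
For intuition, reference-based placement can be sketched as seed-and-extend: index the reference's k-mers, look up a seed from the read, and score each candidate position. The toy example below (invented sequences and helper names) captures only the flavor of what production aligners such as minimap2 do:

```python
# Minimal seed-and-extend placement of a read against a reference.
from collections import defaultdict

def index_reference(ref: str, k: int) -> dict[str, list[int]]:
    """Map every k-mer of the reference to its positions."""
    idx = defaultdict(list)
    for i in range(len(ref) - k + 1):
        idx[ref[i:i + k]].append(i)
    return idx

def place_read(read: str, ref: str, idx, k: int):
    seed = read[:k]                      # seed with the read's first k-mer
    best = None
    for pos in idx.get(seed, []):
        window = ref[pos:pos + len(read)]
        if len(window) < len(read):
            continue
        mism = sum(a != b for a, b in zip(read, window))
        if best is None or mism < best[1]:
            best = (pos, mism)
    return best                          # (position, mismatches) or None

ref = "TTGACCGATTACAGGATCCTTGACCAAGT"
idx = index_reference(ref, k=5)
print(place_read("GATTACAGGAACC", ref, idx, k=5))  # -> (6, 1)
```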

De novo assembly

De novo assembly is the alternative genome assembly approach to reference alignment. It refers to the reconstruction of whole genome sequences entirely from raw sequence reads. This method is chosen when there is no reference genome, when the species of the given organism is unknown (as in metagenomics), or when genetic variants of interest may not be detected by reference genome alignment.

Given the short reads produced by the current generation of sequencing technologies, de novo assembly is a major computational problem. It is normally approached by an iterative process of finding and connecting sequence reads with sensible overlaps. Various computational and statistical techniques, such as de Bruijn graphs and overlap-layout-consensus graphs, have been leveraged to solve this problem, as sketched below. Nonetheless, due to the highly repetitive nature of eukaryotic genomes, accurate and complete reconstruction of genome sequences in de novo assembly remains challenging. Paired-end reads have been posed as a possible solution, though exact fragment lengths are often unknown and must be approximated.[8]
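
A minimal de Bruijn graph construction, using error-free toy reads and no repeat handling (real assemblers must cope with sequencing errors and with exactly the repeat-induced ambiguities discussed above):

```python
# De Bruijn graph assembly sketch: nodes are (k-1)-mers, edges are k-mers,
# and a walk consuming edges spells the sequence. Data are illustrative.
from collections import defaultdict

def de_bruijn(reads, k):
    graph = defaultdict(list)          # (k-1)-mer -> successor (k-1)-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def walk(graph, start):
    """Greedy edge-consuming walk; enough for this repeat-free toy case."""
    seq, node = start, start
    while graph[node]:
        node = graph[node].pop()
        seq += node[-1]
    return seq

reads = ["ACGTGCA", "GTGCATT", "GCATTAG"]   # overlapping reads of "ACGTGCATTAG"
g = de_bruijn(reads, k=4)
print(walk(g, "ACG"))                       # -> "ACGTGCATTAG"
```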

Hybrid assembly

The long read lengths offered by third-generation sequencing may alleviate many of the challenges currently faced by de novo genome assembly. For example, if an entire repetitive region can be sequenced unambiguously in a single read, no computational inference is required. Computational methods have also been proposed to mitigate the issue of high error rates; in one study, for example, de novo assembly of a microbial genome using PacBio sequencing alone was shown to outperform that of second-generation sequencing.[9]

Third-generation sequencing may also be used in conjunction with second-generation sequencing, an approach often referred to as hybrid sequencing. For example, long reads from third-generation sequencing may be used to resolve ambiguities in genomes previously assembled using second-generation sequencing. Conversely, short second-generation reads have been used to correct errors that exist in the long third-generation reads. In general, this hybrid approach has been shown to improve de novo genome assemblies significantly.[10]
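
One simple flavor of such short-read-based correction is k-mer "spectral" voting: k-mers observed in the accurate short reads form a trusted set, and a long-read base is corrected when a single substitution restores an untrusted k-mer to that set. The sketch below uses invented data and deliberately simplified logic, and does not correspond to any specific published tool:

```python
# Toy hybrid error correction: trust short-read k-mers, patch long reads.
def trusted_kmers(short_reads, k):
    return {r[i:i + k] for r in short_reads for i in range(len(r) - k + 1)}

def correct(long_read, trusted, k):
    read = list(long_read)
    for i in range(len(read) - k + 1):
        kmer = "".join(read[i:i + k])
        if kmer in trusted:
            continue
        for j in range(k):                       # try a single-base fix
            for b in "ACGT":
                cand = kmer[:j] + b + kmer[j + 1:]
                if cand in trusted:
                    read[i + j] = b
                    break
            else:
                continue
            break
    return "".join(read)

truth = "ACGGATTACACGGTTA"
short_reads = [truth[i:i + 8] for i in range(0, len(truth) - 7, 2)]  # accurate
trusted = trusted_kmers(short_reads, k=5)
noisy = truth[:8] + "A" + truth[9:]          # one substitution error at position 8
print(correct(noisy, trusted, k=5) == truth)  # True
```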

Epigenetic markers

DNA methylation (DNAm) – the covalent modification of DNA at CpG sites resulting in attached methyl groups – is the best understood component of epigenetic machinery. DNA modifications, and the resulting gene expression, can vary across cell types and temporal development, differ with genetic ancestry, change in response to environmental stimuli, and are heritable. Since the discovery of DNAm, researchers have also found it to be correlated with diseases such as cancer and autism.[11] In this disease-etiology context, DNAm is an important avenue of further research.

Advantages

The most common current methods for examining methylation state require an assay that fragments DNA before standard second-generation sequencing on the Illumina platform. As a result of the short read length, information on longer-range patterns of methylation is lost.[6] Third-generation sequencing technologies offer the capability for single-molecule real-time sequencing of longer reads, and for detection of DNA modification without the aforementioned assay.[12]

Both PacBio's SMRT technology and Oxford Nanopore's platform can detect methylation from unaltered DNA.

Oxford Nanopore Technologies' MinION has been used to detect DNAm. As each DNA strand passes through a pore, it produces electrical signals that have been found to be sensitive to epigenetic changes in the nucleotides, and a hidden Markov model (HMM) was used to analyze MinION data to detect 5-methylcytosine (5mC).[6] The model was trained using synthetically methylated E. coli DNA and the resulting signals measured by the nanopore technology. The trained model was then used to detect 5mC in MinION genomic reads from a human cell line for which a reference methylome was already available. The classifier achieved 82% accuracy on randomly sampled singleton sites, rising to 95% when more stringent thresholds were applied.[6]

Other methods address different types of DNA modifications using the MinION platform. Stoiber et al. examined 4-methylcytosine (4mC) and 6-methyladenine (6mA) along with 5mC, and also created software to directly visualize the raw MinION data in a human-friendly way.[13] They found that in E. coli, which has a known methylome, event windows of 5 base pairs can be used to divide and statistically analyze the raw MinION electrical signals. A straightforward Mann-Whitney U test can detect modified portions of the E. coli sequence, as well as further classify the modifications into 4mC, 6mA, or 5mC regions.[13]
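
A minimal simulated version of that windowed test might look as follows; all data here are synthetic, and the cited analysis involves considerably more careful window handling and thresholds:

```python
# Windowed Mann-Whitney U test on simulated nanopore current samples.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_windows, samples = 40, 40
control = rng.normal(0.0, 1.0, size=(n_windows, samples))  # unmodified DNA
native = rng.normal(0.0, 1.0, size=(n_windows, samples))   # sample under test
native[17] += 1.5    # simulate a methylation-shifted current in window 17

for w in range(n_windows):
    p = mannwhitneyu(native[w], control[w], alternative="two-sided").pvalue
    if p < 0.05 / n_windows:                 # Bonferroni-corrected threshold
        print(f"window {w}: p = {p:.1e} -> candidate modified region")
```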

It seems likely that in the future, MinION raw data will be used to detect many different epigenetic marks in DNA.

PacBio sequencing has also been used to detect DNA methylation. On this platform, the pulse width – the duration of a fluorescent light pulse – corresponds to a specific base. In 2010 it was shown that the interpulse distance differs between control and methylated samples, and that there is a "signature" pulse width for each methylation type.[12] In 2012, the binding sites of DNA methyltransferases were characterized using the PacBio platform.[14] Detection of N6-methylation in C. elegans was demonstrated in 2015,[15] and DNA methylation on N6-adenine in mouse embryonic stem cells was shown using the PacBio platform in 2016.[16]
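
The underlying comparison can be caricatured as an interpulse-duration (IPD) ratio: polymerase kinetics at a site in native DNA are compared against an amplified, modification-free control, and a markedly elevated ratio suggests a modified base. A toy computation with synthetic kinetics:

```python
# Toy IPD-ratio computation; all kinetics are simulated.
import numpy as np

rng = np.random.default_rng(1)
control_ipd = rng.exponential(scale=1.0, size=200)  # control IPDs at one site
native_ipd = rng.exponential(scale=3.0, size=200)   # simulated pausing at a 6mA site

ipd_ratio = native_ipd.mean() / control_ipd.mean()
print(f"IPD ratio at site: {ipd_ratio:.2f}")  # ~3; values near 1 suggest no modification
```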

Other forms of DNA modification – such as damage from heavy metals, oxidation, or UV exposure – are also possible avenues of research using Oxford Nanopore and PacBio third-generation sequencing.

Drawbacks

Processing of the raw data – such as normalization to the median signal – was needed for MinION raw data, reducing the real-time capability of the technology.[13] Consistency of the electrical signals is still an issue, making it difficult to call a nucleotide accurately. MinION also has low throughput; since multiple overlapping reads are hard to obtain, this further compounds the accuracy problems of downstream DNA modification detection. Both the hidden Markov model and the statistical methods used with MinION raw data require repeated observations of a DNA modification for detection, meaning that individual modified nucleotides need to be consistently present in multiple copies of the genome, e.g. in multiple cells or plasmids in the sample.

For the PacBio platform, too, coverage requirements can vary depending on the type of methylation to be detected. As of March 2017, other epigenetic factors, such as histone modifications, were not detectable using third-generation technologies. Longer patterns of methylation are often lost because smaller contigs still need to be assembled.

Transcriptomics

Transcriptomics is the study of the transcriptome, usually by characterizing the relative abundances of messenger RNA (mRNA) molecules in the tissue under study. According to the central dogma of molecular biology, genetic information flows from double-stranded DNA molecules to single-stranded mRNA molecules, which can then be translated into functional protein molecules. By studying the transcriptome, one can gain valuable insight into the regulation of gene expression.

While expression levels can be depicted more or less accurately by second-generation sequencing (assuming the observed reads are a random sample of the actual population of transcripts), resolving information at the level of individual transcripts remains an important challenge.[17] As a consequence, the role of alternative splicing in molecular biology remains largely elusive. Third-generation sequencing technologies hold promise for resolving this issue by enabling sequencing of mRNA molecules at their full lengths.

Alternative splicing

Alternative splicing (AS) is the process by which a single gene may give rise to multiple distinct mRNA transcripts and, consequently, different protein translations.[18] Some evidence suggests that AS is a ubiquitous phenomenon that may play a key role in determining the phenotypes of organisms, especially in complex eukaryotes; all eukaryotes contain genes with introns that may undergo AS. In particular, it has been estimated that AS occurs in 95% of all human multi-exon genes.[19] AS has undeniable potential to influence myriad biological processes, and advancing knowledge in this area has critical implications for the study of biology in general.

Transcript reconstruction

The current generation of sequencing technologies produces only short reads, placing tremendous limitations on the ability to detect distinct transcripts; short reads must be reverse-engineered into the original transcripts that could have given rise to the observed reads.[20] This task is further complicated by the highly variable expression levels across transcripts and, consequently, variable read coverage across the sequence of the gene.[20] In addition, exons may be shared among individual transcripts, rendering unambiguous inferences essentially impossible.[18] Existing computational methods make inferences from the accumulation of short reads at various sequence locations, often by making simplifying assumptions.[20] Cufflinks takes a parsimonious approach, seeking to explain all the reads with the fewest possible transcripts.[21] StringTie, on the other hand, attempts to estimate transcript abundances while simultaneously assembling the reads.[20] These methods, while reasonable, may not always identify real transcripts.
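
Cufflinks' parsimony objective can be caricatured as a minimum set cover solved greedily, as in the sketch below (toy data; Cufflinks itself solves a minimum path cover over an overlap graph, so this captures only the spirit of the objective):

```python
# Greedy "fewest transcripts that explain all reads" sketch on toy data.
def fewest_transcripts(reads: set[str], candidates: dict[str, set[str]]) -> list[str]:
    chosen, uncovered = [], set(reads)
    while uncovered:
        # Pick the candidate explaining the most still-unexplained reads.
        best = max(candidates, key=lambda t: len(candidates[t] & uncovered))
        if not candidates[best] & uncovered:
            break                      # remaining reads fit no candidate
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

reads = {"r1", "r2", "r3", "r4", "r5"}
candidates = {
    "isoform_A": {"r1", "r2", "r3"},
    "isoform_B": {"r3", "r4"},
    "isoform_C": {"r4", "r5"},
}
print(fewest_transcripts(reads, candidates))  # -> ['isoform_A', 'isoform_C']
```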

A study published in 2008 surveyed 25 different existing transcript reconstruction protocols.[17] Its evidence suggested that existing methods are generally weak in assembling transcripts, even though the ability to detect individual exons is relatively intact.[17] According to its estimates, the average sensitivity to detect exons across the 25 protocols was 80% for Caenorhabditis elegans genes, while transcript identification sensitivity decreased to 65%.[17] For human, the study reported an exon detection sensitivity averaging 69% and a transcript detection sensitivity averaging a mere 33%.[17] In other words, for human, existing methods are able to identify only about a third of existing transcripts.

Third-generation sequencing technologies have demonstrated promising prospects in solving the problem of transcript detection, as well as mRNA abundance estimation at the level of transcripts. While error rates remain high, third-generation sequencing technologies have the capability to produce much longer read lengths.[22] Pacific Biosciences has introduced the Iso-Seq platform, proposing to sequence mRNA molecules at their full lengths,[22] and it is anticipated that Oxford Nanopore will put forth similar technologies. The trouble with higher error rates may be alleviated by supplementary high-quality short reads; this approach has previously been tested and reported to reduce the error rate by more than threefold.[23]

Metagenomics

Metagenomics is the analysis of genetic material recovered directly from environmental samples.

Advantages

The main advantage of third-generation sequencing technologies in metagenomics is their speed of sequencing in comparison to second-generation techniques. Speed of sequencing matters, for example, in the clinical setting (i.e., pathogen identification), where it allows efficient diagnosis and timely clinical action.

Oxford Nanopore's MinION was used in 2015 for real-time metagenomic detection of pathogens in complex, high-background clinical samples. The first Ebola virus (EBOV) read was sequenced 44 seconds after data acquisition.[24] Mapping of reads to the genome was uniform, with at least one read mapping to >88% of the genome. The relatively long reads allowed a near-complete viral genome to be sequenced to high accuracy (97–99% identity) directly from a primary clinical sample.[24]

A common phylogenetic marker for studies of microbial community diversity is the 16S ribosomal RNA gene. Both MinION and PacBio's SMRT platform have been used to sequence this gene.[25][26] In this context, the PacBio error rate was comparable to that of shorter reads from 454 and Illumina's MiSeq sequencing platforms.[citation needed]

Drawbacks

MinION's high error rate (~10–40%) prevented identification of antimicrobial resistance markers, for which single-nucleotide resolution is necessary. For the same reason, eukaryotic pathogens were not identified.[24] Ease of carryover contamination when re-using the same flow cell (standard wash protocols do not work) is also a concern. Unique barcodes may allow for more multiplexing. Furthermore, accurate species identification for bacteria, fungi, and parasites remains very difficult, as these organisms share large portions of their genomes, and some differ from one another by only <5%.

The per-base sequencing cost is still significantly higher than that of MiSeq. However, there is the prospect of supplementing reference databases with full-length sequences from organisms below the limit of detection of the Sanger approach;[25] this could greatly help the identification of organisms in metagenomics.

from Grokipedia
Third-generation sequencing (TGS), also known as long-read sequencing, refers to advanced DNA and RNA sequencing technologies that analyze individual nucleic acid molecules in real time without prior PCR amplification, generating reads ranging from thousands to millions of base pairs in length. Introduced in the early 2010s, TGS emerged as a response to the limitations of second-generation sequencing (SGS), such as short read lengths (typically 150–300 bp) that hinder accurate assembly of repetitive genomic regions, detection of structural variants, and phasing of haplotypes.

The primary platforms driving TGS are Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT). PacBio employs single-molecule real-time (SMRT) sequencing, in which a DNA polymerase incorporates fluorescently labeled nucleotides in zero-mode waveguides, allowing continuous long-read sequencing with average lengths of 15–20 kb and accuracy up to 99.95% using circular consensus methods. In contrast, ONT uses nanopore-based sequencing, passing DNA or RNA through protein nanopores to measure changes in ionic current, enabling ultra-long reads up to 4 Mb, real-time analysis, and direct detection of base modifications such as methylation without bisulfite conversion. Notable devices include PacBio's Revio system, which achieves up to 360 Gb per day, and ONT's portable MinION (up to 50 Gb) and high-throughput PromethION (up to 13.3 Tb).

TGS offers significant advantages over SGS, including improved de novo genome assembly, resolution of complex structural variations (e.g., insertions, deletions, inversions), and comprehensive transcriptomics through full-length isoform identification. It also facilitates epigenomic studies by directly sequencing modified bases, and supports applications in cancer genomics and rapid pathogen detection during disease outbreaks. Despite these benefits, challenges persist, including higher per-base error rates (5–15% before correction) and the need for sophisticated computational tools to handle long reads. Ongoing advancements in TGS, such as improved base-calling algorithms and hybrid approaches combining TGS with SGS, promise to reduce costs and enhance accuracy, with recent developments including PacBio's SPRQ-Nx chemistry and ONT's PromethION Plus flow cells further advancing throughput and multiomic capabilities, positioning TGS as a key technology for precision medicine and large-scale genomic projects.

Overview

Definition and key characteristics

Third-generation sequencing (TGS), also referred to as long-read sequencing, encompasses a suite of DNA and RNA sequencing technologies designed to generate extended sequence reads ranging from 10 kilobases (kb) to over 1 megabase (Mb) in length, facilitating the interrogation of complex genomic regions such as repetitive sequences and structural variants that challenge shorter-read methods. These platforms achieve this by directly sequencing native nucleic acid molecules, bypassing the need for fragmentation or amplification, which minimizes biases associated with polymerase chain reaction (PCR) processes. Key characteristics of TGS include single-molecule resolution, in which individual DNA or RNA strands are sequenced without clonal amplification, enabling the detection of rare variants and heterogeneity at the molecular level. Real-time data acquisition is another hallmark, allowing immediate base calling during the sequencing process rather than post-sequencing assembly. Additionally, TGS supports the native detection of epigenetic modifications, such as DNA methylation, by analyzing kinetic signatures or ionic current changes inherent to the unmodified molecule, providing insights into gene regulation without separate bisulfite conversion steps.

In comparison to earlier generations, first-generation Sanger sequencing produces short reads of approximately 800–1,000 base pairs (bp) with low throughput, suitable for targeted validation but inefficient for large-scale genomics. Second-generation next-generation sequencing (NGS) methods, such as Illumina platforms, yield high-throughput short reads of 100–300 bp but rely on PCR amplification, introducing biases and complicating the resolution of repetitive or low-complexity regions. TGS shifts the paradigm by prioritizing read length and structural fidelity over per-base accuracy (which has improved to >99% in recent iterations), though it initially traded higher error rates for these benefits. TGS technologies first emerged in the late 2000s, with Pacific Biosciences launching its single-molecule real-time (SMRT) platform in 2010, and by 2025, ultra-long reads exceeding 4 Mb have been achieved, particularly with nanopore-based systems.

Historical development and generational context

The development of DNA sequencing technologies traces back to the first generation, which emerged in the 1970s with Frederick Sanger's chain-termination method, introduced in 1977. This technique, relying on dideoxynucleotides and gel electrophoresis, produced reads of less than 1 kb and was pivotal for the Human Genome Project, where it enabled the sequencing of the 3 billion base pair human genome over 13 years, from 1990 to 2003. The second generation of sequencing, launched in the mid-2000s, shifted to high-throughput platforms that amplified and sequenced millions of short DNA fragments in parallel, drastically reducing costs and time. Key early systems included 454 Life Sciences' pyrosequencing instrument in 2005, which generated 400–500 bp reads, and Illumina's Genome Analyzer in 2006, utilizing reversible terminator chemistry for even higher output. These technologies dominated the 2000s and 2010s, facilitating large-scale genomic studies but struggling with repetitive sequences and structural variants due to short read lengths.

Early single-molecule sequencing without amplification began in 2008 with Helicos BioSciences' true single-molecule sequencing platform, but long-read TGS proper started in 2010 with Pacific Biosciences' launch of its PacBio RS system, introducing single-molecule real-time (SMRT) sequencing for continuous long-read generation. Oxford Nanopore Technologies entered the field in 2014 with the MinION, a USB-powered device that allowed portable, real-time sequencing of ultra-long strands. Subsequent key events focused on enhancing TGS accuracy and utility. In 2015, Oxford Nanopore released the MinION for widespread early access, emphasizing its portability for field applications. Major accuracy boosts followed, including Pacific Biosciences' 2019 introduction of HiFi reads through circular consensus sequencing, yielding >99% accuracy for 15–20 kb reads, and Oxford Nanopore's 2020 rollout of adaptive sampling for real-time targeted enrichment without library modifications.

TGS represents a generational shift by overcoming second-generation limitations in assembling complex genomes, particularly in repetitive regions and structural variant detection, through reads often exceeding 10 kb. By 2025, TGS adoption has accelerated via hybrid workflows combining short- and long-read data, with the market valued at approximately USD 881 million in 2025 and exhibiting a compound annual growth rate of over 20% through 2032, reflecting its integration into routine genomics. From 2023 to 2025, advancements emphasized precision, such as Oxford Nanopore's R10.4 pores paired with Q20+ chemistry, achieving >99% raw read accuracy, further refined by AI-driven error correction in the Dorado basecaller for consistent high-quality outputs. In October 2025, Oxford Nanopore announced the PromethION Plus flow cell, which significantly increases output for large-scale genomic studies.

Technologies

Pacific Biosciences SMRT sequencing

Pacific Biosciences' Single Molecule, Real-Time (SMRT) sequencing technology enables the direct observation of nucleotide incorporation by individual polymerase molecules, providing long-read sequencing data with high accuracy through consensus generation. Introduced commercially in 2010 following foundational research published in 2009, SMRT sequencing has evolved to support scalable genomic applications, including the 2022 launch of the Revio system, which facilitates sequencing of human genomes at 30x coverage in approximately 24 hours using a single SMRT Cell. Recent 2025 updates include the announcement of SPRQ-Nx chemistry for enhanced throughput on Revio, with beta testing beginning in November 2025.

The core mechanism relies on zero-mode waveguides (ZMWs), nanoscale wells etched into a fused silica substrate that confine excitation light to a volume of about 20 nm in depth, allowing real-time optical detection of nucleotide incorporation without illuminating the entire reaction volume. A highly processive DNA polymerase, such as a modified phi29 polymerase, incorporates fluorescently labeled nucleotides – each with a distinct fluorophore attached to the terminal phosphate – into a growing DNA strand complementary to the template. As incorporation occurs, the fluorophore is cleaved and diffuses away, producing a characteristic light pulse captured by high-speed cameras; the sequence of pulses corresponds to the DNA template sequence. To enable multiple observations of the same molecule, the template is prepared as a circular SMRTbell library, where the polymerase repeatedly traverses the insert region, generating subreads that can be aligned for consensus.

The workflow begins with sample preparation to create SMRTbell libraries: high-molecular-weight DNA is sheared to the desired fragment sizes (typically 10–20 kb for HiFi applications), ends are repaired and A-tailed, and adapters are ligated to form the double-stranded circular template. Libraries are then bound to polymerases and loaded onto SMRT Cells containing millions of ZMWs (8 million on the Sequel IIe, 25 million on Revio), where sequencing occurs in real time. Raw data produce continuous long reads (CLR) from single passes or high-fidelity (HiFi) reads via circular consensus sequencing (CCS), in which multiple subreads (typically 10–30 passes) are computationally aligned to generate a consensus sequence.

SMRT sequencing specifications vary by instrument and chemistry. On the Sequel IIe, CLR reads exceed 20 kb, while HiFi reads average 15–20 kb with >99% accuracy (Q30 or better). The Revio system enhances throughput, yielding up to 120 Gb of HiFi data per SMRT Cell in a 24-hour run at 15–20 kb read lengths, supporting two phased 20x human genomes per cell. HiFi accuracy derives from consensus over multiple passes, approximated by the relation $\text{error rate} \approx \frac{\text{initial error}}{\sqrt{n_{\text{passes}}}}$.
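
Taking this approximation at face value, a few lines of code show how consensus error scales with the number of passes (the single-pass error rate is an assumed illustrative value):

```python
# Consensus error under the quoted square-root approximation.
import math

raw_error = 0.13                      # assumed single-pass error rate
for n_passes in (1, 4, 9, 16, 25):
    print(f"{n_passes:>2} passes -> ~{raw_error / math.sqrt(n_passes):.3f} consensus error")
```

The HiFi figures quoted above (Q30 from roughly 10–30 passes) improve faster than this square-root heuristic, so the formula is best read as rough intuition rather than a calibrated model.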