Presto (SQL query engine)
from Wikipedia
Presto
Original authors: Martin Traverso, Dain Sundstrom, David Phillips, Eric Hwang
Initial release: 10 November 2013
Written in: Java
Operating system: Cross-platform
Standard: SQL
Type: Data warehouse
License: Apache License 2.0
Website:

Architecture of Presto. (figure caption)

Presto (including PrestoDB and PrestoSQL, the latter rebranded as Trino) is a distributed query engine for big data using the SQL query language. Its architecture allows users to query data sources such as Hadoop, Cassandra, Kafka, AWS S3, Alluxio, MySQL, MongoDB and Teradata,[1] and allows use of multiple data sources within a single query. Presto is community-driven open-source software released under the Apache License.

History


Presto was originally designed and developed at Facebook, Inc. (later renamed Meta) for its data analysts to run interactive queries on the company's large Apache Hadoop data warehouse. The first four developers were Martin Traverso, Dain Sundstrom, David Phillips, and Eric Hwang. Before Presto, data analysts at Facebook relied on Apache Hive for running SQL analytics on their multi-petabyte data warehouse.[2] Hive was deemed too slow at Facebook's scale, and Presto was created to fill the gap with fast, interactive queries.[3] Development started in 2012, and Presto was deployed at Facebook later that year. In November 2013, Facebook announced its open-source release.[3][4]

In 2014, Netflix disclosed they used Presto on 10 petabytes of data stored in the Amazon Simple Storage Service (S3).[5] In November 2016, Amazon announced a service called Athena that was based on Presto.[6] In 2017, Teradata spun out a company called Starburst Data to commercially support Presto, staffed in part through Teradata's 2014 acquisition of Hadapt.[7] Teradata's QueryGrid software allowed Presto to access a Teradata relational database.[8]

In January 2019, the Presto Software Foundation was announced, a not-for-profit organization for the advancement of the Presto open-source distributed SQL query engine.[9][10] At the same time, Presto development forked: PrestoDB, maintained by Facebook, and PrestoSQL, maintained by the Presto Software Foundation, with some cross-pollination of code.

In September 2019, Facebook donated PrestoDB to the Linux Foundation, establishing the Presto Foundation.[11] Neither the creators of Presto nor its top contributors and committers were invited to join this foundation.[12]

By 2020, all four of the original Presto developers had joined Starburst.[13] In December 2020, PrestoSQL was rebranded as Trino, since Facebook had obtained a trademark on the name "Presto" (also donated to the Linux Foundation).[14]

Another company called Ahana was announced in 2020 to commercialize the PrestoDB fork as a cloud service and was acquired by IBM in 2023.[15]

Architecture


Presto's architecture is similar to that of other database management systems using cluster computing, sometimes called massively parallel processing (MPP). One coordinator works in sync with multiple workers. Clients submit SQL statements that are parsed and planned, after which parallel tasks are scheduled to workers. Workers jointly process rows from the data sources and produce results that are returned to the client. Unlike the original Apache Hive execution model, which ran each query through the Hadoop MapReduce mechanism, Presto does not write intermediate results to disk, resulting in a significant speed improvement. Presto is written in Java.

A Presto query can combine data from multiple sources. Presto offers connectors to data sources including files in Alluxio, the Hadoop Distributed File System, Amazon S3 (both commonly used as data lakes), MySQL, PostgreSQL, Microsoft SQL Server, Amazon Redshift, Apache Kudu, Apache Phoenix, Apache Kafka, Apache Cassandra, Apache Accumulo, MongoDB and Redis. Unlike Hadoop distribution-specific tools such as Apache Impala, Presto can work with any variant of Hadoop, or without it. Presto supports separation of compute and storage and may be deployed on-premises or in the cloud.

from Grokipedia
Presto is an open-source distributed query engine designed for running interactive analytic queries against data sources ranging from gigabytes to petabytes, enabling users to query heterogeneous data across systems like Hadoop, relational databases, NoSQL stores, and proprietary data warehouses using standard ANSI SQL without data movement. Developed initially at Facebook in 2012 to address the limitations of existing tools like Hive for ad-hoc analytics on massive datasets, Presto emphasizes high performance through in-memory, pipelined execution and a fault-tolerant distributed architecture, supporting workloads from sub-second interactive queries to multi-hour ETL jobs. Its extensible connector architecture allows seamless integration with diverse data sources, making it suitable for data lakes, lakehouses, and real-time applications at scale. Following a 2019 split in the project, there are now two main implementations: PrestoDB, governed by the Presto Foundation, established that year under the Linux Foundation with founding members including Facebook (now Meta), Uber, X (formerly Twitter), and Alibaba to foster community-driven development; and Trino (formerly PrestoSQL), a fork created by the original contributors with its own independent governance under the Trino Software Foundation.

Overview

Definition and Purpose

Presto is an open-source, distributed SQL query engine designed for interactive ad-hoc analytics on big data. It enables users to execute standard SQL queries across heterogeneous data sources, such as Hadoop, NoSQL stores, and relational databases, without requiring data movement or preprocessing. Developed initially at Facebook in 2012, Presto addressed the need for rapid querying of the company's vast data warehouse, allowing analysts to derive insights in seconds rather than hours. The core purpose of Presto is to facilitate fast analytic queries on petabyte-scale datasets, emphasizing accessibility for data analysts through a familiar SQL interface. By federating queries across multiple storage systems in a single cluster, it eliminates the need for extract, transform, and load (ETL) processes, reducing complexity and enabling real-time decision-making. At organizations like Meta, Presto processes hundreds of petabytes daily, supporting diverse workloads from sub-second reporting to longer-running jobs. In its basic workflow, users submit SQL queries that are parsed, optimized, and executed in parallel across a distributed cluster, leveraging in-memory processing for high performance. This design prioritizes scalability and extensibility, allowing seamless integration with various data connectors while maintaining ANSI SQL compliance.
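As a minimal sketch of this workflow (the catalog, schema, and table names here are hypothetical), an analyst might submit an interactive aggregation like the following, which Presto parses, plans, and executes in parallel across the cluster:

    SELECT signup_country, count(*) AS signups
    FROM hive.web.users                        -- hypothetical table in a Hive catalog
    WHERE signup_date >= DATE '2024-01-01'
    GROUP BY signup_country
    ORDER BY signups DESC
    LIMIT 10;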

Key Distributions

Presto, originally developed at Facebook, has evolved into two primary distributions following a project split, each maintaining distinct focuses within the open-source ecosystem. PrestoDB is maintained by the Presto Foundation, which operates under the Linux Foundation umbrella to ensure neutral governance and community collaboration. This distribution emphasizes core engine stability and seamless enterprise integrations, such as its use in Amazon Web Services' Athena for serverless querying of data lakes. As of 2025, PrestoDB remains the choice for environments prioritizing reliability in production-scale deployments, tied to foundational contributions from early adopters like Facebook and Uber. In contrast, Trino—formerly known as PrestoSQL—was forked in early 2019 to accelerate innovation beyond the original project's pace and rebranded in December 2020 under the Trino Software Foundation for independent governance. This variant prioritizes community-driven enhancements, broader support for diverse data connectors, and rapid iteration to address evolving needs. Trino's model fosters a more decentralized, volunteer-led structure, distancing it from the original Facebook-influenced direction of PrestoDB. Key governance differences highlight their divergent paths: PrestoDB retains ties to its origins through contributions from founding companies like Facebook, focusing on conservative stability, while Trino exhibits higher open-source activity with more frequent releases to incorporate new features and optimizations. As of 2025, adoption trends show PrestoDB prevalent in proprietary, managed services like AWS Athena for cost-effective, integrated querying, whereas Trino dominates open ecosystems, powering platforms such as Starburst for federated data access in hybrid environments.

History

Origins and Development

Presto was developed in the fall of 2012 by a small team of engineers in Facebook's Data Infrastructure group, including Martin Traverso, Dain Sundstrom, David Phillips, Eric Hwang, Nileema Shingte, and Ravi Murthy, to enable interactive SQL queries on the company's vast data warehouse. The project addressed key limitations in the existing Hadoop ecosystem, where tools like MapReduce and Hive were designed for high-throughput batch processing rather than low-latency ad-hoc analysis, often leaving data analysts waiting hours for query results on terabyte- and petabyte-scale datasets. The initial motivations stemmed from the need to boost productivity for Facebook's data scientists, analysts, and engineers by supporting complex, interactive queries across diverse storage systems without the inefficiencies of disk-based jobs. Early evaluations of external query engines revealed shortcomings in flexibility and scalability for Facebook's environment, prompting the team to build a custom solution. The prototypes were implemented in Java, emphasizing an in-memory, pipelined execution model to minimize latency and avoid intermediate disk spills, while incorporating extensible connectors for sources such as HDFS (via Hive). Presto's first internal deployment occurred in early 2013, initially supporting queries over the Hive data warehouse on HDFS to handle Facebook's petabyte-scale data. By spring 2013, it had scaled to over 1,000 nodes and was fully rolled out company-wide, marking a significant shift toward interactive analytics. This internal success led to its open-sourcing later that year.

Open-Sourcing and Forks

Presto was initially developed internally at Facebook and open-sourced in 2013 under the Apache License 2.0, with the original GitHub repository hosted at github.com/facebook/presto, featuring contributions primarily from Facebook engineers such as Dain Sundstrom and Martin Traverso. The project quickly gained traction within the open-source community, leading to widespread adoption by major organizations; by 2015, companies like Netflix and Uber had integrated Presto into their data analytics pipelines, with Netflix deploying it in production as early as 2014 to query petabyte-scale data across diverse sources. This growth culminated in the formation of the Presto Foundation under the Linux Foundation in September 2019, established by founding members including Facebook, Uber, Twitter, and Alibaba to provide neutral governance, foster community contributions, and ensure the project's long-term sustainability. In 2018, tensions arose within the community over the project's direction, particularly concerns that increasing commercialization risked prioritizing proprietary features over open development. This led to a fork announced in early 2019 by key maintainers, including Dain Sundstrom, Martin Traverso, and David Phillips, who created PrestoSQL to maintain a focus on rapid innovation and community-driven enhancements without commercial constraints. The original project was subsequently referred to as PrestoDB to distinguish the variants, with PrestoSQL continuing active development until it was rebranded as Trino in December 2020 due to trademark conflicts, as Facebook had registered "Presto" and donated it to the Presto Foundation, prompting the fork's maintainers to seek a new identity to avoid legal issues and affirm their independent path. As of November 2025, PrestoDB has reached version 0.295, released on October 1, 2025, emphasizing stability through incremental improvements in query reliability and connector compatibility while maintaining compatibility with existing deployments. In parallel, Trino has advanced to version 478, released on October 29, 2025, incorporating enhanced features such as improved task retry mechanisms and adaptive query recovery to better handle failures in large-scale distributed environments. These developments reflect the divergent yet complementary evolutions of the two projects, with PrestoDB prioritizing enterprise stability under the Presto Foundation and Trino focusing on cutting-edge scalability through its own Trino Software Foundation (established in 2019 as the Presto Software Foundation and renamed in 2020) to support ongoing community governance.

Technical Features

SQL Standards and Extensions

Presto adheres to ANSI SQL standards, supporting core constructs such as SELECT statements, JOIN operations, GROUP BY clauses, subqueries, and window functions, which enable complex analytical queries across distributed data sources. This compliance facilitates seamless integration with standard SQL tools and clients, including platforms like Tableau and Power BI. While PrestoDB and Trino (the primary continuation of PrestoSQL) both maintain this foundational support, their implementations ensure compatibility with SQL:2011 features where applicable. Presto extends standard SQL with specialized functions optimized for big data environments, including approximate aggregates like approx_distinct, which estimates the number of unique values using HyperLogLog sketches for efficient processing of large datasets. Geospatial capabilities are provided through ST_-prefixed functions, such as ST_Area and ST_Buffer, compliant with the Open Geospatial Consortium (OGC) specification for geospatial data. Additionally, built-in JSON operators like json_extract and json_value allow querying and manipulating JSON data without external preprocessing. As a distributed query engine, Presto emphasizes read-oriented analytics, with DDL and DML support limited and connector-dependent to avoid interference with underlying storage systems; its focus is federated querying across heterogeneous sources. Parameterized queries are supported through client drivers, enhancing security by preventing SQL injection in interactive and ad-hoc workloads. Dialect variations exist between PrestoDB and Trino, with Trino introducing advanced extensions such as machine learning functions (e.g., learn_classifier for training SVM models within SQL), which expand analytical capabilities beyond PrestoDB's core offerings. These differences are minor and generally backward-compatible, allowing most queries to execute across both distributions with minimal adjustments.
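A brief sketch of these extensions in use (the table and column names are hypothetical; approx_distinct and json_extract_scalar are documented Presto functions):

    SELECT
      json_extract_scalar(payload, '$.device.os') AS os,  -- pull one field from a JSON column
      approx_distinct(user_id) AS approx_users            -- HyperLogLog-based distinct estimate
    FROM hive.logs.events                                 -- hypothetical catalog.schema.table
    GROUP BY 1;

The approximate aggregate trades a small, bounded error for far lower memory use than an exact COUNT(DISTINCT ...) on large datasets.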

Performance and Scalability

Presto achieves high performance through its in-memory, pipelined query execution model, which processes data in columnar format without intermediate disk writes, enabling sub-second query times on terabyte-scale datasets. This vectorized approach leverages dynamic code generation and streaming from data sources, minimizing latency for interactive workloads. For scalability, Presto supports horizontal scaling by adding worker nodes to the cluster, allowing it to handle massive workloads across distributed environments. At Facebook (now Meta), Presto processes hundreds of petabytes of data and quadrillions of rows daily across thousands of nodes in multiple data centers. This design ensures high availability and elastic scaling, supporting both low-latency ad-hoc queries and long-running batch jobs without disrupting ongoing operations. Key optimizations in Presto include predicate and projection pushdown to data sources, which reduces data transfer by filtering and selecting only necessary columns at the connector level. The cost-based optimizer uses table statistics to evaluate join orders and distribution types, automatically selecting strategies like broadcast or partitioned joins to minimize CPU and network costs. Additionally, history-based query optimization refines estimates for complex queries by learning from past executions, improving accuracy over traditional rule-based methods. Benchmarks demonstrate Presto's efficiency for ad-hoc workloads, and recent Presto C++ implementations further boost TPC-DS 100TB performance with improved price-performance ratios.
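To see which of these optimizations apply to a given query, Presto's EXPLAIN ANALYZE statement executes the query and reports per-operator costs, revealing, for instance, whether a filter was pushed down to the connector (the catalog and table below are hypothetical):

    EXPLAIN ANALYZE
    SELECT status, count(*)
    FROM mysql.shop.orders                 -- hypothetical JDBC catalog
    WHERE order_date >= DATE '2024-01-01'
    GROUP BY status;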

Architecture

Core Components

Presto operates as a distributed query engine, relying on a cluster of nodes to handle query processing across large datasets. The core components form a master-worker architecture, where one or more coordinators oversee operations and multiple workers perform the actual computation, enabling scalability and fault tolerance.

The coordinator node serves as the central point of control in the Presto cluster, responsible for parsing incoming SQL statements, generating optimized query plans, and scheduling tasks across worker nodes. It manages metadata, coordinates worker assignments, and acts as the interface for client connections, using a REST API for communication with workers. In typical deployments, the coordinator runs on dedicated hardware to handle these duties without participating in data processing, though in single-node setups it can double as a worker. Every Presto cluster requires at least one coordinator to function. For larger deployments with multiple coordinators, a resource manager aggregates data from all coordinators and workers to provide a global view of the cluster, using a Thrift protocol for communication and supporting coordinated resource allocation.

Worker nodes execute the distributed tasks assigned by the coordinator, processing data in parallel to support high-throughput queries. Each worker fetches data from underlying sources via connectors, performs computations such as filtering, aggregation, and joins, and exchanges intermediate results with other workers as needed. As of 2025, Presto also supports a Native Worker, implemented in C++ as a drop-in replacement for the traditional Java-based worker, to reduce CPU and memory footprint while maintaining compatibility through integration with the Velox library and supporting key connectors like Hive and Iceberg. Workers register themselves with the discovery service upon startup and communicate via REST API, allowing the cluster to scale by adding more workers to handle increased load. In production environments, clusters can comprise hundreds to thousands of workers for petabyte-scale analytics.

The discovery service facilitates dynamic node management by allowing workers to advertise their availability to the coordinator, enabling automatic cluster scaling and fault recovery. Presto includes an embedded discovery server within the coordinator, activated via the discovery-server.enabled=true property, where nodes register upon launch. Alternative configurations use the discovery.uri property to specify the URI of the discovery service, typically pointing to the coordinator's HTTP endpoint, for setups without an embedded server. The embedded option is standard for most PrestoDB clusters.

Configuration elements are essential for tuning Presto's behavior and integrating data sources, managed through property files in the installation directory. JVM settings, defined in etc/jvm.config, control memory allocation and garbage collection to optimize performance; for example, properties like -Xmx16G set maximum heap size, while -XX:+UseG1GC enables the G1 garbage collector to handle large heaps efficiently. Catalog files, located in etc/catalog/, define data sources with properties such as connector.name=hive-hadoop2 to specify the connector type and hive.metastore.uri for metadata access, allowing Presto to interface with diverse storage systems without code changes. These configurations ensure reliable operation and adaptability in distributed environments.
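Assembled from the properties named above, a minimal single-coordinator setup might look like the following sketch (hostnames, ports, and heap sizes are hypothetical; coordinator and http-server.http.port are standard Presto properties):

    etc/config.properties (on the coordinator):
      coordinator=true
      http-server.http.port=8080
      discovery-server.enabled=true
      discovery.uri=http://coordinator.example.com:8080

    etc/jvm.config:
      -Xmx16G
      -XX:+UseG1GC

    etc/catalog/hive.properties:
      connector.name=hive-hadoop2
      hive.metastore.uri=thrift://metastore.example.com:9083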

Query Execution Model

Presto processes SQL queries through a multi-stage pipeline that transforms the input statement into executable distributed tasks, enabling efficient analysis across heterogeneous data sources. The process begins with parsing the SQL query into an abstract syntax tree (AST) using an ANTLR-based parser, which breaks down the statement into its syntactic components. This is followed by semantic analysis, where the analyzer resolves types, performs coercions, identifies functions, and extracts logical elements such as subqueries, ensuring the query is semantically valid and building an initial representation of the query structure.

Next, logical planning generates an intermediate representation as a tree of plan nodes, outlining the logical operations without specifying execution details. Optimization occurs in two phases: the logical optimizer applies rule-based transformations to reduce algorithmic complexity, such as predicate pushdown and join reordering, while the physical optimizer selects efficient distributed strategies, including join methods and data partitioning, based on cost estimates. Physical planning then converts this into a distributed execution plan, dividing the query into stages—where each stage represents a set of parallelizable operations—and assigning tasks within those stages to worker nodes.

The execution model employs a pipeline-based approach for fault-tolerant, streaming data processing, where operators are chained into pipelines within tasks to enable continuous data flow without materializing intermediate results to disk, thus minimizing latency and storage overhead. The coordinator splits the query into stages and tasks, distributing tasks and data splits to workers via the REST API; workers then execute these tasks concurrently, processing splits in parallel using drivers that sequence operators for intra-node pipelining. Data exchange between workers occurs through buffered connections, supporting low-latency shuffling for operations like joins and aggregations. The coordinator, serving as the query manager, tracks progress and aggregates final results from the root stage.

Fault handling in Presto emphasizes resilience without requiring full query restarts, leveraging a MapReduce-inspired execution model where individual tasks or partitions can be retried independently upon failure. For transient errors, low-level retries are applied during task execution, while stage-level failures trigger re-execution of affected components, such as lifespans in grouped execution, allowing partial recovery and maintaining query progress. This design ensures reliability in distributed environments, with the coordinator monitoring worker health and reassigning tasks as needed.
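The staged plan this section describes can be inspected directly: Presto's EXPLAIN (TYPE DISTRIBUTED) statement prints the plan fragments that become stages, including how data is partitioned and exchanged between them (the table below is hypothetical):

    EXPLAIN (TYPE DISTRIBUTED)
    SELECT region, count(*)
    FROM hive.crm.customers       -- hypothetical catalog.schema.table
    GROUP BY region;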

Integrations

Connectors and Data Sources

Presto employs a pluggable connector architecture based on the Service Provider Interface (SPI), allowing modular integration of diverse data sources as plugins within the query engine. This design enables administrators to configure catalogs that map to specific connectors, each handling metadata discovery, data access, and query optimization tailored to the underlying storage system. Connectors act as the primary interface for all data access in Presto, supporting a wide array of sources including relational databases, NoSQL stores, streaming systems, and file-based data lakes without requiring data replication or ETL processes.

Key connectors facilitate access to prominent data ecosystems. The Hive connector provides read access to data in HDFS or object storage like Amazon S3 and Google Cloud Storage (GCS), supporting formats such as ORC and Parquet through the Hive metastore for schema management. JDBC-based connectors enable integration with relational databases, including MySQL, PostgreSQL, and Microsoft SQL Server, by leveraging standard JDBC drivers to execute pushed-down operations where possible. For modern table formats, the Delta Lake connector allows querying ACID-compliant tables stored on S3 or GCS, using the Delta Kernel API for metadata handling and supporting time travel via snapshot or timestamp specifications, though it remains read-only with no support for schema evolution. Similarly, the Iceberg connector supports querying tables across catalogs like Hive Metastore, Hadoop, or AWS Glue, with compatibility for S3 and GCS storage, including features like hidden metadata tables and time travel, but write operations such as INSERT are limited or unavailable in certain implementations.

This architecture underpins Presto's federated querying capabilities, permitting complex operations like joins across heterogeneous sources—such as combining Hive tables with relational data—in a single SQL query without copying or moving data between systems. The engine pushes down eligible computations to the source connectors for efficiency, coordinating results in memory across the cluster. However, most connectors enforce read-only access to maintain consistency and avoid side effects on source systems, and handling of schema evolution varies by connector, with some like Hive offering basic support while others, such as Delta Lake, provide none. Trino, a fork of Presto, extends support with additional optimizations for schema evolution and writes, though core Presto maintains robust querying for these formats.
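A minimal sketch of such a federated query, joining a data lake table with an operational database in one statement (all catalog, schema, and table names are hypothetical):

    SELECT o.order_id, o.total, u.email
    FROM hive.warehouse.orders AS o        -- table in a Hive catalog
    JOIN mysql.crm.users AS u              -- table in a MySQL catalog
      ON o.user_id = u.id
    WHERE o.order_date = DATE '2024-06-01';

Each catalog name resolves to a connector configured in etc/catalog/, so the same SQL syntax spans both systems.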

Ecosystem Tools

Presto provides a range of client tools to facilitate interaction with the query engine, including the Presto CLI, which serves as a terminal-based interface for submitting SQL queries and managing sessions interactively or in batch mode. The CLI supports authentication methods such as password files and integrates seamlessly with the engine's coordinator for real-time query execution. For programmatic access, Presto offers a JDBC driver that enables Java-based applications to connect and execute queries, supporting features like connection pooling and transaction control. ODBC drivers, available through third-party providers, allow non-Java applications to interface with Presto via standard ODBC connectivity, bridging to tools that require this protocol.

Integrations with business intelligence (BI) tools extend Presto's usability for visualization and reporting. Tableau features a native Presto connector, enabling direct data source configuration and live query execution within dashboards for ad hoc analysis. Similarly, Apache Superset supports Presto as a database backend through its SQL Lab and visualization interfaces, allowing users to build charts and explore federated sources efficiently.

Monitoring capabilities in Presto include a built-in web UI on the coordinator node, which displays query statistics, execution plans, and resource utilization for ongoing and historical queries. For advanced observability, Presto exposes JMX metrics that can be scraped by Prometheus, enabling the collection of performance data such as CPU usage, query latency, and worker node health. These metrics integrate with Grafana for customizable dashboards, providing visualizations of cluster throughput and error rates to aid in troubleshooting and capacity planning. Additionally, Presto supports deployment on resource managers like Apache YARN, where monitoring leverages YARN's native tools alongside Presto-specific metrics for distributed resource tracking.

Orchestration tools enhance Presto's integration into automated workflows. Apache Airflow includes a dedicated Presto provider package with operators and hooks for scheduling queries, managing connections, and incorporating Presto tasks into directed acyclic graphs (DAGs) for ETL pipelines. For containerized environments, Trino—the open-source fork of Presto—offers a Kubernetes operator that automates cluster deployment, scaling, and configuration, including support for custom plugins and catalog setups. As of August 2025, Presto upgraded to Java 17, enhancing security for integrations through improved TLS support and cryptographic protocols while maintaining compatibility for existing JDBC drivers and the CLI with Java 8.
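As a sketch of the CLI's batch mode (the server address, catalog, and query are hypothetical; --server, --catalog, --schema, and --execute are documented CLI options):

    presto --server coordinator.example.com:8080 \
           --catalog hive --schema web \
           --execute "SELECT count(*) FROM users"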

Adoption and Use Cases

Notable Deployments

Meta (formerly Facebook) developed Presto as its primary query engine for interactive analytics on massive datasets, initially processing 1 petabyte of data daily across a warehouse exceeding 300 petabytes in 2013. By 2023, Presto had scaled to support multiple exabyte-scale data sources in Meta's global data lakes, handling both low-latency interactive queries and long-running ETL jobs across clusters in multiple data centers.

Netflix adopted PrestoDB in 2014 to enable low-latency ad-hoc queries on its 10-petabyte data warehouse stored in Amazon S3, supporting analyses such as A/B test results for product insights. The deployment utilized around 250 EC2 worker nodes for approximately 2,500 daily queries, separating compute from storage to allow shared access to S3 data without interfering with Hadoop workflows.

Uber deployed Presto to power real-time analytics on its big data platform, initially integrating with Apache Hive for historical queries on HDFS-stored data. It later enhanced Presto with connectors to Apache Pinot for sub-second latency on real-time data from Kafka, enabling use cases like real-time dashboards while supporting backfills. Presto integrates with Uber's machine learning platform by querying fresh data in Pinot for model monitoring and feature generation. As of 2024, Uber runs Presto across over 12,000 nodes in more than 20 clusters, processing approximately 100 petabytes daily and handling 500,000 queries per day.

Twitter (now X) uses Presto for large-scale federated SQL queries in the cloud, enabling ad-hoc analysis across diverse data sources. Alibaba employs Presto in its analytics services for high-performance querying of petabyte-scale data in hybrid cloud environments.

Amazon Athena, built on Presto (with later engine versions based on Trino), provides serverless SQL querying directly on data in Amazon S3, automatically scaling to handle petabyte-scale datasets without infrastructure management. It supports federated queries that join S3 data with sources like Amazon RDS, DynamoDB, and DocumentDB using Lambda-based connectors, allowing complex cross-source analysis in a single SQL statement. As of 2025, Athena has evolved with enhancements including federated queries and KMS encryption support on trusted identity propagation (TIP)-enabled workgroups, serving high-volume ad-hoc querying for millions of users globally.

Common Applications

Presto is widely applied in scenarios involving interactive analytics and data exploration, where its ability to query diverse data sources efficiently supports rapid insights without data movement. Common patterns include ad-hoc analysis by data professionals, unification of disparate datasets for reporting, real-time monitoring of event streams, and preparation of large-scale datasets for machine learning workflows.

In ad-hoc querying, data analysts leverage Presto to perform exploratory SQL queries directly on data warehouses or lakes, enabling interactive analysis of terabytes or petabytes of data without the need for pre-computed structures or lengthy preparation steps. This approach replaces slower batch-oriented tools like Hive, allowing sub-second to minute-scale responses for complex aggregations and joins in exploratory sessions.

As an alternative to traditional ETL processes, Presto facilitates federated joins across heterogeneous data sources, such as combining records from relational databases and NoSQL stores to build unified reports or pipelines. This capability, enabled by its connector architecture, reduces the overhead of data ingestion and transformation by querying sources in place, supporting multi-hour jobs on large datasets for analytics and reporting.

For real-time analytics, Presto integrates with streaming platforms like Apache Kafka to enable low-latency queries on live data feeds, powering dashboards and monitoring applications that require ongoing aggregation and alerting. Such integrations allow organizations to process and analyze event streams in near real-time, deriving insights from high-velocity data without dedicated stream-processing engines.

In machine learning data preparation, Presto is used for sampling, feature extraction, and aggregation from massive datasets stored in various formats, streamlining the path from data access to model-ready inputs. By supporting SQL-based operations on distributed sources, it accelerates iterative experimentation and reduces the time spent on data preparation for training pipelines; see the sketch below.
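A minimal sketch of SQL-based sampling for feature work (the names are hypothetical; TABLESAMPLE BERNOULLI is a documented Presto construct):

    -- Draw an approximate 1% row sample for feature exploration
    SELECT user_id, avg(session_length) AS avg_session
    FROM hive.analytics.sessions TABLESAMPLE BERNOULLI (1)
    GROUP BY user_id;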
