Online analytical processing
In computing, online analytical processing (OLAP) (/ˈoʊlæp/) is an approach to answering multi-dimensional analytical (MDA) queries quickly.[1] The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP).[2] OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining.[3] Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM),[4] budgeting and forecasting, financial reporting and similar areas, with new applications emerging, such as agriculture.[5]
OLAP tools enable users to analyse multidimensional data interactively from multiple perspectives. OLAP consists of three basic analytical operations: consolidation (roll-up), drill-down, and slicing and dicing.[6]: 402–403 Consolidation involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends. By contrast, drill-down is a technique that allows users to navigate through the details. For instance, users can view the sales by the individual products that make up a region's sales. Slicing and dicing is a feature whereby users can take out (slicing) a specific set of data from the OLAP cube and view (dicing) the slices from different viewpoints. These viewpoints are sometimes called dimensions (such as looking at the same sales by salesperson, by date, by customer, by product, or by region).
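These three operations can be sketched on a toy fact table in plain Python; the rows and the field names (region, office, product, amount) are invented for illustration:

```python
from collections import defaultdict

# Toy fact table: one row per sale (invented illustrative data).
sales = [
    {"region": "East", "office": "Boston",  "product": "A", "amount": 100},
    {"region": "East", "office": "Albany",  "product": "B", "amount": 50},
    {"region": "West", "office": "Seattle", "product": "A", "amount": 75},
]

def roll_up(rows, dim, measure="amount"):
    """Consolidation: aggregate the measure up to one dimension."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[dim]] += row[measure]
    return dict(totals)

def slice_cube(rows, dim, value):
    """Slicing: keep only the rows where one dimension is fixed."""
    return [row for row in rows if row[dim] == value]

# Roll offices up to regions (consolidation) ...
print(roll_up(sales, "region"))  # {'East': 150, 'West': 75}
# ... then drill back down to offices within the East slice.
print(roll_up(slice_cube(sales, "region", "East"), "office"))  # {'Boston': 100, 'Albany': 50}
```

Dicing is the same slicing step applied on several dimensions at once, after which the remaining rows can be rolled up along any viewpoint.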
Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with a rapid execution time.[7] They borrow aspects of navigational databases, hierarchical databases and relational databases.
OLAP is typically contrasted with OLTP (online transaction processing), which is generally characterized by much less complex queries in larger volume, executed to process transactions rather than for business intelligence or reporting. Whereas OLAP systems are mostly optimized for reads, OLTP has to process all kinds of queries (read, insert, update and delete).
Overview of OLAP systems
At the core of any OLAP system is an OLAP cube (also called a 'multidimensional cube' or a hypercube). It consists of numeric facts called measures that are categorized by dimensions. The measures are placed at the intersections of the hypercube, which is spanned by the dimensions as a vector space. The usual interface to manipulate an OLAP cube is a matrix interface, like Pivot tables in a spreadsheet program, which performs projection operations along the dimensions, such as aggregation or averaging.
The cube metadata is typically created from a star schema or snowflake schema or fact constellation of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tables.
Each measure can be thought of as having a set of labels, or meta-data associated with it. A dimension is what describes these labels; it provides information about the measure.
A simple example would be a cube that contains a store's sales as a measure, and Date/Time as a dimension. Each Sale has a Date/Time label that describes more about that sale.
For example:
Sales Fact Table
+-------------+---------+
| sale_amount | time_id |
+-------------+---------+        Time Dimension
|      930.10 |    1234 |---+    +---------+-------------------+
+-------------+---------+   |    | time_id | timestamp         |
                            |    +---------+-------------------+
                            +--->|    1234 | 20080902 12:35:43 |
                                 +---------+-------------------+
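Resolving the fact row against its dimension table, as the arrow above indicates, can be sketched in a few lines of Python (values copied from the diagram):

```python
# Mirror of the tables above: the fact table holds the measure,
# the dimension table supplies the label for each time_id key.
fact_sales = [{"sale_amount": 930.10, "time_id": 1234}]
dim_time = {1234: "20080902 12:35:43"}

# Joining fact to dimension yields a labelled measure, i.e. one
# cell of this (one-dimensional) cube.
labelled = [
    {"timestamp": dim_time[f["time_id"]], "sale_amount": f["sale_amount"]}
    for f in fact_sales
]
print(labelled)  # [{'timestamp': '20080902 12:35:43', 'sale_amount': 930.1}]
```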
Multidimensional databases
Multidimensional structure is defined as "a variation of the relational model that uses multidimensional structures to organize data and express the relationships between data".[6]: 177 The structure is broken into cubes and the cubes are able to store and access data within the confines of each cube. "Each cell within a multidimensional structure contains aggregated data related to elements along each of its dimensions".[6]: 178 Even when data is manipulated it remains easy to access and continues to constitute a compact database format. The data still remains interrelated. Multidimensional structure is quite popular for analytical databases that use online analytical processing (OLAP) applications.[6] Analytical databases use these databases because of their ability to deliver answers to complex business queries swiftly. Data can be viewed from different angles, which gives a broader perspective of a problem unlike other models.[8]
Aggregations
It has been claimed that for complex queries OLAP cubes can produce an answer in around 0.1% of the time required for the same query on OLTP relational data.[9][10] The most important mechanism in OLAP which allows it to achieve such performance is the use of aggregations. Aggregations are built from the fact table by changing the granularity on specific dimensions and aggregating up data along these dimensions, using an aggregate function (or aggregation function). The number of possible aggregations is determined by every possible combination of dimension granularities.
The combination of all possible aggregations and the base data contains the answers to every query which can be answered from the data.[11]
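This can be made concrete with a small sketch that enumerates every subset of dimensions over an invented two-dimension fact table; real systems also vary the granularity within each dimension (e.g. month vs. year), which the sketch omits:

```python
from collections import defaultdict
from itertools import combinations

rows = [
    {"region": "East", "product": "A", "amount": 100},
    {"region": "East", "product": "B", "amount": 50},
    {"region": "West", "product": "A", "amount": 75},
]
dims = ["region", "product"]

# Every subset of dimensions defines one aggregation (2^n group-bys);
# together with the base data they form the lattice that can answer
# any query over these dimensions.
cube = {}
for r in range(len(dims) + 1):
    for subset in combinations(dims, r):
        totals = defaultdict(int)
        for row in rows:
            totals[tuple(row[d] for d in subset)] += row["amount"]
        cube[subset] = dict(totals)

print(len(cube))           # 4 aggregations for 2 dimensions
print(cube[()])            # grand total: {(): 225}
print(cube[("region",)])   # {('East',): 150, ('West',): 75}
```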
Because usually there are many aggregations that can be calculated, often only a predetermined number are fully calculated; the remainder are solved on demand. The problem of deciding which aggregations (views) to calculate is known as the view selection problem. View selection can be constrained by the total size of the selected set of aggregations, the time to update them from changes in the base data, or both. The objective of view selection is typically to minimize the average time to answer OLAP queries, although some studies also minimize the update time. View selection is NP-complete. Many approaches to the problem have been explored, including greedy algorithms, randomized search, genetic algorithms and A* search.
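A minimal sketch of the greedy approach, with invented view sizes: each round materializes the candidate view that most reduces total query cost, where a query is answered from the smallest materialized view that still contains its dimensions.

```python
# Views are dimension subsets; sizes are invented illustrative row counts.
size = {
    frozenset({"region", "product"}): 100,  # base data, always kept
    frozenset({"region"}): 10,
    frozenset({"product"}): 20,
    frozenset(): 1,                         # grand total
}
queries = list(size)  # assume one query at each granularity

def total_cost(materialized):
    # A query on dims q is answered from the smallest materialized
    # view v that still contains all of q's dimensions (q <= v).
    return sum(min(size[v] for v in materialized if q <= v) for q in queries)

materialized = {frozenset({"region", "product"})}
for _ in range(2):  # budget: materialize two extra views
    best = min(
        (v for v in size if v not in materialized),
        key=lambda v: total_cost(materialized | {v}),
    )
    materialized.add(best)

print(total_cost(materialized))  # 140, down from 400 with only the base data
```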
Some aggregation functions can be computed for the entire OLAP cube by precomputing values for each cell, and then computing the aggregation for a roll-up of cells by aggregating these aggregates, applying a divide and conquer algorithm to the multidimensional problem to compute them efficiently.[12] For example, the overall sum of a roll-up is just the sum of the sub-sums in each cell. Functions that can be decomposed in this way are called decomposable aggregation functions, and include COUNT, MAX, MIN, and SUM, which can be computed for each cell and then directly aggregated; these are known as self-decomposable aggregation functions.[13]
In other cases, the aggregate function can be computed by computing auxiliary numbers for cells, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end). In still other cases, the aggregate function cannot be computed without analyzing the entire set at once, though in some cases approximations can be computed; examples include DISTINCT COUNT, MEDIAN, and MODE; for example, the median of a set is not the median of the medians of its subsets. The latter are difficult to implement efficiently in OLAP, as they require computing the aggregate function on the base data, either computing it online (slow) or precomputing it for possible rollups (large space).
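The distinction can be demonstrated with Python's statistics module on invented data: AVERAGE decomposes through auxiliary (sum, count) pairs, while MEDIAN does not.

```python
import statistics

# AVERAGE decomposes via an auxiliary (sum, count) pair per cell:
# aggregate the pairs, then divide once at the end.
cells = [[3, 5], [7], [2, 4, 6]]              # invented base data, split into cells
partials = [(sum(c), len(c)) for c in cells]  # per-cell auxiliaries
total, count = (sum(x) for x in zip(*partials))
print(total / count)                          # 4.5, the mean of all base data

# MEDIAN does not decompose: the median of the per-cell medians
# differs from the median of the full data set.
flat = [x for c in cells for x in c]
print(statistics.median([statistics.median(c) for c in cells]))  # 4
print(statistics.median(flat))                                   # 4.5
```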
Types
OLAP systems have been traditionally categorized using the following taxonomy.[14]
Multidimensional OLAP (MOLAP)
MOLAP (multi-dimensional online analytical processing) is the classic form of OLAP and is sometimes referred to as just OLAP. MOLAP stores data in optimized multi-dimensional array storage, rather than in a relational database.
Some MOLAP tools require the pre-computation and storage of derived data, such as consolidations – the operation known as processing. Such MOLAP tools generally utilize a pre-calculated data set referred to as a data cube. The data cube contains all the possible answers to a given range of questions. As a result, they have a very fast response to queries. On the other hand, updating can take a long time depending on the degree of pre-computation. Pre-computation can also lead to what is known as data explosion.
Other MOLAP tools, particularly those that implement the functional database model, do not pre-compute derived data but make all calculations on demand, apart from those that were previously requested and stored in a cache.
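A minimal sketch of this calculate-on-demand style uses a memoizing cache; the data and function names are invented:

```python
from functools import lru_cache

SALES = (("East", 100), ("East", 50), ("West", 75))  # invented data

@lru_cache(maxsize=None)
def region_total(region):
    # Calculated the first time it is requested, then served from cache.
    return sum(amount for r, amount in SALES if r == region)

print(region_total("East"))            # computed on demand: 150
print(region_total("East"))            # answered from the cache: 150
print(region_total.cache_info().hits)  # 1 cache hit so far
```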
Advantages of MOLAP
- Fast query performance due to optimized storage, multidimensional indexing and caching.
- Smaller on-disk size of data compared to data stored in relational database due to compression techniques.
- Automated computation of higher-level aggregates of the data.
- It is very compact for low dimension data sets.
- Array models provide natural indexing.
- Effective data extraction achieved through the pre-structuring of aggregated data.
Disadvantages of MOLAP
- Within some MOLAP systems the processing step (data load) can be quite lengthy, especially on large data volumes. This is usually remedied by doing only incremental processing, i.e., processing only the data which have changed (usually new data) instead of reprocessing the entire data set.
- Some MOLAP methodologies introduce data redundancy.
Products
Examples of commercial products that use MOLAP are Cognos Powerplay, Oracle Database OLAP Option, MicroStrategy, Microsoft Analysis Services, Essbase, TM1, Jedox, and icCube.
Relational OLAP (ROLAP)
ROLAP works directly with relational databases and does not require pre-computation. The base data and the dimension tables are stored as relational tables and new tables are created to hold the aggregated information. It depends on a specialized schema design. This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a "WHERE" clause in the SQL statement. ROLAP tools do not use pre-calculated data cubes but instead pose the query to the standard relational database and its tables in order to bring back the data required to answer the question. ROLAP tools feature the ability to ask any question because the methodology is not limited to the contents of a cube. ROLAP also has the ability to drill down to the lowest level of detail in the database.
While ROLAP uses a relational database source, generally the database must be carefully designed for ROLAP use. A database which was designed for OLTP will not function well as a ROLAP database. Therefore, ROLAP still involves creating an additional copy of the data. However, since it is a database, a variety of technologies can be used to populate the database.
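The WHERE-clause equivalence described above can be sketched with Python's built-in sqlite3 module; the table, columns and data are invented:

```python
import sqlite3

# ROLAP sketch: the "cube" is just a relational fact table, and each
# slice or dice becomes a WHERE predicate (schema and data invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", "A", 100), ("East", "B", 50), ("West", "A", 75)],
)

# Dicing on two dimensions = two predicates in the WHERE clause.
total, = con.execute(
    "SELECT SUM(amount) FROM sales WHERE region = ? AND product = ?",
    ("East", "A"),
).fetchone()
print(total)  # 100.0
```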
Advantages of ROLAP
- ROLAP is considered to be more scalable in handling large data volumes, especially models with dimensions of very high cardinality (i.e., millions of members).
- With a variety of data loading tools available, and the ability to fine-tune the extract, transform, load (ETL) code to the particular data model, load times are generally much shorter than with the automated MOLAP loads.
- The data are stored in a standard relational database and can be accessed by any SQL reporting tool (the tool does not have to be an OLAP tool).
- ROLAP tools are better at handling non-aggregable facts (e.g., textual descriptions). MOLAP tools tend to suffer from slow performance when querying these elements.
- By decoupling the data storage from the multi-dimensional model, it is possible to successfully model data that would not otherwise fit into a strict dimensional model.
- The ROLAP approach can leverage database authorization controls such as row-level security, whereby the query results are filtered depending on preset criteria applied, for example, to a given user or group of users (SQL WHERE clause).
Disadvantages of ROLAP
- There is a consensus in the industry that ROLAP tools have slower performance than MOLAP tools. However, see the discussion below about ROLAP performance.
- The loading of aggregate tables must be managed by custom ETL code. The ROLAP tools do not help with this task. This means additional development time and more code to support.
- When the step of creating aggregate tables is skipped, query performance suffers because the larger detailed tables must be queried. This can be partially remedied by adding additional aggregate tables; however, it is still not practical to create aggregate tables for all combinations of dimensions/attributes.
- ROLAP relies on the general-purpose database for querying and caching, and therefore several special techniques employed by MOLAP tools are not available (such as special hierarchical indexing). However, modern ROLAP tools take advantage of latest improvements in SQL language such as CUBE and ROLLUP operators, DB2 Cube Views, as well as other SQL OLAP extensions. These SQL improvements can mitigate the benefits of the MOLAP tools.
- Since ROLAP tools rely on SQL for all of the computations, they are not suitable when the model is heavy on calculations which don't translate well into SQL. Examples of such models include budgeting, allocations, financial reporting and other scenarios.
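The ROLLUP operator mentioned above adds higher-level subtotals to a GROUP BY result. SQLite, used here only as a self-contained stand-in, lacks ROLLUP, so this sketch emulates GROUP BY ROLLUP(region) with a UNION ALL; the schema and data are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("East", 100), ("East", 50), ("West", 75)])

# Per-region subtotals plus a NULL-labelled grand total: the shape
# GROUP BY ROLLUP(region) would produce in engines that support it.
rows = con.execute("""
    SELECT region, SUM(amount) FROM sales GROUP BY region
    UNION ALL
    SELECT NULL, SUM(amount) FROM sales
""").fetchall()
print(rows)
```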
Performance of ROLAP
In the OLAP industry ROLAP is usually perceived as being able to scale for large data volumes but to suffer from slower query performance than MOLAP. The OLAP Survey, the largest independent survey across all major OLAP products, conducted for six years (2001 to 2006), consistently found that companies using ROLAP reported slower performance than those using MOLAP, even when data volumes were taken into consideration.
However, as with any survey there are a number of subtle issues that must be taken into account when interpreting the results.
- The survey shows that ROLAP tools have 7 times more users than MOLAP tools within each company. Systems with more users will tend to suffer more performance problems at peak usage times.
- There is also a question about complexity of the model, measured both in number of dimensions and richness of calculations. The survey does not offer a good way to control for these variations in the data being analyzed.
Downside of flexibility
Some companies select ROLAP because they intend to re-use existing relational database tables—these tables will frequently not be optimally designed for OLAP use. The superior flexibility of ROLAP tools allows this less-than-optimal design to work, but performance suffers. MOLAP tools in contrast would force the data to be re-loaded into an optimal OLAP design.
Hybrid OLAP (HOLAP)
The undesirable trade-off between additional ETL cost and slow query performance has ensured that most commercial OLAP tools now use a "Hybrid OLAP" (HOLAP) approach, which allows the model designer to decide which portion of the data will be stored in MOLAP and which portion in ROLAP.
There is no clear agreement across the industry as to what constitutes "Hybrid OLAP", except that a database will divide data between relational and specialized storage.[15] For example, for some vendors, a HOLAP database will use relational tables to hold the larger quantities of detailed data and use specialized storage for at least some aspects of the smaller quantities of more-aggregate or less-detailed data. HOLAP addresses the shortcomings of MOLAP and ROLAP by combining the capabilities of both approaches. HOLAP tools can utilize both pre-calculated cubes and relational data sources.
Vertical partitioning
In this mode HOLAP stores aggregations in MOLAP for fast query performance, and detailed data in ROLAP to shorten cube processing time.
Horizontal partitioning
In this mode HOLAP stores some slice of the data, usually the most recent (i.e., sliced by the Time dimension), in MOLAP for fast query performance, and older data in ROLAP. Some sub-cubes (dices) can also be stored in MOLAP and others in ROLAP, exploiting the fact that a large cuboid will contain both dense and sparse subregions.[16]
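A toy router illustrates horizontal partitioning by the Time dimension; the cutoff year, store contents and figures are all invented:

```python
# HOLAP horizontal-partitioning sketch: recent slices live in a fast
# "MOLAP" store, older ones in a "ROLAP" store; a router picks the
# store by the Time dimension.
CUTOFF_YEAR = 2023
molap_store = {2023: 150, 2024: 210}  # recent, pre-aggregated totals
rolap_store = {2021: 90, 2022: 120}   # older, relational detail

def yearly_total(year):
    store = molap_store if year >= CUTOFF_YEAR else rolap_store
    return store[year]

print(yearly_total(2024))  # 210, served from the MOLAP partition
print(yearly_total(2021))  # 90, served from the ROLAP partition
```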
Products
The first product to provide HOLAP storage was Holos, but the technology also became available in other commercial products such as Microsoft Analysis Services, Oracle Database OLAP Option, MicroStrategy and SAP AG BI Accelerator. The hybrid OLAP approach combines ROLAP and MOLAP technology, benefiting from the greater scalability of ROLAP and the faster computation of MOLAP. For example, a HOLAP server may store large volumes of detailed data in a relational database, while aggregations are kept in a separate MOLAP store. Microsoft SQL Server 7.0 OLAP Services supports a hybrid OLAP server.
Comparison
Each type has certain benefits, although there is disagreement about the specifics of the benefits between providers.
- Some MOLAP implementations are prone to database explosion, a phenomenon causing vast amounts of storage space to be used by MOLAP databases when certain common conditions are met: high number of dimensions, pre-calculated results and sparse multidimensional data.
- MOLAP generally delivers better performance due to specialized indexing and storage optimizations. MOLAP also needs less storage space compared to ROLAP because the specialized storage typically includes compression techniques.[15]
- ROLAP is generally more scalable.[15] However, large volume pre-processing is difficult to implement efficiently so it is frequently skipped. ROLAP query performance can therefore suffer tremendously.
- Since ROLAP relies more on the database to perform calculations, it has more limitations in the specialized functions it can use.
- HOLAP attempts to mix the best of ROLAP and MOLAP. It can generally pre-process swiftly, scale well, and offer good function support.
Other types
The following acronyms are also sometimes used, although they are not as widespread as the ones above:
- WOLAP – Web-based OLAP
- DOLAP – Desktop OLAP
- RTOLAP – Real-time OLAP
- GOLAP – Graph OLAP[17][18]
- CaseOLAP – Context-aware Semantic OLAP,[19] developed for biomedical applications.[20] The CaseOLAP platform includes data preprocessing (e.g., downloading, extraction, and parsing text documents), indexing and searching with Elasticsearch, creating a functional document structure called Text-Cube,[21][22][23][24][25] and quantifying user-defined phrase-category relationships using the core CaseOLAP algorithm.
APIs and query languages
Unlike relational databases, which had SQL as the standard query language and widespread APIs such as ODBC, JDBC and OLE DB, there was no such unification in the OLAP world for a long time. The first real standard API was Microsoft's OLE DB for OLAP specification, which appeared in 1997 and introduced the MDX query language. Several OLAP vendors – both server and client – adopted it. In 2001 Microsoft and Hyperion announced the XML for Analysis specification, which was endorsed by most of the OLAP vendors. Since this also used MDX as a query language, MDX became the de facto standard.[26] Since September 2011, LINQ can be used to query SSAS OLAP cubes from Microsoft .NET.[27]
Products
History
The first product that performed OLAP queries was Express, which was released in 1970 (and acquired by Oracle in 1995 from Information Resources).[28] However, the term did not appear until 1993, when it was coined by Edgar F. Codd, who has been described as "the father of the relational database". Codd's paper[1] resulted from a short consulting assignment which Codd undertook for the former Arbor Software (later Hyperion Solutions, and in 2007 acquired by Oracle), as a sort of marketing coup.
The company had released its own OLAP product, Essbase, a year earlier. As a result, Codd's "twelve laws of online analytical processing" were explicit in their reference to Essbase. There was some ensuing controversy and when Computerworld learned that Codd was paid by Arbor, it retracted the article. The OLAP market experienced strong growth in the late 1990s with dozens of commercial products going into market. In 1998, Microsoft released its first OLAP Server – Microsoft Analysis Services, which drove wide adoption of OLAP technology and moved it into the mainstream.
Product comparison
OLAP clients
OLAP clients include many spreadsheet programs such as Excel, web applications, SQL clients, dashboard tools, and more. Many clients support interactive data exploration where users select dimensions and measures of interest. Some dimensions are used as filters (for slicing and dicing the data) while others are selected as the axes of a pivot table or pivot chart. Users can also vary the aggregation level of the displayed view (for drilling down or rolling up). Clients can also offer a variety of graphical widgets such as sliders, geographic maps, heat maps and more, which can be grouped and coordinated as dashboards. An extensive list of clients appears in the visualization column of the comparison of OLAP servers table.
Market structure
Below is a list of the top OLAP vendors in 2006, with figures in millions of US dollars.[29]
| Vendor | Global Revenue | Consolidated company |
|---|---|---|
| Microsoft Corporation | 1,806 | Microsoft |
| Hyperion Solutions Corporation | 1,077 | Oracle |
| Cognos | 735 | IBM |
| Business Objects | 416 | SAP |
| MicroStrategy | 416 | MicroStrategy |
| SAP AG | 330 | SAP |
| Cartesis (SAP) | 210 | SAP |
| Applix | 205 | IBM |
| Infor | 199 | Infor |
| Oracle Corporation | 159 | Oracle |
| Others | 152 | Others |
| Total | 5,700 | |
Open source
- Apache Pinot is used at LinkedIn, Cisco, Uber, Slack, Stripe, DoorDash, Target, Walmart, Amazon, and Microsoft to deliver scalable real-time analytics with low latency.[30] It can ingest data from offline data sources (such as Hadoop and flat files) as well as online sources (such as Kafka). Pinot is designed to scale horizontally.
- Mondrian OLAP server is an open-source OLAP server written in Java. It supports the MDX query language, the XML for Analysis and the olap4j interface specifications.
- Apache Doris is an open-source real-time analytical database based on MPP architecture. It can support both high-concurrency point query scenarios and high-throughput complex analysis.[31]
- Apache Druid is a popular open-source distributed data store for OLAP queries that is used at scale in production by various organizations.
- Apache Kylin is a distributed data store for OLAP queries originally developed by eBay.
- Cubes (OLAP server) is another lightweight open-source toolkit implementation of OLAP functionality in the Python programming language with built-in ROLAP.
- ClickHouse is a column-oriented DBMS focused on fast query processing and short response times.
- DuckDB[32] is an in-process SQL OLAP[33] database management system.
- MonetDB is a mature open-source column-oriented SQL RDBMS designed for OLAP queries.
References

Citations
- ^ a b Codd E.F.; Codd S.B. & Salley C.T. (1993). "Providing OLAP (On-line Analytical Processing) to User-Analysts: An IT Mandate" (PDF). Codd & Date, Inc. Retrieved March 5, 2008.[permanent dead link]
- ^ "OLAP Council White Paper" (PDF). OLAP Council. 1997. Retrieved March 18, 2008.
- ^ Deepak Pareek (2007). Business Intelligence for Telecommunications. CRC Press. 294 pp. ISBN 978-0-8493-8792-0. Retrieved March 18, 2008.
- ^ Apostolos Benisis (2010). Business Process Management: A Data Cube To Analyze Business Process Simulation Data For Decision Making. VDM Verlag Dr. Müller e.K. 204 pp. ISBN 978-3-639-22216-6.
- ^ Abdullah, Ahsan (November 2009). "Analysis of mealybug incidence on the cotton crop using ADSS-OLAP (Online Analytical Processing) tool". Computers and Electronics in Agriculture. 69 (1): 59–72. Bibcode:2009CEAgr..69...59A. doi:10.1016/j.compag.2009.07.003.
- ^ a b c d O'Brien, J. A., & Marakas, G. M. (2009). Management information systems (9th ed.). Boston, MA: McGraw-Hill/Irwin.
- ^ Hari Mailvaganam (2007). "Introduction to OLAP – Slice, Dice and Drill!". Data Warehousing Review. Archived from the original on May 22, 2013. Retrieved March 18, 2008.
- ^ Williams, C.; Garza, V.R.; Tucker, S.; Marcus, A.M. (January 24, 1994). "Multidimensional models boost viewing options". InfoWorld, 16(4).
- ^ MicroStrategy, Incorporated (1995). "The Case for Relational OLAP" (PDF). Retrieved March 20, 2008.
- ^ Surajit Chaudhuri & Umeshwar Dayal (1997). "An overview of data warehousing and OLAP technology". SIGMOD Rec. 26 (1): 65. CiteSeerX 10.1.1.211.7178. doi:10.1145/248603.248616. S2CID 8125630.
- ^ Gray, Jim; Chaudhuri, Surajit; Layman, Andrew; Reichart, Don; Venkatrao, Murali; Pellow, Frank; Pirahesh, Hamid (1997). "Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Totals". J. Data Mining and Knowledge Discovery. 1 (1): 29–53. arXiv:cs/0701155. doi:10.1023/A:1009726021843. S2CID 12502175. Retrieved March 20, 2008.
- ^ Zhang 2017, p. 1.
- ^ Jesus, Baquero & Almeida 2011, 2.1 Decomposable functions, pp. 3–4.
- ^ Nigel Pendse (June 27, 2006). "OLAP architectures". OLAP Report. Archived from the original on January 24, 2008. Retrieved March 17, 2008.
- ^ a b c Bach Pedersen, Torben; S. Jensen, Christian (December 2001). "Multidimensional Database Technology". Computer. 34 (12): 40–46. Bibcode:2001Compr..34l..40P. doi:10.1109/2.970558. ISSN 0018-9162.
- ^ Kaser, Owen; Lemire, Daniel (2006). "Attribute value reordering for efficient hybrid OLAP". Information Sciences. 176 (16): 2304–2336. arXiv:cs/0702143. doi:10.1016/j.ins.2005.09.005.
- ^ "This Week in Graph and Entity Analytics". Datanami. December 7, 2016. Retrieved March 8, 2018.
- ^ "Cambridge Semantics Announces AnzoGraph Support for Amazon Neptune and Graph Databases". Database Trends and Applications. February 15, 2018. Retrieved March 8, 2018.
- ^ Tao, Fangbo; Zhuang, Honglei; Yu, Chi Wang; Wang, Qi; Cassidy, Taylor; Kaplan, Lance; Voss, Clare; Han, Jiawei (2016). "Multi-Dimensional, Phrase-Based Summarization in Text Cubes" (PDF).
- ^ Liem, David A.; Murali, Sanjana; Sigdel, Dibakar; Shi, Yu; Wang, Xuan; Shen, Jiaming; Choi, Howard; Caufield, John H.; Wang, Wei; Ping, Peipei; Han, Jiawei (October 1, 2018). "Phrase mining of textual data to analyze extracellular matrix protein patterns across cardiovascular disease". American Journal of Physiology. Heart and Circulatory Physiology. 315 (4): H910 – H924. doi:10.1152/ajpheart.00175.2018. ISSN 1522-1539. PMC 6230912. PMID 29775406.
- ^ Lee, S.; Kim, N.; Kim, J. (2014). "A Multi-dimensional Analysis and Data Cube for Unstructured Text and Social Media". 2014 IEEE Fourth International Conference on Big Data and Cloud Computing. pp. 761–764. doi:10.1109/BDCloud.2014.117. ISBN 978-1-4799-6719-3. S2CID 229585.
- ^ Ding, B.; Lin, X.C.; Han, J.; Zhai, C.; Srivastava, A.; Oza, N.C. (December 2011). "Efficient Keyword-Based Search for Top-K Cells in Text Cube". IEEE Transactions on Knowledge and Data Engineering. 23 (12): 1795–1810. Bibcode:2011ITKDE..23.1795D. doi:10.1109/TKDE.2011.34. S2CID 13960227.
- ^ Ding, B.; Zhao, B.; Lin, C.X.; Han, J.; Zhai, C. (2010). "TopCells: Keyword-based search of top-k aggregated documents in text cube". 2010 IEEE 26th International Conference on Data Engineering (ICDE 2010). pp. 381–384. CiteSeerX 10.1.1.215.7504. doi:10.1109/ICDE.2010.5447838. ISBN 978-1-4244-5445-7. S2CID 14649087.
- ^ Lin, C.X.; Ding, B.; Han, K.; Zhu, F.; Zhao, B. (2008). "Text Cube: Computing IR Measures for Multidimensional Text Database Analysis". 2008 Eighth IEEE International Conference on Data Mining. pp. 905–910. doi:10.1109/icdm.2008.135. ISBN 978-0-7695-3502-9. S2CID 1522480.
- ^ Liu, X.; Tang, K.; Hancock, J.; Han, J.; Song, M.; Xu, R.; Pokorny, B. (March 21, 2013). Greenberg, A.M.; Kennedy, W.G.; Bos, N.D. (eds.). Social Computing, Behavioral-Cultural Modeling and Prediction: 6th International Conference, SBP 2013, Washington, DC, USA, April 2-5, 2013, Proceedings (7812 ed.). Berlin, Heidelberg: Springer. pp. 321–330. ISBN 978-3-642-37209-4.
- ^ Nigel Pendse (August 23, 2007). "Commentary: OLAP API wars". OLAP Report. Archived from the original on May 28, 2008. Retrieved March 18, 2008.
- ^ "SSAS Entity Framework Provider for LINQ to SSAS OLAP". Archived from the original on September 29, 2011.
- ^ Nigel Pendse (August 23, 2007). "The origins of today's OLAP products". OLAP Report. Archived from the original on December 21, 2007. Retrieved November 27, 2007.
- ^ Nigel Pendse (2006). "OLAP Market". OLAP Report. Archived from the original on March 30, 1997. Retrieved March 17, 2008.
- ^ Yegulalp, Serdar (June 11, 2015). "LinkedIn fills another SQL-on-Hadoop niche". InfoWorld. Retrieved November 19, 2016.
- ^ "Apache Doris". Github. Apache Doris Community. Retrieved April 5, 2023.
- ^ "An in-process SQL OLAP database management system". DuckDB. Retrieved December 10, 2022.
- ^ Anand, Chillar (November 17, 2022). "Common Crawl On Laptop - Extracting Subset Of Data". Avil Page. Retrieved December 10, 2022.
Sources
- Jesus, Paulo; Baquero, Carlos; Paulo Sérgio Almeida (2011). "A Survey of Distributed Data Aggregation Algorithms". arXiv:1110.0725 [cs.DC].
- Zhang, Chao (2017). Symmetric and Asymmetric Aggregate Function in Massively Parallel Computing (Technical report).
Further reading
- Erik Thomsen (1997). OLAP Solutions: Building Multidimensional Information Systems, 2nd Edition. John Wiley & Sons. ISBN 978-0-471-14931-6.
- Ling Liu and Tamer M. Özsu (Eds.) (2009). Encyclopedia of Database Systems. 4100 pp., 60 illus. ISBN 978-0-387-49616-0.
Fundamentals
Definition and Purpose
Online analytical processing (OLAP) is a technology designed to enable the rapid, interactive examination of large volumes of data organized in multiple dimensions, allowing users to gain insights from various analytical perspectives.[5] Coined by Edgar F. Codd in 1993, OLAP emphasizes multidimensional views of aggregated data to facilitate complex querying beyond traditional relational database operations.[6] The core purpose of OLAP is to empower business intelligence processes, including trend identification, forecasting, and informed decision-making, by supporting ad hoc exploration of large datasets.[5] It achieves this through key operations such as slicing (extracting data along a single dimension, e.g., sales for a specific year), dicing (defining a sub-cube with ranges across dimensions), drilling down (adding finer granularity, like from quarterly to monthly sales), drilling up (aggregating to higher levels, such as from products to categories), and pivoting (rotating axes to view data differently, like swapping rows and columns for region versus product analysis).[7] These capabilities address the need for flexible, on-the-fly analytics in environments where predefined reports fall short.[6]

In contrast to online transaction processing (OLTP), which manages numerous short, update-oriented transactions for day-to-day operations like recording a single purchase, OLAP prioritizes read-intensive, aggregative queries over historical and integrated data for analytical depth.[8] For instance, an OLAP system might compute total sales revenue by geographic region, product line, and fiscal quarter to uncover patterns, whereas an OLTP system ensures the integrity of each individual transaction entry in real time.[9] This distinction underscores OLAP's role in strategic analysis rather than operational efficiency.[10]

Multidimensional Data Model
The multidimensional data model forms the foundational structure for online analytical processing (OLAP), enabling the organization and analysis of large volumes of data from multiple perspectives. This model, proposed by Edgar F. Codd in 1993 as the basis for OLAP systems, emphasizes multidimensional databases that support dynamic, intuitive data exploration over traditional relational approaches.[11] In this paradigm, data is conceptualized as a multidimensional array, where categorical attributes define the axes of analysis, allowing users to perform complex aggregations and insights without predefined queries.[12] Dimensions represent the categorical attributes or perspectives along which data is analyzed, such as time, geography, or product categories, forming the edges of the analytical structure.[13] Each dimension consists of discrete values that categorize the data, enabling slicing and dicing operations to focus on specific subsets. Hierarchies within dimensions organize these values into leveled structures for progressive aggregation and navigation; for instance, a time dimension might include a hierarchy progressing from year to quarter to month, where higher levels (e.g., year) aggregate data from lower ones (e.g., months).[12][13] This hierarchical organization facilitates drill-down analysis, such as examining annual sales totals before breaking them into quarterly figures. Measures, in contrast, are the quantitative facts or numerical values stored at the intersections of dimensions, such as sales amounts or unit quantities, which are aggregated across dimensional axes to yield analytical results.[12] These measures form the core content of the model, with their values computed through functions like sum or average, providing the basis for business intelligence metrics. 
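As a minimal illustration, the hierarchy levels described above can be sketched in plain Python with invented data: monthly measure values roll up through a quarter level to a year total, mirroring drill-down navigation in reverse.

```python
# Hedged sketch of a time-dimension hierarchy (invented data): each
# month belongs to a quarter, and each quarter to the year, so measures
# at the month level aggregate upward through the hierarchy.
MONTH_TO_QUARTER = {"Jan": "Q1", "Feb": "Q1", "Mar": "Q1",
                    "Apr": "Q2", "May": "Q2", "Jun": "Q2"}

monthly_sales = {"Jan": 100, "Feb": 120, "Mar": 90, "Apr": 110}

def quarterly_sales():
    """Roll monthly measures up one hierarchy level (month -> quarter)."""
    totals = {}
    for month, value in monthly_sales.items():
        quarter = MONTH_TO_QUARTER[month]
        totals[quarter] = totals.get(quarter, 0) + value
    return totals

def yearly_sales():
    """Roll up the whole hierarchy: the year aggregates its quarters."""
    return sum(quarterly_sales().values())

print(quarterly_sales())   # {'Q1': 310, 'Q2': 110}
print(yearly_sales())      # 420
```

Drilling down from the annual total to quarterly figures simply reverses this aggregation, exposing the lower-level cells that were summed.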
For example, in a sales analysis, the measure might be total revenue, varying by dimensions like product and region.[13] The logical representation of this model is the OLAP cube, a multidimensional array that encapsulates measures along shared dimensions, visualized as a hypercube in higher dimensions but often exemplified in three dimensions for clarity.[12] Consider a three-dimensional sales cube with axes for time (e.g., months), product (e.g., categories like electronics or apparel), and geography (e.g., regions like North America or Europe); each cell at the intersection holds a measure value, such as sales dollars for electronics in North America during January, enabling rapid pivoting to view data from alternative perspectives.[13] In relational implementations, the multidimensional model is mapped to database schemas, primarily the star and snowflake designs, to store data in tables while preserving analytical efficiency. The star schema features a central fact table containing measures and foreign keys linking to surrounding dimension tables, each holding descriptive attributes for a single dimension, promoting simplicity and query performance.[12] The snowflake schema extends this by normalizing dimension tables into multiple related sub-tables, one per hierarchy level, reducing redundancy but potentially increasing join complexity during queries.[13] For instance, a product dimension in a snowflake schema might split into separate tables for categories, subcategories, and individual items.

Key Operations and Aggregations
Online analytical processing (OLAP) relies on a set of core operations that allow users to manipulate and explore multidimensional data cubes interactively. These operations enable analysts to view data from various perspectives without restructuring the underlying model. The primary operations, as defined in foundational OLAP literature, include slice, dice, drill-down, roll-up, and pivot, each facilitating different aspects of data navigation and summarization.[14] Slice fixes one dimension to a specific value, effectively reducing the cube to a lower-dimensional slice for focused analysis; for example, selecting sales data for a single year removes the time dimension, yielding a two-dimensional view of product and region. Dice extends this by selecting sub-ranges or specific values across multiple dimensions, extracting a smaller sub-cube; this might involve querying sales for a particular quarter in specific regions and product categories. Drill-down increases granularity by descending a hierarchy within a dimension, such as moving from yearly to monthly sales data to reveal underlying trends. Conversely, roll-up (also known as drill-up) aggregates data by ascending the hierarchy, summarizing lower-level details into higher-level overviews, like consolidating monthly sales into annual totals. Pivot rotates the axes of the cube to swap dimensions, providing alternative viewpoints; for instance, transposing rows (products) and columns (time) in a sales report to emphasize temporal patterns over products. These operations collectively support ad-hoc querying, allowing seamless transitions between detailed and summarized views.[14] Aggregations form the backbone of OLAP analysis, applying functions to measures across selected dimensions to derive insights. Common aggregation functions include sum (totaling values), average (mean across a set), count (number of non-null entries), minimum, and maximum, which compute summaries like total revenue or peak sales. 
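A minimal sketch of these operations, assuming a sparse cube held as a Python dict keyed only by populated (month, product, region) cells, with invented values:

```python
# Hedged sketch of core cube operations over a sparse cube: the dict
# stores only populated cells, so empty combinations cost no space.
from collections import defaultdict

cube = {("Jan", "Laptop", "NA"): 500, ("Jan", "Laptop", "EU"): 300,
        ("Feb", "Laptop", "NA"): 450, ("Feb", "Jacket", "NA"): 120}

def slice_(cube, axis, value):
    """Fix one dimension, returning a lower-dimensional sub-cube."""
    return {k[:axis] + k[axis + 1:]: v
            for k, v in cube.items() if k[axis] == value}

def dice(cube, allowed):
    """Keep cells whose coordinate on each axis lies in the allowed set."""
    return {k: v for k, v in cube.items()
            if all(coord in allowed[i] for i, coord in enumerate(k))}

def roll_up(cube, axis):
    """Aggregate (sum) away one dimension, e.g. months into totals."""
    totals = defaultdict(int)
    for k, v in cube.items():
        totals[k[:axis] + k[axis + 1:]] += v
    return dict(totals)

def pivot(cube, i, j):
    """Swap two axes, e.g. view product-by-month instead of month-by-product."""
    def swap(k):
        k = list(k); k[i], k[j] = k[j], k[i]; return tuple(k)
    return {swap(k): v for k, v in cube.items()}

print(slice_(cube, 0, "Jan"))   # January only: product x region view
print(roll_up(cube, 0))         # sum over months: product x region totals
```

Drill-down is the inverse of roll_up here: it re-exposes the finer-grained cells that roll_up summed away.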
For instance, total sales can be calculated as the sum over all relevant records:

TotalSales = Σ_t Σ_p Σ_l sales(t, p, l)

where t, p, and l range over the members of the selected dimensions, such as time, product, and location. To achieve interactive speeds, OLAP systems pre-compute these aggregations by materializing views (storing the results of common aggregations in advance), reducing query times from minutes to seconds on large datasets.[14] Multidimensional cubes often exhibit high sparsity, with most cells empty due to the combinatorial explosion of dimensions (e.g., not every product sells in every region every day). OLAP implementations address this through sparse storage techniques, such as hashing only non-zero cells or using bitmap indices and B-trees, which minimize memory usage while preserving query efficiency; this dynamic handling ensures that operations like roll-up or slice perform optimally even on sparse data.[14]

History and Evolution
Origins in the 1990s
The emergence of online analytical processing (OLAP) in the early 1990s addressed the growing demand for advanced data analysis tools amid the proliferation of business data following the relational database boom of the 1980s. Relational database management systems (RDBMS), while effective for transactional processing, struggled with the complex, ad-hoc queries required for business intelligence, such as multidimensional aggregations and slicing across large datasets, due to performance bottlenecks from extensive joins and normalization.[15][16] This limitation became particularly acute as enterprises accumulated vast amounts of operational data, necessitating faster, more intuitive analytics to support decision-making without disrupting online transaction processing (OLTP) systems.[17][18] A key precursor to OLAP was the concept of data warehousing, formalized by Bill Inmon in his 1992 book Building the Data Warehouse. Inmon advocated for a centralized repository of integrated, historical data separated from operational OLTP systems, enabling efficient querying for analytical purposes and laying the groundwork for distinguishing OLAP workloads from transactional ones.[19] This approach highlighted the need for specialized architectures to handle read-heavy, aggregate-oriented operations on cleaned, subject-oriented data stores. The term "OLAP" was coined by Edgar F. Codd in his seminal 1993 technical report, Providing OLAP to User-Analysts: An IT Mandate, co-authored with Sharon B. Codd and C. T. Salley. In this work, Codd outlined 12 rules for designing OLAP systems, emphasizing multidimensional data views, fast query performance, and user-friendly interfaces to empower non-technical analysts.[20] These rules positioned OLAP as an evolution beyond relational models, focusing on intuitive navigation of data cubes for business reporting. Early prototypes, such as the Express multidimensional database, originally released by Information Resources, Inc. 
in 1975 and later acquired by Oracle in 1995, demonstrated practical implementations of these ideas, allowing developers to build OLAP applications for financial and sales analysis.[21]

Key Milestones and Developments
In the 2000s, the integration of OLAP with data warehousing tools advanced significantly through enhanced ETL (Extract, Transform, Load) processes, enabling more efficient data consolidation from disparate sources into multidimensional structures for analysis.[22] Tools like Informatica and IBM DataStage, which emerged in the late 1990s, saw widespread adoption during this decade, facilitating automated data pipelines that supported OLAP's need for clean, aggregated datasets in enterprise environments.[23] This period also marked the standardization of the Multidimensional Expressions (MDX) query language, initially released by Microsoft in 1998 with SQL Server 7's OLAP Services, which gained broad industry adoption in the early 2000s for complex multidimensional querying across vendors.[24] Additionally, the XML for Analysis (XML/A) standard, introduced by Microsoft around 2002-2003 as a SOAP-based protocol, emerged as a key specification for accessing OLAP metadata and executing queries over web services, promoting interoperability between OLAP servers and client applications.[25] The 2010s brought a shift toward cloud computing and big data integration in OLAP systems, with in-memory processing becoming a cornerstone for faster query performance on large datasets. 
SAP HANA, launched in 2010 as an in-memory columnar database, revolutionized OLAP by enabling real-time analytics directly on transactional data, reducing latency from hours to seconds for complex aggregations.[26] Complementing this, columnar storage innovations like Apache Kudu, released in its 1.0 version in 2016 by the Apache Software Foundation, addressed big data challenges by providing a distributed storage engine optimized for OLAP workloads within Hadoop ecosystems, supporting both analytical scans and updates on petabyte-scale data.[27] These developments aligned OLAP more closely with scalable cloud architectures, allowing organizations to handle exponentially growing data volumes without traditional hardware constraints. In the 2020s, OLAP evolved further with emphases on real-time processing of streaming data and AI integration for automated insights. Apache Druid, originally developed in 2011 and open-sourced in 2012, matured into a prominent real-time OLAP database by the early 2020s, ingesting streaming data at high velocities while delivering sub-second query responses on event-driven datasets for applications like user behavior analysis.[28] Cloud-native platforms such as Snowflake, founded in 2012 and reaching significant maturity in the late 2010s through 2020s expansions, provided separated storage and compute for OLAP, enabling elastic scaling and near-real-time analytics on massive datasets across multi-cloud environments.[29] Concurrently, AI enhancements in OLAP tools, such as those integrating machine learning for predictive modeling and anomaly detection, began proliferating around 2023, with systems like IBM's offerings combining OLAP cubes with AI to automate insight generation and improve decision-making accuracy.[30] In 2024, Oracle announced the deprecation of its OLAP option, signaling a broader industry transition to cloud-based and real-time analytics platforms.[31]

Types of OLAP Systems
Multidimensional OLAP (MOLAP)
Multidimensional OLAP (MOLAP) employs specialized multidimensional databases that utilize array-based storage structures to organize data into multi-dimensional cubes. These cubes are built by pre-computing and storing aggregates across dimensions, such as sums or averages, which allows for rapid access to summarized data without requiring real-time calculations during queries.[32] This architecture directly implements the multidimensional data model in optimized storage engines tailored for analytical processing.[33] A key strength of MOLAP is its support for high-speed queries on pre-aggregated data, enabling efficient handling of complex analytics like multi-dimensional slicing and aggregation. By storing results of common operations in advance, MOLAP minimizes processing overhead, delivering near-instantaneous responses for interactive exploration of large datasets.[32] MOLAP systems typically use proprietary storage formats to enhance performance in multidimensional environments. For example, Essbase's Block Storage Option (BSO) structures data into blocks defined by combinations of sparse dimension members, with each block holding values from dense dimensions. Sparsity is managed through a dedicated index that records only existing sparse combinations and points to corresponding data blocks, avoiding allocation of space for non-existent cells and thereby optimizing storage efficiency.[34] MOLAP excels with dense datasets, where most cube cells are populated, as the array-based approach maximizes storage utilization and query speed in such scenarios. The fixed schema of these systems, which enforces predefined dimensions and measures, constrains flexibility for unstructured changes but supports sub-second response times for anticipated analytical queries on pre-built cubes.[35][36]

Relational OLAP (ROLAP)
Relational OLAP (ROLAP) is an OLAP implementation that operates directly on relational databases, extending standard relational database management systems (RDBMS) to support multidimensional analysis without dedicated multidimensional storage structures. The architecture positions ROLAP servers as an intermediate layer between the relational back-end, where data is stored in normalized or denormalized schemas such as star or snowflake schemas, and client front-end tools for querying. This setup leverages existing RDBMS like Microsoft SQL Server, using middleware to translate OLAP operations into optimized SQL queries, often incorporating materialized views for performance enhancement. Unlike multidimensional approaches, ROLAP avoids proprietary storage formats, relying instead on the RDBMS's native capabilities for data management.[6] A key strength of ROLAP lies in its ability to handle very large and sparse datasets, as it stores only the actual data facts without padding for empty cells, thereby optimizing storage efficiency. It capitalizes on the inherent scalability and robustness of relational systems, which are designed for high-volume transactions and can manage terabyte-scale warehouses seamlessly. Additionally, ROLAP facilitates straightforward integration with operational transactional systems, as the analytical data resides within the same relational environment, enabling real-time access to up-to-date information without data duplication.[6][37] The query process in ROLAP involves dynamic, on-the-fly aggregation executed through generated SQL statements against the relational database. For instance, a roll-up operation to aggregate sales data from daily to monthly levels might employ the SQL GROUP BY ROLLUP clause, which computes subtotals hierarchically in a single query, such as:

SELECT product, month, SUM(sales)
FROM sales_table
GROUP BY ROLLUP (product, month);
Aggregations may be supported via indexed views in the RDBMS to accelerate repeated access, but complex multidimensional queries often require multi-statement SQL execution, leading to potential performance slowdowns due to real-time computation overhead.[6][37][38]
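For readers unfamiliar with ROLLUP's output, the subtotal levels it produces can be emulated in plain Python over hypothetical in-memory rows, with None standing in for SQL's NULL in rolled-up positions:

```python
# Hedged sketch of what GROUP BY ROLLUP (product, month) computes:
# one subtotal per prefix of the grouping list -- (product, month),
# (product,), and the grand total. Data is invented for illustration.
from collections import defaultdict

rows = [("Laptop", "Jan", 500), ("Laptop", "Feb", 450), ("Jacket", "Feb", 120)]

def rollup(rows):
    totals = defaultdict(int)
    for product, month, sales in rows:
        # None plays the role of SQL's NULL in rolled-up positions.
        for key in [(product, month), (product, None), (None, None)]:
            totals[key] += sales
    return dict(totals)

result = rollup(rows)
print(result[("Laptop", None)])  # 950  -- all months for Laptop
print(result[(None, None)])      # 1070 -- grand total
```

A ROLAP server performs the same hierarchical summation, but pushes it into the RDBMS as generated SQL rather than computing it client-side.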
Hybrid OLAP (HOLAP)
Hybrid OLAP (HOLAP) integrates the multidimensional storage and fast aggregation capabilities of MOLAP with the relational storage and scalability of ROLAP, enabling systems to handle both precomputed summaries and detailed data efficiently. In this architecture, the OLAP server manages the division of data between relational databases for raw or detailed information and multidimensional cubes for aggregated views, allowing transparent access to users without specifying the underlying storage type.[39][40] A key aspect of HOLAP architecture is vertical partitioning, where aggregated data is stored in a MOLAP structure for rapid access to summaries, while the underlying raw or detailed data remains in a relational format akin to ROLAP. This approach avoids duplicating the entire dataset in multidimensional storage, reducing redundancy and enabling real-time updates to source data. Horizontal partitioning complements this by allocating specific data slices—such as those requiring frequent querying—to MOLAP cubes for summary-level performance, while storing less-accessed or detailed portions in relational tables. For instance, recent sales summaries might be precomputed in cubes, with historical transaction details queried directly from relations.[40][39] The benefits of HOLAP include optimized storage footprint compared to pure MOLAP, which can become unwieldy with large datasets, and superior query speeds for common aggregations over ROLAP's relational joins. It is particularly effective for scenarios balancing performance and flexibility, such as using MOLAP partitions for frequent reporting queries on summarized data and ROLAP for ad-hoc explorations of granular details. 
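A minimal sketch of this routing idea, with invented data: summary-level queries are answered from a precomputed aggregate dict standing in for the MOLAP partition, while detail-level queries scan relational-style rows.

```python
# Hedged sketch of HOLAP-style routing (invented data): summaries come
# from a precomputed cube (MOLAP side); detail queries fall through to
# a scan of the relational fact rows (ROLAP side).
detail_rows = [("2024-01-03", "Laptop", "NA", 500),
               ("2024-01-19", "Laptop", "EU", 300),
               ("2024-02-07", "Laptop", "NA", 450)]

# Precomputed monthly summaries, as a MOLAP partition would store them.
summary_cube = {("2024-01", "Laptop"): 800, ("2024-02", "Laptop"): 450}

def monthly_sales(month, product):
    """Summary level: answered from the cube, no row scan needed."""
    return summary_cube.get((month, product), 0)

def daily_sales(day, product):
    """Detail level: answered by scanning the relational rows."""
    return sum(s for d, p, r, s in detail_rows if d == day and p == product)

print(monthly_sales("2024-01", "Laptop"))   # 800
print(daily_sales("2024-02-07", "Laptop"))  # 450
```

The user-facing query interface hides which store answered the query, which is the transparency property described above.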
Implementations like Jedox (formerly Palo) and the Mondrian OLAP server exemplify this family of HOLAP systems; Mondrian, for example, stores aggregates multidimensionally while retaining leaf-level data relationally to mitigate MOLAP's storage constraints and ROLAP's latency issues.[41][42][40] In modern cloud environments, HOLAP has gained prominence through platforms like Azure Analysis Services, introduced in the 2010s, which support hybrid storage modes for scalable, managed OLAP deployments handling petabyte-scale data without on-premises hardware. This evolution addresses earlier limitations by leveraging cloud elasticity for partitioning strategies, ensuring high availability and integration with services like Azure Synapse Analytics.[39]

Comparisons and Advanced Variants
Performance and Trade-offs
A fundamental distinction in database systems is between online analytical processing (OLAP) and online transaction processing (OLTP). OLAP systems are optimized for complex queries and aggregations on large datasets, often at terabyte or petabyte scales, with read-heavy workloads, columnar storage, and response times ranging from seconds to minutes; they primarily serve data warehouses, business intelligence, and reporting. In contrast, OLTP systems manage small, real-time create, read, update, and delete (CRUD) operations, feature write-heavy workloads, row-based storage, millisecond response times, and strict ACID compliance, supporting operational business systems such as enterprise resource planning (ERP) and e-commerce platforms.[43][44][45] The following table summarizes key differences:

| Aspect | OLAP (Analytical Processing) | OLTP (Transactional Processing) |
|---|---|---|
| Operations | Complex queries and aggregations on big data | Small real-time CRUD operations |
| Data Volume | Massive (TB/PB scale) | Small to medium |
| Response Time | Seconds to minutes | Milliseconds |
| Storage | Columnar | Row-based |
| Scenarios | Data warehouses, BI, reports | ERP, e-commerce |
Other Variants and Extensions
Spatial OLAP (SOLAP) integrates geographic information systems (GIS) with traditional OLAP to enable multidimensional analysis of geospatial data, supporting operations like spatial aggregation and visualization for applications in urban planning and environmental monitoring. This variant emerged in the late 1990s and early 2000s as a response to the need for handling location-based dimensions alongside conventional measures.[50] Real-time OLAP (RTOLAP) extends OLAP capabilities to process streaming data with minimal latency, allowing immediate insights from continuously incoming information sources. It often incorporates integration with streaming platforms such as Apache Kafka to ingest and analyze high-velocity data in sectors like finance and IoT. For instance, systems like Apache Kylin support RTOLAP by querying streaming data directly through dedicated receivers.[51] Mobile OLAP adapts OLAP processing for handheld devices by employing semantics-aware compression of data cubes, ensuring efficient query execution despite constraints on storage, bandwidth, and computation. This extension, exemplified by frameworks like Hand-OLAP, facilitates on-the-go analytics for field-based decision-making in sales and logistics. Collaborative OLAP promotes shared multidimensional analysis across distributed entities, leveraging peer-to-peer architectures to federate data marts while preserving autonomy. It supports inter-organizational decision-making by enabling reformulation of OLAP queries over heterogeneous sources, as seen in collaborative business intelligence environments.[52][53] Cloud-native extensions of OLAP emphasize serverless architectures that scale dynamically without infrastructure provisioning, such as AWS Athena, which executes SQL-based analytical queries on data stored in Amazon S3 for cost-effective, pay-per-query processing. 
These adaptations suit variable workloads in modern data lakes.[54] Graph OLAP, developed in the 2010s, applies OLAP principles to graph-structured data for analyzing networks like social connections or supply chains, using constructs such as Graph Cubes to compute aggregations over nodes and edges. This variant addresses limitations of traditional OLAP in handling interconnected, non-tabular data.[55] Post-2020 advancements have increasingly integrated AI and machine learning into OLAP systems, enabling predictive aggregations for forecasting trends within multidimensional cubes, automated query optimization, and natural language interfaces to enhance proactive analytics. Examples include AI-powered anomaly detection and real-time insights in platforms supporting OLAP workflows.[30] Federated OLAP variants, including approaches designed for distributed environments, enable seamless querying across disparate data sources without centralization, supporting scalable analysis in multi-site enterprises.[56]

Query Interfaces
APIs and Standards
OLE DB for OLAP (ODBO), introduced by Microsoft in 1997, extends the OLE DB specification to provide programmatic access to multidimensional data stores, enabling developers to query and manipulate OLAP cubes through COM-based interfaces.[57] This API defines objects such as MDSchema rowsets for schema discovery and supports operations like slicing, dicing, and drilling down in OLAP datasets.[58] Building on ODBO, XML for Analysis (XML/A), standardized in 2002 by Microsoft, Hyperion, and SAS, introduces a SOAP-based web services protocol for accessing OLAP data over HTTP, facilitating interoperability in distributed environments.[59] XML/A uses XML payloads to execute commands like multidimensional expressions (MDX) and retrieve results in XML format, making it suitable for cross-platform analytical applications.[60] The Common Warehouse Metamodel (CWM), adopted by the Object Management Group (OMG) in 2001, serves as a standard for interchanging metadata across OLAP and data warehousing tools, using the Meta Object Facility (MOF) and XML Metadata Interchange (XMI) for representation.[61] CWM models elements such as dimensions, measures, and transformations, promoting consistency in metadata management without prescribing data storage formats.[61] JOLAP, proposed in Java Specification Request 69 by the Java Community Process in 2000 but withdrawn in 2004 without final approval, aimed to provide a pure Java API for creating, accessing, and maintaining OLAP metadata and data, analogous to JDBC for relational databases.[62] It supported operations on multidimensional schemas and integrated with the Common Warehouse Metamodel for metadata handling, though adoption has been limited compared to vendor-specific implementations like Oracle's OLAP Java API.[62] As a community-driven successor, olap4j, first released in version 1.0 in 2011, has become a widely used open-source Java API for OLAP, supporting connections to various OLAP servers and MDX querying.[63] For .NET
environments, ADOMD.NET, a Microsoft library released in the early 2000s, enables seamless integration of OLAP functionality by leveraging XML/A over the .NET Framework, allowing developers to connect to Analysis Services and execute analytical queries programmatically.[64] In the 2010s, OLAP systems evolved toward RESTful APIs in cloud platforms, such as Google BigQuery's REST API introduced in 2011, which supports HTTP-based queries for scalable analytical processing without proprietary protocols. This shift enhances accessibility for web and mobile applications, decoupling clients from server-specific interfaces. Modern extensions to ODBC and JDBC standards address big data OLAP needs; for instance, Apache Druid's JDBC driver, compliant with JDBC 4.2 since 2015, enables SQL-like queries on distributed OLAP stores, while Google BigQuery's ODBC/JDBC drivers, updated in the 2020s, handle petabyte-scale analytics with federated query support.

Query Languages
Query languages for online analytical processing (OLAP) enable users to express complex multidimensional queries against data cubes, facilitating operations such as slicing, dicing, and aggregations across dimensions. These languages extend traditional relational querying paradigms to handle hierarchical and multidimensional data structures efficiently, allowing analysts to retrieve insights from large-scale datasets without procedural code. Primarily designed for ad-hoc analysis, OLAP query languages emphasize declarative syntax that abstracts underlying storage mechanisms, whether multidimensional arrays or relational tables.[65] Multidimensional Expressions (MDX) is a SQL-like query language specifically tailored for querying and manipulating OLAP cubes in multidimensional databases. Developed by Microsoft and adopted widely in tools like SQL Server Analysis Services, MDX supports the definition of axes for rows, columns, and filters, enabling precise retrieval of measures along dimensions. For instance, a basic MDX query to select sales measures on the columns axis from a sales cube might be written as:

SELECT
[Measures].[Sales] ON COLUMNS,
[Date].[Year].Members ON ROWS
FROM [Sales Cube]
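To make the query's result shape concrete, a small Python sketch (with invented cube values) builds the grid the MDX above describes: one column for the Sales measure, one row per year member.

```python
# Hedged sketch of the result grid of the MDX query above:
# [Measures].[Sales] on columns, [Date].[Year].Members on rows.
# Cube values are invented for illustration.
cube = {("2022", "Sales"): 1200, ("2023", "Sales"): 1500}
years = ["2022", "2023"]                 # the year members on the rows axis

grid = [[cube[(year, "Sales")]] for year in years]  # one column: Sales
for year, row in zip(years, grid):
    print(year, row)
```

An MDX engine resolves the member sets for each axis and then fills the resulting cell grid from the cube, much as this loop does by direct lookup.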
