Data engineering
Data engineering is a software engineering approach to building data systems that enable the collection and use of data. The data is typically used for subsequent analysis and data science, which often involves machine learning. Making the data usable usually requires substantial computing and storage, as well as data processing.
Around the 1970s/1980s the term information engineering methodology (IEM) was created to describe database design and the use of software for data analysis and processing. These techniques were intended to be used by database administrators (DBAs) and by systems analysts based upon an understanding of the operational processing needs of organizations for the 1980s. In particular, these techniques were meant to help bridge the gap between strategic business planning and information systems. A key early contributor (often called the "father" of information engineering methodology) was the Australian Clive Finkelstein, who wrote several articles about it between 1976 and 1980, and also co-authored an influential Savant Institute report on it with James Martin. Over the next few years, Finkelstein continued work in a more business-driven direction, which was intended to address a rapidly changing business environment; Martin continued work in a more data processing-driven direction. From 1983 to 1987, Charles M. Richter, guided by Clive Finkelstein, played a significant role in revamping IEM as well as helping to design the IEM software product (user data), which helped automate IEM.
In the early 2000s, data and data tooling were generally held by the information technology (IT) teams in most companies. Other teams then used the data for their work (e.g. reporting), and there was usually little overlap in data skill sets between these parts of the business.
In the early 2010s, with the rise of the internet, the massive increase in data volume, velocity, and variety led to the term big data to describe the data itself, and data-driven tech companies such as Facebook and Airbnb started using the phrase data engineer. Due to the new scale of the data, major firms such as Google, Facebook, Amazon, Apple, Microsoft, and Netflix started to move away from traditional ETL and storage techniques. They began practicing data engineering, a type of software engineering focused on data, and in particular on infrastructure, warehousing, data protection, cybersecurity, mining, modelling, processing, and metadata management. This change in approach was particularly focused on cloud computing. Data came to be handled and used by many parts of the business, such as sales and marketing, and not just IT.
High-performance computing is critical for the processing and analysis of data. One particularly widespread approach to computing for data engineering is dataflow programming, in which the computation is represented as a directed graph (a dataflow graph): nodes are the operations and edges represent the flow of data. Popular implementations include Apache Spark and the deep-learning-specific TensorFlow. More recent implementations, such as Differential/Timely Dataflow, have used incremental computing for much more efficient data processing.
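As an illustration of the dataflow style, the following is a minimal PySpark sketch (assuming a local Spark installation; the dataset and column names are invented for the example). Each transformation adds a node to a lazily evaluated dataflow graph, which Spark only executes when an action such as show() is called.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session (assumes pyspark is installed).
spark = SparkSession.builder.appName("dataflow-example").getOrCreate()

# Hypothetical event data used purely for illustration.
events = spark.createDataFrame(
    [("alice", "click"), ("bob", "view"), ("alice", "view")],
    ["user", "action"],
)

# Each transformation below adds a node to the dataflow graph;
# nothing is computed until the final action (show) is invoked.
clicks_per_user = (
    events
    .filter(F.col("action") == "click")  # node: filter operation
    .groupBy("user")                     # node: grouping (shuffle)
    .count()                             # node: aggregation
)

clicks_per_user.show()  # action: triggers execution of the whole graph
```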
Data is stored in a variety of ways; one of the key deciding factors is how the data will be used. Data engineers optimize data storage and processing systems to reduce costs, using techniques such as data compression, partitioning, and archiving.
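As a small sketch of compression and partitioning in practice (assuming pandas and pyarrow are installed; the dataset, path, and column names are hypothetical), the snippet below writes a table as Parquet files partitioned by year and compressed with Snappy:

```python
import pandas as pd

# Hypothetical sales records used only to illustrate the storage layout.
sales = pd.DataFrame(
    {
        "year": [2022, 2022, 2023],
        "region": ["EU", "US", "EU"],
        "revenue": [1200.0, 950.0, 1430.0],
    }
)

# Write one directory per year (partitioning) with Snappy-compressed
# Parquet files (compression); older partitions could later be moved
# to cheaper storage (archiving).
sales.to_parquet(
    "sales_parquet/",
    engine="pyarrow",
    partition_cols=["year"],
    compression="snappy",
)
```

Partitioning lets queries that filter on year skip irrelevant files entirely, while columnar compression reduces both storage and scan costs.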
If the data is structured and some form of online transaction processing is required, then databases are generally used. Originally these were mostly relational databases, with strong ACID transaction correctness guarantees; most relational databases use SQL for their queries. However, with the growth of data in the 2010s, NoSQL databases also became popular, since they scale horizontally more easily than relational databases by giving up ACID transaction guarantees and by reducing the object-relational impedance mismatch. More recently, NewSQL databases, which attempt to allow horizontal scaling while retaining ACID guarantees, have become popular.
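To make the ACID guarantee concrete, here is a minimal sketch using Python's built-in sqlite3 module (the accounts table and transfer amounts are invented for the example). Either both UPDATE statements commit together or, if anything fails, both are rolled back, so readers never see a half-finished transfer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500.0), (2, 100.0)])
conn.commit()

try:
    # The connection context manager wraps the statements in one
    # transaction: commit on success, rollback on any exception (atomicity).
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
except sqlite3.Error:
    # On failure, neither update becomes visible.
    print("transfer rolled back")

print(conn.execute("SELECT id, balance FROM accounts").fetchall())
```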
If the data is structured and online analytical processing is required (but not online transaction processing), then data warehouses are a primary choice. They enable data analysis, mining, and artificial intelligence on a much larger scale than databases allow, and data often flows from databases into data warehouses. Business analysts, data engineers, and data scientists access data warehouses using tools such as SQL or business intelligence software.
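As a sketch of the kind of analytical (OLAP-style) query a warehouse serves, the example below uses DuckDB, an in-process analytical engine, to aggregate a hypothetical Parquet fact table (the file name and columns are assumptions for illustration); a dedicated data warehouse would run essentially the same SQL over far larger, centrally managed tables:

```python
import duckdb

# In-memory analytical engine standing in for a warehouse.
con = duckdb.connect()

# Hypothetical fact table exported from an operational database.
rows = con.execute(
    """
    SELECT region,
           SUM(revenue) AS total_revenue,
           COUNT(*)     AS num_orders
    FROM 'sales.parquet'
    GROUP BY region
    ORDER BY total_revenue DESC
    """
).fetchall()

print(rows)
```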
