Types and Use Cases of Data Pipelines


- Overview

A data pipeline is a method by which raw data is ingested from various data sources, transformed, and then loaded into a data store, such as a data lake or data warehouse, for analysis.

Before data flows into a data repository, it usually undergoes some data processing. This includes data transformations such as filtering, masking, and aggregation, which ensure proper data integration and normalization. This is particularly important when the destination is a relational database: that type of data repository has a defined schema, and incoming data must be aligned with it (i.e., matching column names and data types) before it can update existing data.
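
As a rough illustration, the sketch below applies the three transformations named above (filtering, masking, aggregation) with pandas before the data would be loaded into a warehouse table. The column names and values are hypothetical, and a real pipeline would read from and write to actual source and target systems.

import pandas as pd

# Hypothetical raw records ingested from a source system.
raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "email":    ["a@example.com", "b@example.com", "c@example.com", "d@example.com"],
    "region":   ["EU", "EU", "US", "US"],
    "amount":   [120.0, -5.0, 80.0, 200.0],
})

# Filtering: drop records that violate a simple quality rule.
valid = raw[raw["amount"] > 0]

# Masking: hide personally identifiable data before it reaches the warehouse.
masked = valid.assign(email=valid["email"].str.replace(r"^[^@]+", "***", regex=True))

# Aggregation: summarize to the grain expected by the target table's schema.
summary = masked.groupby("region", as_index=False)["amount"].sum()

print(summary)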

A data pipeline is an end-to-end sequence of digital processes for collecting, modifying, and delivering data. Organizations use data pipelines to copy or move data from one source to another so that it can be stored, used for analysis, or combined with other data.

Data pipelines ingest, process, prepare, transform and enrich structured, unstructured and semi-structured data in a controlled manner; this is called data integration.

Ultimately, data pipelines can help enterprises break down information silos and easily move and derive value from data in the form of insights and analytics.

 

- Types of Data Pipelines 

Data pipelines are classified based on how they are used. There are several main types of data pipelines, each appropriate for specific tasks on specific platforms. Batch and streaming (real-time) processing are the two most common pipeline types.

The four main types of data pipelines are:

  • Batch data pipelines
  • Streaming data pipelines
  • Data integration pipelines
  • Cloud-native data pipelines

 

- Batch Data Pipelines

A batch data pipeline is a structured, automated system designed to process large amounts of data in groups, or batches, at predetermined intervals. It differs from real-time (streaming) processing, which handles data as it arrives.

This approach is particularly useful when immediate processing is not required.
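
A minimal sketch of such a batch job in Python is shown below. The file layout, column names, and schedule are assumptions; in practice a scheduler or orchestrator (e.g., cron or Airflow) would trigger the run at the predetermined interval.

import csv
from datetime import date, timedelta
from pathlib import Path

# Hypothetical layout: one CSV of raw events per day, e.g. raw/2024-01-01.csv
RAW_DIR = Path("raw")
OUT_DIR = Path("processed")

def run_daily_batch(day: date) -> None:
    """Process one day's batch of events and write an aggregated output file."""
    source = RAW_DIR / f"{day.isoformat()}.csv"
    totals: dict[str, float] = {}

    with source.open(newline="") as f:
        for row in csv.DictReader(f):
            # The whole batch is aggregated in one pass; nothing is emitted per record.
            totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + float(row["amount"])

    OUT_DIR.mkdir(exist_ok=True)
    with (OUT_DIR / f"{day.isoformat()}_totals.csv").open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "daily_total"])
        writer.writerows(totals.items())

if __name__ == "__main__":
    # A scheduler would typically invoke this for the previous day's data.
    run_daily_batch(date.today() - timedelta(days=1))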

 

- Streaming Data Pipelines

Streaming data pipelines move data from source to destination continuously, as it is created or modified, enabling the rapid decision-making needed to scale business operations.

Choosing a streaming data pipeline can significantly reduce latency during data transfer.
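
The following Python sketch illustrates the idea, using an in-process queue as a stand-in for a message broker such as Kafka; the event shape and the alert threshold are illustrative assumptions only.

import queue
import threading
import time

# Stand-in for a message broker; the event fields are illustrative.
events: "queue.Queue[dict]" = queue.Queue()

def producer() -> None:
    """Simulates a source system emitting events as they happen."""
    for i in range(5):
        events.put({"sensor": "s1", "reading": 20 + i})
        time.sleep(0.1)
    events.put(None)  # sentinel marking the end of the demo stream

def consumer() -> None:
    """Processes each event immediately on arrival instead of waiting for a batch."""
    while True:
        event = events.get()
        if event is None:
            break
        if event["reading"] > 22:      # simple per-record transformation/filter
            print("alert:", event)     # deliver to the destination with low latency

threading.Thread(target=producer).start()
consumer()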

 


- Data Integration Pipelines

A data integration pipeline is a series of automated processes that combine data from multiple sources, cleaning, transforming, and loading it into a centralized repository (such as a data warehouse or data lake). By merging information from different systems within an organization, it creates a unified view of the data that enables analysis and insights.

Essentially, it's a system that takes raw data from various places, standardizes it, and makes it ready for analysis by combining relevant data points from different sources.
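
As an illustration, the hypothetical sketch below merges customer records from two source systems with pandas, standardizing key names and data types before producing a unified view ready to load into a warehouse table; all schemas and values are made up.

import pandas as pd

# Hypothetical extracts from two separate systems; schemas and values are illustrative.
crm = pd.DataFrame({
    "customer_id": [101, 102],
    "name": ["Ada", "Grace"],
    "country": ["US", "UK"],
})
billing = pd.DataFrame({
    "cust": [101, 102],            # same entity, different column name and formatting
    "lifetime_value": ["1,200", "950"],
})

# Standardize: align key names and data types across sources.
billing = billing.rename(columns={"cust": "customer_id"})
billing["lifetime_value"] = billing["lifetime_value"].str.replace(",", "").astype(float)

# Combine into a single, unified view of the customer.
unified = crm.merge(billing, on="customer_id", how="left")
print(unified)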

 

- Cloud-native Data Pipelines

Cloud native is a software approach for building, deploying, and managing applications in cloud computing environments, designed to take advantage of the scalability of the cloud computing model.

Cloud-native computing uses modern cloud services such as container orchestration, serverless computing, and multicloud deployments, and cloud-native applications are built from the start to run in the cloud.

Cloud-native data pipelines use cloud technologies to move and process data, and are more scalable and cost-efficient than on-premises data pipelines. They can help organizations convert raw data into a standardized format for analysis and decision-making. 
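
The sketch below shows the general shape of a serverless-style, event-triggered step in such a pipeline: compute runs only when a new object lands in cloud storage and scales to zero in between. The handler name, event structure, and storage names are assumptions for illustration, not any specific cloud provider's API.

import json

def handle_new_object(event: dict, context=None) -> dict:
    """Illustrative serverless-style handler invoked when a new file lands in
    cloud object storage."""
    # A real trigger payload (e.g., an object-storage notification) carries the
    # bucket and object name; this simplified shape is an assumption for the sketch.
    bucket = event["bucket"]
    key = event["key"]

    # In a real pipeline the object would be fetched with the provider's SDK,
    # transformed, and written to a warehouse or lake table.
    print(f"processing {key} from {bucket}")
    return {"status": "ok", "object": f"{bucket}/{key}"}

if __name__ == "__main__":
    # Local simulation of the cloud trigger.
    print(json.dumps(handle_new_object({"bucket": "raw-events", "key": "2024/01/01.json"})))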

  
 
 

[More to come ...]
