Parquet, Avro or ORC?

Oswin Rahadiyan Hartono
Nov 4, 2019

When working in a big data environment, the wide range of available data formats often raises questions about their suitability, their advantages and drawbacks, and how best to use them in specific use cases and data pipelines. Data can be stored in human-readable formats such as JSON or CSV, but for large-scale storage and processing these are rarely the most efficient choice.

In the realm of Hadoop clusters, three file formats have emerged as optimized solutions:

  1. Optimized Row Columnar (ORC)
  2. Avro
  3. Parquet

While these file formats share some similarities and offer compression benefits, each has its unique characteristics and trade-offs.

Common Traits:

  • HDFS Storage Data Format: All three formats are designed to be compatible with the Hadoop Distributed File System (HDFS), facilitating efficient storage and retrieval within Hadoop clusters.
  • File Distribution: Files can be split into blocks that are distributed across multiple disks and nodes, enabling parallel processing and improved performance.
  • Schema Support: Each format supports schema definition, providing a structured representation of the data and enhancing compatibility with various data processing frameworks and tools.

Despite these shared traits, understanding the specific features and considerations of each format is essential for making informed decisions regarding data storage and processing within big data environments.
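
To see how interchangeable the three formats are at the API level, here is a minimal PySpark sketch. It assumes a Spark session with the external spark-avro package available (Avro support is not bundled with core Spark); the paths, column names, and package version are illustrative rather than taken from this article:

```python
from pyspark.sql import SparkSession

# Avro support lives in the external spark-avro package, so it is added to
# the session here; pick the version that matches your Spark/Scala build.
spark = (
    SparkSession.builder
    .appName("format-comparison")
    .config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.5.0")
    .getOrCreate()
)

df = spark.createDataFrame(
    [(1, "ID", 10.0), (2, "SG", 25.5)],
    ["user_id", "country", "amount"],
)

# The same DataFrame can be written in any of the three formats;
# each file carries its own schema information.
df.write.mode("overwrite").parquet("events_parquet")
df.write.mode("overwrite").orc("events_orc")
df.write.mode("overwrite").format("avro").save("events_avro")

# Reading back is symmetric, and all three formats are splittable,
# so Spark can scan them in parallel.
parquet_df = spark.read.parquet("events_parquet")
orc_df = spark.read.orc("events_orc")
avro_df = spark.read.format("avro").load("events_avro")
```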

Parquet

Parquet is a column-oriented file format designed for optimal performance in read-heavy analytical workloads. By storing data in columns rather than rows, Parquet minimizes the amount of disk I/O required for data retrieval, making it well-suited for applications where read efficiency is paramount.

Key Features:

  1. Column-Oriented Storage: Parquet stores data in a columnar format, enabling efficient query processing by accessing only the required columns, thereby reducing disk I/O and improving overall performance.
  2. High Compression Rates: Parquet offers high compression rates, often achieving up to 75% compression with algorithms like Snappy. This not only reduces storage costs but also enhances data transfer efficiency across the network.
  3. Selective Column Retrieval: Since Parquet organizes data by columns, only the required columns need to be fetched or read during query execution. This selective column retrieval further reduces disk I/O overhead and speeds up data processing.
  4. Compatibility with Avro: Parquet files can be seamlessly read and written using the Avro API and Avro Schema, providing interoperability with other data formats and frameworks within the Hadoop ecosystem.
  5. Predicate Pushdown Support: Parquet supports predicate pushdown, a technique that pushes query predicates down to the storage layer, allowing filtering to be performed directly on the data files. This minimizes the amount of data read from disk, resulting in lower disk I/O costs and improved query performance.

In summary, Parquet’s column-oriented design, high compression rates, selective column retrieval, compatibility with Avro, and support for predicate pushdown make it an ideal choice for efficient data storage and processing in analytical workloads within big data environments.
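
To make the column-oriented behavior concrete, here is a minimal sketch using the pyarrow library (an assumption; the article does not prescribe a tool). It writes a small table to Parquet with Snappy compression, then reads back only the needed columns with a filter pushed down to the file scan; file and column names are illustrative:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A small in-memory table standing in for real pipeline data.
table = pa.table({
    "user_id": [1, 2, 3, 4],
    "country": ["ID", "SG", "ID", "MY"],
    "amount": [10.0, 25.5, 7.25, 99.0],
})

# Write a Parquet file with Snappy compression.
pq.write_table(table, "events.parquet", compression="snappy")

# Selective column retrieval plus predicate pushdown: only the listed
# columns are decoded, and row groups that cannot satisfy the filter
# are skipped based on their statistics.
subset = pq.read_table(
    "events.parquet",
    columns=["user_id", "amount"],
    filters=[("amount", ">", 20.0)],
)
print(subset)
```

Because both the projection and the filter are applied at the file level, row groups whose statistics rule out a match are skipped before any data is decompressed, which is where the disk I/O savings come from.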

Avro

Avro is a row-based data serialization system designed for efficient data storage and exchange within Hadoop ecosystems. Unlike column-oriented formats like Parquet, Avro stores data in rows, making it well-suited for write-heavy transactional workloads and scenarios where schema evolution is common.

Key Features:

  1. Row-Based Storage: Avro stores data in row-based format, which is optimal for write-heavy transactional workloads where data is continuously added or modified.
  2. Support for Serialization: Avro provides support for data serialization, allowing data objects to be converted into a compact binary format for efficient storage and transmission across the network.
  3. Fast Binary Format: Avro employs a fast binary format for data representation, enabling rapid serialization and deserialization operations, thereby improving overall performance.
  4. Compression and Splittable Support: Avro supports block compression techniques, which reduce storage space and enhance data transfer efficiency. Additionally, Avro files are splittable, allowing for parallel processing and improved scalability in distributed environments.
  5. Schema Evolution: Avro facilitates schema evolution by describing data structures in JSON while storing the data itself in a compact binary format. Fields can be added or removed over time (typically with default values), so schemas can evolve without rewriting existing data or breaking existing applications.
  6. Self-Describing Data: Avro stores schema information in the header of each file, making the data self-describing. This ensures data integrity and simplifies data processing by eliminating the need for external schema repositories or metadata.

In summary, Avro’s row-based storage, support for serialization, fast binary format, compression and splittable capabilities, schema evolution support, and self-describing data features make it a versatile and efficient choice for data storage and exchange in Hadoop environments.
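
As a minimal sketch of these ideas (assuming the fastavro package; the schema, field names, and file names are illustrative), the snippet below declares a JSON schema, writes records with block compression, and reads them back using the schema embedded in the file header:

```python
from fastavro import writer, reader, parse_schema

# The schema is described in JSON; the data itself is stored in a compact binary format.
schema = parse_schema({
    "type": "record",
    "name": "Event",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "country", "type": "string"},
        # A default makes this field safe to add later (schema evolution):
        # files written without it can still be read with the newer schema.
        {"name": "amount", "type": "double", "default": 0.0},
    ],
})

records = [
    {"user_id": 1, "country": "ID", "amount": 10.0},
    {"user_id": 2, "country": "SG", "amount": 25.5},
]

# Write an Avro container file with deflate block compression.
with open("events.avro", "wb") as out:
    writer(out, schema, records, codec="deflate")

# The file is self-describing: the embedded schema decodes each record.
with open("events.avro", "rb") as src:
    for record in reader(src):
        print(record)
```

Because each container file carries the schema it was written with, a newer reader schema with defaults can still decode older files, which is what makes the schema evolution described above practical.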

ORC

Optimized Row Columnar (ORC) is a column-oriented file format specifically designed for efficient data storage and processing in Hadoop environments. With its column-oriented storage approach, ORC is optimized for read-heavy analytical workloads, where query performance and data retrieval efficiency are paramount.

Key Features:

  1. Column-Oriented Storage: ORC stores data in a columnar format, enabling fast query processing by accessing only the required columns during analysis. This minimizes disk I/O and enhances overall query performance, particularly for read-heavy workloads.
  2. High Compression Rates: ORC offers high compression rates using algorithms like ZLIB, reducing storage costs and improving data transfer efficiency across the network.
  3. Hive Type Support: ORC supports a wide range of data types commonly used in Apache Hive, including datetime, decimal, and complex types like struct, list, map, and union. This ensures compatibility with existing data schemas and simplifies data integration and processing workflows.
  4. Metadata Management: ORC stores metadata using Protocol Buffers, allowing for efficient addition and removal of fields without impacting existing data or applications. This flexibility facilitates schema evolution and data versioning in dynamic environments.
  5. Compatibility with HiveQL: ORC files are compatible with HiveQL, the query language used in Apache Hive, enabling seamless integration with Hive-based data processing pipelines and tools.
  6. Support for Serialization: ORC provides support for data serialization, enabling efficient conversion of data objects into a compact binary format for storage and transmission.

In summary, ORC’s column-oriented storage, high compression rates, extensive type support, flexible metadata management, compatibility with HiveQL, and support for serialization make it a powerful and versatile choice for data storage and analysis in Hadoop environments.
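
As a minimal ORC sketch (assuming a reasonably recent pyarrow build with ORC support; table contents and file names are illustrative), the snippet below writes an Arrow table that includes a nested struct column to ORC with ZLIB compression, then reads back only the columns a query would need:

```python
import pyarrow as pa
from pyarrow import orc

# A small table including a nested struct column, one of the Hive-style
# complex types that ORC represents natively.
table = pa.table({
    "user_id": [1, 2, 3],
    "amount": [10.0, 25.5, 7.25],
    "address": [
        {"city": "Jakarta", "zip": "10110"},
        {"city": "Singapore", "zip": "049315"},
        {"city": "Kuala Lumpur", "zip": "50050"},
    ],
})

# Write an ORC file with ZLIB compression.
orc.write_table(table, "events.orc", compression="zlib")

# Column-oriented layout: read back only the columns the query needs.
subset = orc.ORCFile("events.orc").read(columns=["user_id", "amount"])
print(subset)
```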
