Iceberg

Apache Iceberg is an open table format for large analytic datasets. Supermetal writes Parquet data files directly to Iceberg tables using REST, Glue, or S3 Tables catalogs with S3 or GCS storage.


Spec Versions

Iceberg V1 introduced schema evolution, hidden partitioning, and snapshot isolation. V2 added row-level deletes through position and equality delete files. V3 brings deletion vectors (replacing positional deletes), row-level lineage tracking, the variant type for semi-structured data, and geospatial types.

Supermetal defaults to V3 for new tables. Use V2 if your query engine doesn't support V3 yet.


Write Modes

Merge on Read (default)

SELECT * returns the current state of your data. When rows are updated or deleted, Supermetal writes equality delete files that mask older versions at query time. Duplicate primary keys are resolved using positional deletes (V2) or deletion vectors (V3). No data is rewritten at ingest.

Delete modes: Soft delete (default) preserves deleted rows, queryable via WHERE _sm_deleted = true. Hard delete removes rows completely from query results.

Requires V2 or V3 and a query engine that supports equality deletes (Spark 3.x+, Trino, Dremio, Snowflake, StarRocks).

Append

All changes are appended as new rows. Inserts, updates, and deletes each produce a new data row with metadata columns _sm_deleted and _sm_version. To query current state, filter with WHERE _sm_deleted = false and deduplicate by primary key using _sm_version.

Works with any Iceberg version and any query engine.
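For example, a current-state query over an append-mode table keeps only the latest version of each primary key and drops deleted rows (table name `orders` and key `order_id` are illustrative):

```sql
-- Current state of an append-mode table: keep the latest _sm_version
-- per primary key, then filter out soft-deleted rows.
SELECT *
FROM (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY order_id
           ORDER BY _sm_version DESC
         ) AS rn
  FROM orders
)
WHERE rn = 1
  AND _sm_deleted = false;
```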

Comparison

|  | Merge on Read | Append |
|---|---|---|
| Iceberg version | V2, V3 | V1, V2, V3 |
| Query engine support | Requires equality delete support | Any engine |
| Query complexity | SELECT * returns current state | Requires dedup logic |
| Read performance | Engine applies deletes at read time | Engine scans all versions |

Compaction

File creation rate is controlled by flush interval (default: 10 seconds). Run periodic compaction/maintenance using your query engine (Spark, Trino) or a table management service to optimize read performance.
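With Spark, for instance, the built-in Iceberg maintenance procedures can be scheduled for this (catalog and table names below are placeholders):

```sql
-- Compact small data files produced by frequent flushes.
CALL my_catalog.system.rewrite_data_files(table => 'db.orders');

-- Expire old snapshots to prune unreferenced data and metadata files.
CALL my_catalog.system.expire_snapshots(table => 'db.orders', retain_last => 10);
```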


Prerequisites

You need a catalog (REST, Glue, or S3 Tables), storage credentials for where data files will be written (S3 or GCS), and a target namespace for table creation.


Setup

Catalog

Configure the Iceberg catalog where table metadata is stored.

REST catalog:

| Field | Description |
|---|---|
| URI | Catalog endpoint (e.g., https://catalog.example.com) |
| Warehouse | Storage location identifier |
| Authentication | OAuth2, Bearer, Basic, or SigV4 |

Authentication methods:

| Method | Use Case |
|---|---|
| OAuth2 | Production environments with token endpoint, client ID/secret |
| Bearer | Service accounts, CI/CD with static token |
| Basic | Development, JDBC catalogs with username/password |
| SigV4 | AWS services requiring request signing (region, service) |

Glue catalog:

| Field | Description |
|---|---|
| Warehouse | S3 location (e.g., s3://my-bucket/warehouse) |
| Region | AWS region |
| Catalog ID | AWS account ID (optional) |
| Credentials | Access key and secret |

S3 Tables catalog:

| Field | Description |
|---|---|
| Table Bucket ARN | S3 Tables bucket ARN |
| Region | AWS region |
| Credentials | Access key and secret |

Target Namespace

Tables are created under this namespace. For multi-level namespaces, use comma-separated values: my_database, my_schema creates tables under my_database.my_schema.

Storage Credentials

Credentials for writing Parquet data files to cloud storage.

S3:

| Field | Description |
|---|---|
| Access Key ID | AWS access key |
| Secret Access Key | AWS secret key |
| Region | AWS region (e.g., us-east-1) |
| Endpoint | Custom endpoint for S3-compatible storage |
| Path Style Access | Enable for MinIO and similar |

GCS:

| Field | Description |
|---|---|
| Credentials JSON | Service account key (base64-encoded) |
| Project ID | GCP project identifier |

Write Options

Control how data is written to Iceberg tables. See Write Modes for details on Merge on Read vs Append.

| Field | Default | Description |
|---|---|---|
| Spec Version | V3 | Iceberg table format version |
| Write Mode | Merge on Read | How updates and deletes are handled |
| Delete Mode | Soft | For Merge on Read: Soft preserves audit trail, Hard removes rows |
| Truncate Table if exists | Off | Remove existing data before snapshot sync |
| Metadata Compression | Gzip | Compression for Iceberg metadata files |
| Flush Interval | 10000 ms | Commit frequency |

Parquet Settings

Configure the Parquet file format. Defaults work well for most workloads.

| Field | Default | Description |
|---|---|---|
| Compression | Zstd | Zstd, Snappy, Gzip, Lz4Raw, Brotli, or Uncompressed |
| Compression Level | 3 | Zstd (1-22), Gzip (0-9), or Brotli (0-11) |
| Target File Size | 512 MB | Files roll when exceeding this size |
| Parquet Version | V1 | V1 for compatibility, V2 for better encoding |

Variant Type (V3)

Semi-structured source types such as Postgres JSONB, MySQL JSON, and MongoDB documents are automatically mapped to the Iceberg variant type on V3 tables. Variant encodes nested JSON natively in Parquet's binary variant format, giving query engines columnar access to individual fields without JSON parsing.
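Field extraction syntax varies by engine; as one sketch, Spark SQL 4.0+ exposes a variant_get function (table name `events` and column `payload` are illustrative):

```sql
-- Spark SQL (4.0+): read one field out of a variant column
-- without parsing the full JSON document.
SELECT variant_get(payload, '$.user.id', 'string') AS user_id
FROM events;
```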


Truncate Table if exists

This option is off by default. Enable it to atomically remove all existing data before the initial snapshot sync, preventing duplicate rows when recreating a connector.

The previous data remains accessible via Iceberg time travel, so you can roll back if the sync fails.
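As a sketch of both (snapshot ID, catalog, and table names are illustrative; time-travel syntax varies by engine):

```sql
-- Inspect the table as of an earlier snapshot (Spark/Trino-style).
SELECT count(*) FROM db.orders FOR VERSION AS OF 8744736658442914487;

-- Spark: roll the table back to that snapshot.
CALL my_catalog.system.rollback_to_snapshot('db.orders', 8744736658442914487);
```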


Snapshot Metadata

Each commit writes properties to the Iceberg snapshot summary for debugging and audit:

  • sm.connector_id, sm.run_id - identify which sync produced the snapshot
  • sm.source.commit_ts - source commit timestamp (CDC only)
  • sm.truncated_from_snapshot - previous snapshot ID (truncate only)

Query via SELECT * FROM table$snapshots.
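In Trino, for example, the summary map on the $snapshots metadata table can be indexed directly (table name `db.orders` is illustrative):

```sql
-- Trino: read Supermetal properties from each snapshot's summary map.
SELECT snapshot_id,
       committed_at,
       summary['sm.connector_id'] AS connector_id,
       summary['sm.run_id']       AS run_id
FROM db."orders$snapshots";
```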


Limitations

  • Schema evolution: Data type promotion is not yet supported.
  • Partitioning: Table partitioning is not yet supported.

Data Types

Source types are converted to Iceberg-compatible types. Types without native Iceberg support are stored as strings.

| Arrow Type | Iceberg Type | Notes |
|---|---|---|
| Boolean | boolean | |
| Int8, Int16, Int32 | int | Widened to 32-bit |
| UInt8, UInt16 | int | Widened to 32-bit |
| Int64 | long | |
| UInt32 | long | Widened to 64-bit |
| UInt64 | decimal(20,0) | Exceeds long range |
| Float16, Float32 | float | |
| Float64 | double | |
| Decimal128(p,s) | decimal(p,s) | |
| Decimal256(p,s) | string | Exceeds decimal128 range |
| Date32, Date64 | date | |
| Time32, Time64 | time | Converted to microseconds |
| Timestamp(s/ms/us, tz) | timestamptz | Converted to microseconds, UTC |
| Timestamp(s/ms/us, None) | timestamp | Converted to microseconds |
| Timestamp(ns, *) | long | Query engines lack nanosecond support |
| Utf8, LargeUtf8, Utf8View | string | |
| Binary, LargeBinary, BinaryView | binary | |
| FixedSizeBinary(n) | fixed(n) | |
| List<T>, LargeList<T> | list<T> | |
| Map<K,V> | map<K,V> | |
| Struct | struct | |
| Utf8 with arrow.json extension | variant | V3 tables only |
| Duration, Interval, Union, Null | string | |

Apache Iceberg is a trademark of the Apache Software Foundation. No endorsement by the Apache Software Foundation is implied by the use of this mark.
