storage_engine
Overview
| Package | Version | Category | License | Language |
|---|---|---|---|---|
| storage_engine | 1.3.4 | OLAP | AGPL-3.0 | C |
| ID | Extension | Bin | Lib | Load | Create | Trust | Reloc | Schema |
|---|---|---|---|---|---|---|---|---|
| 2450 | storage_engine | No | Yes | Yes | Yes | No | No | engine |
release 1.2.3; SQL v1.2.1
Version
| Type | Repo | Version | PG Ver | Package | Deps |
|---|---|---|---|---|---|
| EXT | PIGSTY | 1.3.4 | 18, 17, 16, 15, 14 | storage_engine | - |
| RPM | PIGSTY | 1.3.4 | 18, 17, 16, 15, 14 | storage_engine_$v | - |
| DEB | PIGSTY | 1.3.4 | 18, 17, 16, 15, 14 | postgresql-$v-storage-engine | - |
Build
You can build the RPM / DEB packages for storage_engine using pig build:
```bash
pig build pkg storage_engine   # build RPM / DEB packages
```
Install
You can install storage_engine directly. First, make sure the PGDG and PIGSTY repositories are added and enabled:
```bash
pig repo add pgsql -u   # add repo and update cache
```
Install the extension using pig or apt/yum/dnf:
```bash
pig install storage_engine;              # install for the current active PG version
pig ext install -y storage_engine -v 18  # PG 18
pig ext install -y storage_engine -v 17  # PG 17
pig ext install -y storage_engine -v 16  # PG 16
pig ext install -y storage_engine -v 15  # PG 15
pig ext install -y storage_engine -v 14  # PG 14
```

```bash
dnf install -y storage_engine_18  # PG 18
dnf install -y storage_engine_17  # PG 17
dnf install -y storage_engine_16  # PG 16
dnf install -y storage_engine_15  # PG 15
dnf install -y storage_engine_14  # PG 14
```

```bash
apt install -y postgresql-18-storage-engine  # PG 18
apt install -y postgresql-17-storage-engine  # PG 17
apt install -y postgresql-16-storage-engine  # PG 16
apt install -y postgresql-15-storage-engine  # PG 15
apt install -y postgresql-14-storage-engine  # PG 14
```
Preload:
```ini
shared_preload_libraries = 'storage_engine'
```
Create Extension:
```sql
CREATE EXTENSION storage_engine;
```
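After creating the extension, you can confirm its table access methods are registered with a generic catalog query. This is a sketch: the AM names `colcompress` and `rowcompress` come from the Usage section below.

```sql
-- Verify the access methods registered by storage_engine.
-- Assumes CREATE EXTENSION storage_engine has already succeeded.
SELECT amname, amtype
FROM pg_am
WHERE amname IN ('colcompress', 'rowcompress');
-- amtype 't' marks a table access method.
```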
Usage
Sources: README, release 1.3.4, PGXN 1.3.4, PGXN changelog, META.json
storage_engine provides two PostgreSQL table access methods in the `engine` schema:

- `colcompress` for column-oriented compressed storage with vectorized execution, min/max pruning, and parallel scans.
- `rowcompress` for row-batch compression with parallel scans.

```sql
CREATE EXTENSION storage_engine;
```
Quick Start
Create tables using either access method:
```sql
CREATE TABLE events (
    ts         timestamptz NOT NULL,
    user_id    bigint,
    event_type text,
    value      float8
) USING colcompress;

CREATE TABLE logs (
    id        bigserial,
    logged_at timestamptz NOT NULL,
    message   text
) USING rowcompress;
```
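As a quick smoke test, the tables above accept ordinary DML and queries; the sample rows below are made up for illustration.

```sql
INSERT INTO events (ts, user_id, event_type, value)
VALUES
    (now() - interval '1 hour', 42, 'click', 1.0),
    (now(),                     42, 'view',  2.5);

-- An analytical aggregate: the kind of scan colcompress is built for.
SELECT event_type, count(*), sum(value)
FROM events
GROUP BY event_type;
```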
Main Tuning Knobs
Session-level GUCs documented upstream include:
- `storage_engine.enable_parallel_execution`
- `storage_engine.min_parallel_processes`
- `storage_engine.enable_vectorization`
- `storage_engine.enable_custom_scan`
- `storage_engine.enable_column_cache`
- `storage_engine.enable_columnar_index_scan`
- `storage_engine.enable_dml`
- `storage_engine.stripe_row_limit`
- `storage_engine.chunk_group_row_limit`
- `storage_engine.compression_level`
The README says these GUCs become visible once the library is loaded; add `storage_engine` to `shared_preload_libraries` if you want them available immediately in every session.
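With the library loaded, the GUCs above can be toggled per session like any other PostgreSQL setting. The values below are illustrative, not recommendations:

```sql
-- Session-level experimentation with the documented GUCs.
SET storage_engine.enable_vectorization = on;
SET storage_engine.enable_parallel_execution = on;
SET storage_engine.compression_level = 9;

-- Confirm the current setting.
SHOW storage_engine.enable_vectorization;
```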
Types and Operators
`engine.uint8` stores unsigned 64-bit values for `colcompress` workloads that need the full 0 through 2^64 - 1 range. Upstream documents comparison operators (`=`, `<>`, `<`, `<=`, `>`, `>=`), B-tree and hash opclasses, casts to and from `bigint`, `numeric`, and `text`, plus `engine.min`, `engine.max`, and `engine.sum` aggregates.
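A short sketch of the documented `engine.uint8` behavior; the literal below is 2^64 - 1, the upper bound of the range, and the `counters` table is a throwaway example.

```sql
-- bigint tops out at 2^63 - 1, so the largest values must come in
-- via a text or numeric cast rather than a bigint literal.
SELECT '18446744073709551615'::engine.uint8 AS max_uint8;

-- Casts and aggregates documented upstream.
CREATE TABLE counters (n engine.uint8);
INSERT INTO counters VALUES (1::bigint::engine.uint8), ('100');
SELECT engine.min(n), engine.max(n), engine.sum(n) FROM counters;
```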
Useful Management Functions
For colcompress tables:
```sql
SELECT engine.alter_colcompress_table_set(
    'events'::regclass,
    orderby           => 'ts ASC, user_id ASC',
    compression       => 'zstd',
    compression_level => 9
);

SELECT engine.colcompress_merge('events');

CALL engine.colcompress_repack('events');
CALL engine.colcompress_repack('events', 0.7);
```
In 1.3.4, `engine.colcompress_repack(table_name regclass, min_fill_ratio float8 DEFAULT 0.9)` is a procedure for online per-stripe defragmentation of colcompress tables. It repacks stripes whose live-row ratio falls below the threshold. Use `engine.colcompress_merge()` when you need the old full-table rewrite that globally sorts by the `orderby` key.
For rowcompress tables:
```sql
SELECT engine.alter_rowcompress_table_set(
    'logs'::regclass,
    batch_size        => 10000,
    compression       => 'zstd',
    compression_level => 5
);

SELECT engine.rowcompress_repack('logs');
```
When to Use Which AM
- Use `colcompress` for analytical scans, aggregates, and range predicates where projection, vectorization, and stripe/chunk pruning pay off.
- Use `rowcompress` for append-heavy logs or wide rows that are usually fetched together, where compression matters more than column projection.
- For point lookups on `colcompress`, upstream recommends enabling `storage_engine.enable_columnar_index_scan` or the per-table `index_scan` option.
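For a point-lookup-heavy colcompress table, that recommendation might look like the sketch below, using the session GUC and the same setter function shown earlier for `orderby` and `compression`:

```sql
-- Enable index scans for the whole session...
SET storage_engine.enable_columnar_index_scan = on;

-- ...or for one table, via the per-table index_scan option.
SELECT engine.alter_colcompress_table_set(
    'events'::regclass,
    index_scan => true
);

CREATE INDEX ON events (user_id);
SELECT * FROM events WHERE user_id = 42;
```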
Caveats
- Upgrade existing installations with `ALTER EXTENSION storage_engine UPDATE TO '1.3.4';`.
- The old `FUNCTION engine.colcompress_repack(regclass)` alias was removed in 1.3.4. Existing callers should switch to `CALL engine.colcompress_repack('table')` for stripe defragmentation or `SELECT engine.colcompress_merge('table')` for the old full-rewrite behavior.
- `colcompress` and `rowcompress` do not support foreign keys or `AFTER ROW` triggers.
- `VACUUM FULL` and `CREATE UNLOGGED TABLE ... USING colcompress` are not supported; upstream recommends the extension's repack functions instead.
- On `colcompress`, combining `orderby` with B-tree indexes can disable the sort-on-write path, and B-tree indexes on ordered columns can defeat stripe pruning for range queries. Use `engine.colcompress_merge()` after loading data when global ordering matters, and prefer `index_scan => false` for analytical tables.
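The last caveat implies a load-then-merge pattern for bulk ingestion; a sketch, with a hypothetical CSV path:

```sql
-- Bulk load first, then rewrite the table globally sorted by its
-- orderby key so stripe min/max pruning works for range queries.
COPY events FROM '/path/to/events.csv' WITH (FORMAT csv);  -- hypothetical path
SELECT engine.colcompress_merge('events');
```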