The world's first OpenZL-powered
Convert
→
Compress
→
Cloud
pipeline.
Format-aware compression that beats zstd by 2×. Your data never leaves your machine.
Each stage works standalone, or chain them all together.
Format-aware compression wins where structure exists. General-purpose codecs can't compete on real-world data.
No CLI flags to memorize. No scripts to maintain. Drop a file, pick your pipeline, go.
Signed installers for all platforms. Auto-updates included.
No, never. Every operation — compression, format conversion, profile training — runs entirely on your own CPU via a native library called OpenZL. When you use the Cloud stage, Zippy uploads directly to your own S3/GCS/Azure bucket using credentials you supply. Zippy has no servers of its own that ever touch your files.
Tools like gzip and zstd treat your file as a stream of opaque bytes. OpenZL (the compression engine inside Zippy) actually understands your data's structure — it bit-packs integers, delta-encodes timestamps, and entropy-codes string columns. For real-world JSON logs and CSVs, this typically yields 2–11× better compression than zstd, not because the algorithm is cleverer but because it's format-aware.
Most compressors see a file as a flat stream of bytes and look for repeated byte sequences. OpenZL goes a level deeper — it parses the semantic structure of your data before compressing it. For a JSON log file, this means:
Integer columns are bit-packed to the minimum number of bits actually needed. A column containing values 0–255 doesn't need 64 bits per value — OpenZL packs it into 8.
Timestamps are delta-encoded: instead of storing each absolute value, only the difference from the previous row is stored. Log timestamps that increment by ~1 ms compress from 8 bytes to 1–2 bytes each.
String columns (like status codes, event types, country codes) are entropy-coded with a frequency table — common values get shorter codes, rare ones get longer.
Repeated schema keys in JSON (the field names themselves) are factored out entirely and stored once in a header.
The result: a 12 MB JSON log file that zstd compresses to ~3 MB can reach 1.1 MB with OpenZL — the same data, structured differently at the bit level.
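Delta encoding and bit-packing are standard columnar transforms, and the arithmetic behind them is easy to verify. The following toy Python sketch (our own illustration, not OpenZL's actual API) shows why a timestamp column that ticks up by ~1 ms shrinks so dramatically:

```python
def delta_encode(values):
    """Store the first value, then only the difference to the previous one."""
    deltas = [values[0]]
    for prev, cur in zip(values, values[1:]):
        deltas.append(cur - prev)
    return deltas

def bits_needed(values):
    """Minimum bits per value to represent every value in the column."""
    return max(1, max(values).bit_length())

# Millisecond timestamps incrementing by 1 ms, as in a typical log file.
timestamps = [1_700_000_000_000 + i for i in range(1000)]

raw_bits = 64 * len(timestamps)  # stored naively: 8 bytes per value
deltas = delta_encode(timestamps)
# Keep the first value at full width, bit-pack the (tiny) deltas.
packed_bits = 64 + bits_needed(deltas[1:]) * (len(deltas) - 1)

print(raw_bits // 8, "bytes raw")       # 8000 bytes
print(packed_bits // 8, "bytes packed")
```

Here every delta is 1, so each packed value needs a single bit instead of 64. Real data is noisier than this toy column, but the same two transforms are what turn 8-byte timestamps into 1–2 bytes each.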
Best gains (5–12× ratio):
— JSON logs with repeated keys and numeric values: event streams, application logs, analytics exports
— CSV tables with typed columns: time series, sensor data, financial records, user tables
— Parquet files: already columnar, so OpenZL's column-level encoding stacks on top cleanly
— NDJSON / JSON Lines: one object per line means OpenZL can infer the schema from the first batch and apply it to the rest
Moderate gains (1.5–3×):
— Mixed-type JSON with highly irregular schemas
— TSV files with many free-text string columns
— Sparse data with lots of nulls
Minimal gains (falls back to zstd-level):
— Already-compressed files (.zip, .mp4, .png, .jpg) — there's no structure left to exploit
— Random binary data (model weights, encrypted files)
— Very small files under ~10 KB where overhead dominates
Zippy always shows the actual ratio after compression — if OpenZL doesn't beat zstd on a particular file, you'll see it immediately in the benchmark panel.
A compression profile is a trained model of your specific data's statistical patterns. Instead of OpenZL inferring structure at runtime on each file, a profile pre-learns the schema — column names, value distributions, integer ranges, string frequency tables — from a sample of your real data.
How to train one: Go to Settings → Profiles → New Profile, give Zippy a folder of representative sample files (50–200 MB works well), and let it run. Training takes 10–60 seconds depending on sample size. The result is a .zl-profile file (metadata) paired with a .zl-compressor binary that encodes the learned model.
When it matters:
— You compress the same schema repeatedly (daily log exports, recurring ETL outputs)
— Your data has domain-specific patterns a generic run wouldn't see in a single pass
— You want consistent, predictable ratios across a batch rather than per-file variance
Sharing profiles: Export a profile and share the .zl-profile + .zl-compressor pair with teammates. Anyone who imports it gets the same compression behaviour on matching data — useful for standardising storage across a team or pipeline.
Profile training is a paid feature, included in the free 45-day trial and on all paid plans.
Zippy converts between JSON, CSV, Parquet, NDJSON (JSON Lines), and TSV — 18 conversion directions in total. For compression, it handles any file but gets the biggest gains on structured formats (JSON, CSV, Parquet). Binary files like model weights or images are compressed with a fallback general-purpose codec.
During the free 45-day trial and on all paid plans you can process files up to 10 GB. Zippy automatically splits large files into chunks (transparent to you) and reassembles them. The underlying OpenZL library holds each payload it compresses fully in memory, so expect compression to use significant RAM: roughly 10× the size of the data being processed at a time.
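Chunked compression itself is a simple pattern. Here is a minimal sketch using Python's `zlib` as a stand-in codec (Zippy's real chunking and framing are internal to the app and not shown here), demonstrating that split, compress-per-chunk, reassemble round-trips losslessly while only ever holding one chunk in memory:

```python
import io
import zlib

def compress_chunked(stream, chunk_size):
    """Yield independently compressed chunks of the input stream."""
    while chunk := stream.read(chunk_size):
        yield zlib.compress(chunk)

def decompress_chunked(chunks):
    """Reassemble the original bytes from the compressed chunks."""
    return b"".join(zlib.decompress(c) for c in chunks)

# ~10 MB of repetitive CSV-like text, split into 1 MiB chunks.
data = b"timestamp,level,msg\n" * 500_000
chunks = list(compress_chunked(io.BytesIO(data), chunk_size=1024 * 1024))

assert decompress_chunked(chunks) == data  # lossless round-trip
print(len(chunks), "chunks,", sum(len(c) for c in chunks), "compressed bytes")
```

Because each chunk is compressed independently, peak memory is bounded by the chunk size rather than the file size, at the cost of slightly worse ratios than compressing the whole file in one pass.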
After the trial, Zippy switches to a read-only mode — you can still open and inspect compressed files but not create new ones. To unlock everything, choose either a $49/year subscription or a one-time payment of $99 for lifetime access. No recurring fees unless you pick the annual plan.
Yes. One payment, no renewal, no subscription. You get all updates released during the major version you bought. If you need to switch machines, just deactivate on the old one and activate on the new one — no limit on transfers.
AWS S3, Google Cloud Storage, Azure Blob Storage, and any SFTP/SSH server (your NAS, home server, or internal Linux box). S3-compatible services like Cloudflare R2, MinIO, and Backblaze B2 also work — just enter your custom endpoint in the S3 settings.
Credentials are stored in your operating system's secure keychain — macOS Keychain, Windows Credential Manager, or Linux Secret Service. They're never sent to Zippy's servers (there aren't any). You can test the connection before your first upload so you know everything is wired up correctly.
Yes — Zippy ships signed native installers for macOS (.dmg), Windows (.exe wizard installer), and Linux (.deb + AppImage). All three platforms get the same features. Auto-updates are included.
Absolutely. Conversion and compression are 100% offline. The Cloud stage obviously needs a network connection to reach your bucket, and license activation requires a one-time internet check — but after that, the app works fully offline.
Zippy is the world's first desktop application built on OpenZL — Meta's open-source format-aware compression engine.