Compress much better.


The world's first OpenZL-powered Convert · Compress · Cloud pipeline.
Format-aware compression that beats zstd by 2×. Your data stays local and is processed entirely on your machine.

2× better compression than zstd · 100% local, data stays on your machine · macOS, Windows, Linux, CLI · Free 10 GB/month, no card needed

Three stages. One drag.

Each stage works standalone, or chain them all together.

FREE
01 · Conversion
18 conversion directions
JSON ↔ CSV · CSV → Parquet
NDJSON · TSV · Parquet → JSON

FREE
02 · Compression
Before: 12.4 MB → After: 1.1 MB (11.2×)
Format-aware OpenZL engine
2× better than zstd · train custom profiles

FREE
03 · Cloud Upload
AWS S3 · GCP Storage · Azure Blob
SFTP / any SSH server · MinIO · R2

OpenZL vs the field

Format-aware compression wins where structure exists. General-purpose codecs can't compete on real-world data.

events.json · 12.4 MB structured event log
Speed is measured in megabytes per second (MB/s) · compression ratio: higher = smaller output file
General codec benchmarks: zstd README (Core i7-9700K @ 4.9 GHz, Ubuntu 24.04, Silesia corpus, lzbench). OpenZL ratios measured on representative structured data matching each tab.

Everything in one drag

GUI or CLI — same pipeline, same engine. Drop a file in the app, or pipe it through the terminal.

Your data stays local and is processed locally
Compression, conversion, and profile training run entirely on your CPU via native OpenZL. Nothing is uploaded to any Zippy server. Cloud credentials are stored in your OS keychain — never sent anywhere.
Format-aware compression
OpenZL understands your data's structure — bit-packing integers, delta-encoding timestamps, entropy-coding strings. Generic codecs like gzip or zstd treat every file as opaque bytes and can't compete.
Composable pipeline stages
Each stage works standalone or chains with the next. Drop a CSV and run it through Convert → Compress → Upload to S3 in a single drag. No other local tool does this.
Cloud upload to your own storage
Upload compressed files directly to AWS S3, Google Cloud Storage, Azure Blob Storage, or any SFTP server — including S3-compatible providers like Cloudflare R2, MinIO, and Backblaze B2. Credentials are stored in your OS keychain; Zippy never touches your keys.
Train custom compression profiles
Feed Zippy a sample of your data and it trains a custom OpenZL profile tuned exactly to your schema. Export and share profiles with your team.
Batch workflows for entire folders
Select a folder and run any saved workflow — Convert → Compress → Upload or any subset — across every matching file simultaneously. Define reusable workflows once and apply them on demand. Per-file progress with a full summary report at the end.
Watch Folders — continuous auto-processing
Point Zippy at any directory and it monitors it in real time. Every new file is automatically run through your configured workflow — convert, compress, upload — hands-free. Built-in retry logic, failure handling, and a per-file event log keep your pipeline running unattended.
CLI for terminal & CI/CD
Same pipeline engine, available from your terminal. zippy convert, zippy compress, zippy pipeline, zippy watch — 12 commands total. Install via curl or brew install zippy-cli. Perfect for cron jobs, CI pipelines, and headless servers.
FREE
10 GB/mo
no credit card · no time limit
  • Full pipeline — all features included
  • Convert, compress, cloud upload
  • Watch folders & batch processing
  • GUI app + CLI
  • 10 GB/month processing limit
Download Free →
ANNUAL
$49
per year · cancel any time
  • Everything in Free
  • Unlimited processing — no monthly cap
  • Files up to 10 GB (auto-chunked)
  • macOS · Windows · Linux · CLI
  • Priority support
Get Annual →
Teams / volume pricing? hello@zippypro.xyz

Get Zippy

Common questions

Does my data ever leave my machine?

No, never. Every operation — compression, format conversion, profile training — runs entirely on your own CPU via a native library called OpenZL. When you use the Cloud stage, Zippy uploads directly to your own S3/GCS/Azure bucket using credentials you supply. Zippy has no servers of its own that ever touch your files.

How is Zippy different from just using gzip or zstd?

Tools like gzip and zstd treat your file as a stream of opaque bytes. OpenZL (the compression engine inside Zippy) actually understands your data's structure — it bit-packs integers, delta-encodes timestamps, and entropy-codes string columns. For real-world JSON logs and CSVs, this typically yields 2–11× better compression than zstd, not because the algorithm is cleverer but because it's format-aware.

What exactly does "format-aware" mean technically?

Most compressors see a file as a flat stream of bytes and look for repeated byte sequences. OpenZL goes a level deeper — it parses the semantic structure of your data before compressing it. For a JSON log file, this means:

Integer columns are bit-packed to the minimum number of bits actually needed. A column containing values 0–255 doesn't need 64 bits per value — OpenZL packs it into 8.
Timestamps are delta-encoded: instead of storing each absolute value, only the difference from the previous row is stored. Log timestamps that increment by ~1 ms compress from 8 bytes to 1–2 bytes each.
String columns (like status codes, event types, country codes) are entropy-coded with a frequency table — common values get shorter codes, rare ones get longer.
Repeated schema keys in JSON (the field names themselves) are factored out entirely and stored once in a header.

The result: a 12 MB JSON log file that zstd compresses to ~3 MB can reach 1.1 MB with OpenZL — the same data, structured differently at the bit level.
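The transforms described above can be illustrated with a minimal Python sketch. This is a conceptual toy, not OpenZL's actual implementation: it delta-encodes a timestamp column, then counts the bits a packed representation of the deltas would need versus naive 64-bit storage.

```python
# Conceptual sketch (not OpenZL's actual code) of two column transforms:
# delta-encoding a timestamp column, then bit-packing the small deltas.

def delta_encode(values):
    """Store the first value, then only row-to-row differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def bits_needed(n):
    """Minimum bits to represent a non-negative integer."""
    return max(1, n.bit_length())

# Log timestamps in ms, incrementing by ~1 ms per event.
timestamps = [1_700_000_000_000 + i for i in range(1000)]

deltas = delta_encode(timestamps)
raw_bits = 64 * len(timestamps)  # naive: 64 bits per absolute value
# packed: one full first value, then 1-bit deltas for the rest
packed_bits = 64 + bits_needed(max(deltas[1:])) * (len(deltas) - 1)

print(raw_bits // 8, "bytes raw")      # 8000 bytes
print(packed_bits // 8, "bytes packed")  # 132 bytes
```

The same idea generalises: any column whose values cluster in a narrow range after a reversible transform can be stored in far fewer bits per value.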

What kinds of data compress best with Zippy?

Best gains (5–12× ratio):

— JSON logs with repeated keys and numeric values: event streams, application logs, analytics exports
— CSV tables with typed columns: time series, sensor data, financial records, user tables
— Parquet files: already columnar, so OpenZL's column-level encoding stacks on top cleanly
— NDJSON / JSON Lines: one object per line means OpenZL can infer the schema from the first batch and apply it to the rest

Moderate gains (1.5–3×):

— Mixed-type JSON with highly irregular schemas
— TSV files with many free-text string columns
— Sparse data with lots of nulls

Minimal gains (falls back to zstd-level):

— Already-compressed files (.zip, .mp4, .png, .jpg) — there's no structure left to exploit
— Random binary data (model weights, encrypted files)
— Very small files under ~10 KB where overhead dominates

Zippy always shows the actual ratio after compression — if OpenZL doesn't beat zstd on a particular file, you'll see it immediately in the benchmark panel.

What is profile training and when should I use it?

A compression profile is a trained model of your specific data's statistical patterns. Instead of OpenZL inferring structure at runtime on each file, a profile pre-learns the schema — column names, value distributions, integer ranges, string frequency tables — from a sample of your real data.

How to train one: Go to Settings → Profiles → New Profile, give Zippy a folder of representative sample files (50–200 MB works well), and let it run. Training takes 10–60 seconds depending on sample size. The result is a .zl-profile file (metadata) paired with a .zl-compressor binary that encodes the learned model.

When it matters:

— You compress the same schema repeatedly (daily log exports, recurring ETL outputs)
— Your data has domain-specific patterns a generic run wouldn't see in a single pass
— You want consistent, predictable ratios across a batch rather than per-file variance

Sharing profiles: Export a profile and share the .zl-profile + .zl-compressor pair with teammates. Anyone who imports it gets the same compression behaviour on matching data — useful for standardising storage across a team or pipeline.

Profile training is available on the free tier (within the 10 GB/month limit) and all paid plans.
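The core of what a profile pre-learns for a string column can be sketched with textbook Huffman coding (OpenZL's actual profile format is not documented here, so this is a stand-in for the idea, not the real artifact): count value frequencies in a sample, then assign shorter codes to common values.

```python
# Conceptual sketch: a frequency table learned from sample data,
# turned into per-value Huffman code lengths. Textbook Huffman coding
# stands in for OpenZL's real (undocumented here) profile format.
import heapq
from collections import Counter

def code_lengths(samples):
    """Huffman code length (in bits) per distinct value."""
    freq = Counter(samples)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # heap items: (weight, tiebreak, {value: depth})
    heap = [(w, i, {v: 0}) for i, (v, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        merged = {v: d + 1 for v, d in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# A status-code column: "ok" dominates, errors are rare.
sample = ["ok"] * 90 + ["retry"] * 7 + ["error"] * 3
lengths = code_lengths(sample)
print(lengths)  # "ok" gets the shortest code
```

Training on a representative sample means this table is computed once, up front, instead of being re-inferred on every file, which is exactly why repeated schemas benefit most.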

What file formats does Zippy support?

Zippy converts between JSON, CSV, Parquet, NDJSON (JSON Lines), and TSV — 18 conversion directions in total. For compression, it handles any file but gets the biggest gains on structured formats (JSON, CSV, Parquet). Binary files like model weights or images are compressed with a fallback general-purpose codec.

Is there a limit on file size?

You can process files up to 10 GB. Zippy automatically splits large files into chunks (transparent to you) and reassembles them. The underlying OpenZL library requires the whole payload in memory, so very large files will use more RAM — roughly 10× the file size during compression.
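The chunk-and-reassemble idea can be sketched in a few lines of Python, with zlib standing in for the OpenZL codec (Zippy's real chunk size and framing are not documented here, so both are assumptions):

```python
# Sketch of split-compress-reassemble, with zlib as a stand-in codec.
# The 1 MiB chunk size is an assumption, not Zippy's documented value.
import zlib

CHUNK = 1 << 20  # 1 MiB per chunk

def compress_chunked(data: bytes, chunk=CHUNK):
    """Compress fixed-size slices independently, bounding peak memory."""
    return [zlib.compress(data[i:i + chunk]) for i in range(0, len(data), chunk)]

def decompress_chunked(chunks):
    """Reassemble the original payload from independently compressed chunks."""
    return b"".join(zlib.decompress(c) for c in chunks)

payload = b"timestamp,status\n" * 200_000  # ~3.4 MB of repetitive CSV
chunks = compress_chunked(payload)
assert decompress_chunked(chunks) == payload
```

Compressing chunks independently caps RAM at roughly one chunk's working set instead of the whole file, at the cost of slightly worse ratios across chunk boundaries.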

How does the free tier work?

The free tier gives you 10 GB of processing per month with all features unlocked — no time limit, no credit card. Decompression is always free and doesn't count toward the limit. When you exceed 10 GB, you can activate a license for unlimited processing: $49/year or $99 for lifetime access.

Is the lifetime license really forever?

Yes. One payment, no renewal, no subscription. You get all updates released during the major version you bought. If you need to switch machines, just deactivate on the old one and activate on the new one — no limit on transfers.

Which cloud providers does Zippy support?

AWS S3, Google Cloud Storage, Azure Blob Storage, and any SFTP/SSH server (your NAS, home server, or internal Linux box). S3-compatible services like Cloudflare R2, MinIO, and Backblaze B2 also work — just enter your custom endpoint in the S3 settings.

How are my cloud credentials stored?

Credentials are stored in your operating system's secure keychain — macOS Keychain, Windows Credential Manager, or Linux Secret Service. They're never sent to Zippy's servers (there aren't any). You can test the connection before your first upload so you know everything is wired up correctly.

Does it work on Windows and Linux too?

Yes — Zippy ships signed native installers for macOS (.dmg), Windows (.exe wizard installer), and Linux (.deb + AppImage). All three platforms get the same features. Auto-updates are included. The CLI is also available for macOS and Linux via a single curl command or Homebrew.

Can I use Zippy without an internet connection?

Absolutely. Conversion and compression are 100% offline. The Cloud stage obviously needs a network connection to reach your bucket, and license activation requires a one-time internet check — but after that, the app works fully offline.

Powered by OpenZL

Zippy is the world's first desktop application built on OpenZL — Meta's open-source format-aware compression engine.

What is OpenZL?
OpenZL is an open-source compression library by Meta Platforms. Unlike gzip, zstd, or lz4 — which treat every file as a flat byte stream — OpenZL parses the semantic structure of your data before compressing it. Integer columns are bit-packed, timestamps are delta-encoded, and string columns are entropy-coded with frequency tables. The result: 2–11× better compression on real-world JSON and CSV data.
OpenZL desktop app + CLI — now available
Zippy wraps the native OpenZL library in a drag-and-drop desktop interface for macOS, Windows, and Linux. For terminal users, the CLI provides the same pipeline engine with 12 commands — perfect for cron jobs, CI/CD, and headless servers.
OpenZL vs zstd benchmark
On structured JSON log data: 12.4 MB → 1.1 MB with OpenZL (11.2× ratio) vs 3.0 MB with zstd (4.1× ratio). On CSV tables with typed columns: 8.7 MB → 1.4 MB with OpenZL (6.2× ratio) vs 2.5 MB with zstd (3.5× ratio). Gains depend on data regularity — Zippy always shows the actual post-compression ratio so you can verify yourself.
Custom OpenZL compression profiles
Zippy lets you train a custom OpenZL profile from a sample of your own data. The profile pre-learns your schema's statistical patterns — column distributions, value ranges, string frequencies — and applies them on every subsequent compression run. Export and share profiles across your team for consistent, reproducible compression ratios.