A database for modern
airline retailing. At today's performance.
Order storage with append-only journaling. Offer storage with versioned reads. One engine, two airline jobs — same SDKs, same ops surface, branded for whichever side you're hiring it for.
Apache 2.0 · free to fork & deploy. Powered by DocumentForge.
Numbers that should embarrass
your incumbent stack.
Measured on a single 8-core node with the airline-orders workload from the reference benchmark suite — append-heavy writes, point lookups by PNR and passenger, and revision walks across the order journal.
Point lookups, single node
Sustained append throughput
Embedded read latency
Networked read latency
vs Postgres on document workloads
vs MongoDB on revision walks
Linear scale tested
From binary to first query
Measured 2026-04 · 8-core / 32GB / NVMe · workload generator and full results in the docs. Reproduce on your own hardware in under five minutes.
Why a new database?
Airline data is document-shaped: an offer is a tree, an order is a ledger, a schedule is a stream. Forcing it through a relational schema makes every airline pay the same compromise twice: flatten the documents to write them, reassemble them to read them. The Open Order Database Engine lets the data be what it is — and scales to the read patterns offers and orders actually have.
It's the same engine that powers the Open Tax Engine and Open Schedules Data — so the rest of the AeroToys stack inherits its performance for free.
Document-native
Append-only journaling, versioned reads, and snapshot semantics — the audit trail airlines need, in the database itself.
SQL-fluent
A familiar query surface — joins, aggregates, projections — over a document model. Your engineers don't need a new mental model.
Apache 2.0
Inspect every line. Fork it. Run it on your own metal. No vendor holding your offer history hostage.
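The journaling model described above — append-only writes, versioned reads, snapshot semantics — can be sketched conceptually. This is an illustrative in-memory model only, not the engine's actual API; every name here (`Journal`, `append`, `readAt`, `walk`) is invented for the sketch:

```typescript
// Conceptual model of append-only journaling with versioned reads.
// NOT the engine's API -- all names here are illustrative assumptions.

type Revision<T> = { version: number; doc: T };

class Journal<T> {
  private revisions: Revision<T>[] = [];

  // Appends never mutate prior state; each write is a new revision.
  append(doc: T): number {
    const version = this.revisions.length + 1;
    this.revisions.push({ version, doc });
    return version;
  }

  // A versioned read returns the document as of a given revision:
  // snapshot semantics, and an audit trail for free.
  readAt(version: number): T | undefined {
    return this.revisions.find((r) => r.version === version)?.doc;
  }

  // A "revision walk": iterate an order's full history in write order.
  *walk(): IterableIterator<Revision<T>> {
    yield* this.revisions;
  }
}

// Usage: an order's lifecycle as an append-only ledger.
const order = new Journal<{ status: string }>();
order.append({ status: "created" });   // version 1
order.append({ status: "ticketed" });  // version 2
console.log(order.readAt(1)?.status);  // "created" -- history survives
```

The point of the model: a write never destroys what came before, so "what did this order look like at revision N" is a lookup, not a reconstruction.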
Performance. Scalability. Simplicity.
In that order.
Three properties most airline systems treat as competing tradeoffs. The Open Order Database Engine treats them as a hierarchy — performance first, then scale, then deliberately staying simple.
Performance
Sub-millisecond reads from in-process or networked workloads. Indexes designed around how offers and orders are actually queried — by passenger, by PNR, by effective-from window — not generic B-trees that punish you on document-shaped traffic.
- Memory-mapped reads, lock-free hot paths
- Zero-copy projections for partial document fetches
- Tiered cache for the working set you actually have
Scalability
Single binary at one airline; replicated across a cluster at the next. The same engine, the same query surface, the same data shape — no rewrite when you grow into multi-region deployment, sharding, or read replicas.
- Horizontal sharding when you grow into it — not before
- Replication with strongly consistent reads on the leader
- Multi-region topology, no architectural rewrite
Simplicity
One binary. One config file. No JVM, no Erlang VM, no sidecar service mesh. Embed it directly inside your application or run it as a network server — same engine, same SDK. Boring infrastructure, on purpose.
- One binary, embeddable or networked
- Zero-dependency runtime, deploys anywhere
- Clean SDKs in TS, Go, Java, .NET, Python
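The "same engine, same SDK" claim above can be sketched as one client interface satisfied by both deployment modes. Everything here is a hypothetical shape, not the real SDK: `OrderClient`, `EmbeddedEngine`, and the query string are assumptions made for illustration, with a trivial in-memory stand-in playing the role of the engine.

```typescript
// Hypothetical sketch: one interface, two deployment modes.
// All names here are invented for illustration, not the real SDK.

interface OrderClient {
  query(sql: string): Promise<Record<string, unknown>[]>;
}

// Embedded mode: the engine runs inside your process. A trivial
// in-memory stand-in plays the role of the real storage engine here.
class EmbeddedEngine implements OrderClient {
  constructor(private rows: Record<string, unknown>[]) {}
  async query(_sql: string): Promise<Record<string, unknown>[]> {
    return this.rows; // a real engine would parse and execute the SQL
  }
}

// Networked mode would implement the SAME interface over a socket,
// so application code never changes when the topology does:
//
//   class NetworkedEngine implements OrderClient {
//     async query(sql: string) { /* send over the wire */ }
//   }

// Application code depends only on the interface, never the transport.
async function latestOrders(db: OrderClient) {
  return db.query("SELECT pnr, status FROM orders ORDER BY updated_at DESC");
}

const db = new EmbeddedEngine([{ pnr: "ABC123", status: "ticketed" }]);
latestOrders(db).then((rows) => console.log(rows.length)); // 1
```

The design point: swapping embedded for networked is a constructor change, not an application rewrite.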

The full reference site.
Architecture, deployment, and SDKs
live on the dedicated docs site.
Storage engine internals, replication model, query language reference, client SDK guides, deployment topologies, and benchmark methodology — maintained alongside the source. Open for contributions.
Help us build the database
airlines deserve.
Storage engineers, replication nerds, document-DB enthusiasts — and especially airlines who'd run it. We'd love to hear from you.
