Core Concepts · Performance

v1.0.0 vs v2.0.0

rs-x v2 rewrites the parser and introduces a compiled expression engine. This page compares every benchmark metric side by side across v1.0.0 and v2.0.0 (both tree and compiled modes). Measured on an Apple M4 running Node.js v25.4.0.

Summary

v2 ships two fundamental changes: a new recursive-descent parser that eliminates the old parser's fixed startup overhead, and a compiled expression engine. The AOT compiler generates a native JS function for each expression at build time; at runtime, rs-x looks up the pre-generated function and calls it directly. The compiled engine accounts for most of the update improvements.
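To make the tree-vs-compiled distinction concrete, here is a minimal sketch of the two evaluation strategies. The AST shape, `evalTree`, and `compiled` are illustrative names, not rs-x's actual internals:

```javascript
// Tree mode: walk an AST node-by-node on every evaluation.
function evalTree(node, scope) {
  switch (node.type) {
    case "binary":
      return evalTree(node.left, scope) + evalTree(node.right, scope);
    case "member":
      return evalTree(node.object, scope)?.[node.property];
    case "ident":
      return scope[node.name];
    case "literal":
      return node.value;
  }
}

// Compiled mode: the build step could emit a plain JS function for the
// same expression, which V8 can then JIT-optimise like any other call.
const compiled = (scope) => scope.user?.name + "!";

// Hand-built AST for the expression `user.name + "!"`.
const ast = {
  type: "binary",
  left: { type: "member", object: { type: "ident", name: "user" }, property: "name" },
  right: { type: "literal", value: "!" },
};

const scope = { user: { name: "Ada" } };
console.log(evalTree(ast, scope)); // "Ada!"
console.log(compiled(scope));      // "Ada!"
```

Both paths produce the same value; the compiled path simply skips the per-node dispatch that tree mode pays on every update.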

The one regression is upfront bind cost: v2 does more work per binding than v1 — it compiles the expression, sets up a plan cache entry, and registers typed watchers. For applications that are update-heavy relative to bind count (the common case), this cost pays back quickly.
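A back-of-envelope calculation using the 10,000-binding numbers from the table below shows how quickly the extra bind cost is repaid:

```javascript
// Figures from the benchmark table (ms): "Bind unique 10,000" and
// "Bulk update 10,000", for v1 and v2 compiled mode.
const bind = { v1: 521.444, v2compiled: 561.750 };
const bulkUpdate = { v1: 146.234, v2compiled: 61.112 };

const extraBindCost = bind.v2compiled - bind.v1;              // ~40.3 ms paid once
const savedPerUpdate = bulkUpdate.v1 - bulkUpdate.v2compiled; // ~85.1 ms per bulk update

// Bulk updates needed before v2 compiled is net faster overall:
const breakEven = Math.ceil(extraBindCost / savedPerUpdate);
console.log(breakEven); // 1 — the very first bulk update repays the bind cost
```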

| Metric | Unit | v1.0.0 | v2 tree | v2 compiled | Tree gain |
| --- | --- | ---: | ---: | ---: | ---: |
| Parse 1 node | µs/op | 5.482 | 0.731 | 0.771 | +86.7% |
| Parse 3 nodes | µs/op | 6.993 | 1.866 | 1.887 | +73.3% |
| Parse 7 nodes | µs/op | 10.524 | 4.095 | 4.088 | +61.1% |
| Parse 15 nodes | µs/op | 17.710 | 8.486 | 8.678 | +52.1% |
| Parse 31 nodes | µs/op | 25.173 | 17.528 | 17.766 | +30.4% |
| Parse 63 nodes | µs/op | 44.295 | 35.618 | 35.775 | +19.6% |
| Parse+clone 63 nodes | µs/op | 80.986 | 38.342 | 199.400 | +52.7% |
| Bind unique 1,000 | ms | 35.092 | 38.350 | 32.317 | -9.3% |
| Bind same 1,000 | ms | 25.444 | 43.373 | 45.661 | -70.5% |
| Bind unique 10,000 | ms | 521.444 | 737.067 | 561.750 | -41.4% |
| Bind same 10,000 | ms | 638.054 | 884.867 | 440.759 | -38.7% |
| Single update 1,000 | ~ms | 0.089 | 0.009 | 0.008 | +90.4% |
| Bulk update 1,000 | ms | 7.904 | 2.388 | 2.873 | +69.8% |
| Single update 10,000 | ~ms | 0.107 | 0.002 | 0.002 | +98.1% |
| Bulk update 10,000 | ms | 146.234 | 72.809 | 61.112 | +50.2% |

~ = high-variance measurement; treat as indicative. A positive gain % means v2 is faster than v1.

Parsing: up to 87% faster

The v2 parser uses a hand-written recursive-descent approach instead of the general-purpose parser used in v1. The most dramatic improvement is at the low end: a single-identifier expression dropped from 5.5 µs to 0.7 µs — 87% faster. Larger expressions improve 20–30% as the fixed overhead becomes a smaller fraction of total parse time.

Both modes use the same parser, so parse performance is effectively identical for tree and compiled modes.
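For intuition, a hand-written recursive-descent parser for a tiny expression grammar (identifiers, dot access, and `+`) might look like the sketch below. The grammar and names are illustrative, not rs-x's actual parser:

```javascript
// grammar:  sum := member ("+" member)*
//           member := ident ("." ident)*
function parse(src) {
  let pos = 0;
  const peek = () => src[pos];
  const skipWs = () => { while (src[pos] === " ") pos++; };

  function ident() {
    skipWs();
    const start = pos;
    while (pos < src.length && /[\w$]/.test(src[pos])) pos++;
    return { type: "ident", name: src.slice(start, pos) };
  }

  function member() {
    let node = ident();
    skipWs();
    while (peek() === ".") {
      pos++; // consume "."
      node = { type: "member", object: node, property: ident().name };
      skipWs();
    }
    return node;
  }

  function sum() {
    let node = member();
    skipWs();
    while (peek() === "+") {
      pos++; // consume "+"
      node = { type: "binary", op: "+", left: node, right: member() };
      skipWs();
    }
    return node;
  }

  return sum();
}

const ast = parse("user.name + suffix");
console.log(ast.type);          // "binary"
console.log(ast.left.property); // "name"
```

Because each grammar rule is a plain function call with no table-driven dispatch, a parser of this shape has essentially zero fixed startup cost, which is where the single-node numbers improve the most.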

| Nodes | v1.0.0 (µs) | v2.0.0 (µs) | Gain |
| ---: | ---: | ---: | ---: |
| 1 | 5.482 | 0.731 | +87% |
| 3 | 6.993 | 1.866 | +73% |
| 7 | 10.524 | 4.095 | +61% |
| 15 | 17.710 | 8.486 | +52% |
| 31 | 25.173 | 17.528 | +30% |
| 63 | 44.295 | 35.618 | +20% |

Binding: v2 costs more upfront

Bind cost in v2 is higher than in v1, but this is largely a cost shift. v2 resolves all expression dependencies and builds the full watch graph once at bind time — work that v1 deferred to every individual update evaluation. In tree mode, bind cost is within ~10% of v1 for unique expressions. Compiled mode has a similar bind profile; the savings show up at evaluation time.

The tradeoff is that subsequent calls to update the same binding are significantly faster, and memory usage is lower (compiled plans are shared). For most applications that bind once and update many times, this is a net positive.
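A minimal sketch of this bind-time tradeoff, with hypothetical names (not rs-x's API): dependencies are resolved once at bind, and an update then touches only the expressions registered for the changed key.

```javascript
class Binder {
  constructor() {
    this.watchers = new Map(); // dependency key -> Set of evaluator fns
  }

  // bind(): the upfront work — register a watcher per resolved dependency.
  bind(deps, evaluate) {
    for (const key of deps) {
      if (!this.watchers.has(key)) this.watchers.set(key, new Set());
      this.watchers.get(key).add(evaluate);
    }
  }

  // update(): the cheap path — notify only the affected expressions.
  update(key, scope) {
    const affected = this.watchers.get(key);
    if (!affected) return 0;
    for (const evaluate of affected) evaluate(scope);
    return affected.size;
  }
}

const b = new Binder();
let rendered = "";
b.bind(["user.name"], (s) => { rendered = s.user.name; });
b.bind(["count"], (s) => { rendered = String(s.count); });

console.log(b.update("user.name", { user: { name: "Ada" } })); // 1
console.log(rendered); // "Ada"
```

The `Map`/`Set` bookkeeping is exactly the per-binding work that makes bind slower and every subsequent update faster.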

| Bindings | v1 bind (ms) | v2 tree (ms) | v2 compiled (ms) |
| ---: | ---: | ---: | ---: |
| 1,000 | 35.092 | 38.350 | 32.317 |
| 3,000 | 121.675 | 143.833 | 106.509 |
| 5,000 | 235.588 | 260.666 | 193.635 |
| 10,000 | 521.444 | 737.067 | 561.750 |

Updates: 50–70% faster

Updates are where v2 wins clearly. Tree mode is already faster than v1 for bulk updates — the new watcher architecture notifies only affected expressions more efficiently. Compiled mode is faster still: it calls the pre-compiled JS function directly, which V8 JIT-optimises as a regular function call.

At 10,000 bindings with a bulk update, v2 compiled mode takes 61 ms vs 146 ms in v1 — 58% faster. Single update times are very small in all versions and not the meaningful comparison.

| Bindings | v1 bulk update (ms) | v2 tree (ms) | v2 compiled (ms) | Best gain |
| ---: | ---: | ---: | ---: | ---: |
| 1,000 | 7.904 | 2.388 | 2.873 | +70% |
| 3,000 | 29.483 | 13.048 | 18.277 | +56% |
| 5,000 | 55.091 | 21.263 | 28.310 | +61% |
| 10,000 | 146.234 | 72.809 | 61.112 | +58% |

Parse cache (parse + clone): v2 tree is faster

When an expression is already cached and a new binding clones the cached AST, v2 tree mode is consistently faster than v1. The v1 clone included more allocations per node; v2 uses a tighter object structure. Compiled mode replaces AST cloning with a plan cache lookup, which has different cost characteristics.
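The contrast between the two caching strategies can be sketched as follows (hypothetical structures, not rs-x's actual cache):

```javascript
// Tree mode: each new binding deep-clones the cached AST —
// one fresh object allocation per node.
function cloneAst(node) {
  const copy = { ...node };
  if (node.left) copy.left = cloneAst(node.left);
  if (node.right) copy.right = cloneAst(node.right);
  if (node.object) copy.object = cloneAst(node.object);
  return copy;
}

// Compiled mode: every binding shares one immutable plan —
// a cache lookup instead of a per-binding copy.
const planCache = new Map();
function getPlan(src, compile) {
  let plan = planCache.get(src);
  if (!plan) { plan = compile(src); planCache.set(src, plan); }
  return plan;
}

const ast = { type: "binary", left: { type: "ident" }, right: { type: "ident" } };
const copy = cloneAst(ast);
console.log(copy !== ast && copy.left !== ast.left); // true — fresh nodes

const p1 = getPlan("a + b", () => ({ fn: (s) => s.a + s.b }));
const p2 = getPlan("a + b", () => ({ fn: (s) => s.a + s.b }));
console.log(p1 === p2); // true — shared plan, no per-binding allocation
```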

| Nodes | v1 parse+clone (µs) | v2 parse+clone (µs) | v2 clone-only (µs) |
| ---: | ---: | ---: | ---: |
| 1 | 5.444 | 1.122 | 0.543 |
| 3 | 11.013 | 2.403 | 1.587 |
| 7 | 15.638 | 4.707 | 3.580 |
| 15 | 23.276 | 9.500 | 7.587 |
| 31 | 41.113 | 19.193 | 15.729 |
| 63 | 80.986 | 38.342 | 32.311 |