Memory and disposal

rs-x uses a reference-counted binding graph. Watchers are shared across bindings and released when the last consumer disposes. Memory grows predictably with binding count and expression complexity, and compiled mode significantly reduces memory for expressions that are bound many times. All figures below were measured on an Apple M4 running Node.js v25.4.0.

How rs-x uses memory

Every binding holds references to:

  • The expression instance (AST nodes in tree mode; a compiled plan reference in compiled mode)
  • One watcher entry per unique dependency field
  • A subscription to each watcher it depends on
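The per-binding structure above can be sketched in TypeScript. These shapes are illustrative only; the names (`Watcher`, `Binding`, `CompiledPlan`) are hypothetical and not rs-x's public API:

```typescript
// Illustrative shapes only -- not rs-x's real internals.
type AstNode = { kind: string; children: AstNode[] };
type CompiledPlan = { evaluate: (model: Record<string, unknown>) => unknown };

interface Watcher {
  field: string;    // one entry per unique dependency field
  refCount: number; // shared across bindings; released when it reaches zero
}

interface Binding {
  // AST nodes in tree mode; a shared plan reference in compiled mode
  expression: AstNode | CompiledPlan;
  // a subscription to each watcher the expression depends on
  subscriptions: Watcher[];
}

const priceWatcher: Watcher = { field: "price", refCount: 1 };
const binding: Binding = {
  expression: { kind: "Identifier", children: [] },
  subscriptions: [priceWatcher],
};
```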

In tree mode, each binding holds its own copy of the full expression tree — cloned from the cached template. Memory grows with both binding count and expression complexity.

In compiled mode, all bindings of the same expression share a single compiled plan object. The plan is compiled once and stored in a plan cache that persists across bindings. Only the per-binding watcher entries are duplicated. For workloads where many bindings use the same expression string, compiled mode uses dramatically less memory.
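The plan-sharing described above can be sketched as a cache keyed by expression source. The cache shape and `compile()` are assumptions for illustration, not rs-x's actual implementation:

```typescript
// Minimal sketch of plan sharing, assuming a cache keyed by the
// expression string. compile() is a stand-in; real compilation would
// parse and lower the expression.
type Plan = { source: string; evaluate: (model: Record<string, number>) => number };

const planCache = new Map<string, Plan>();

function compile(source: string): Plan {
  return { source, evaluate: (model) => model.a + model.b };
}

function getPlan(source: string): Plan {
  let plan = planCache.get(source);
  if (!plan) {
    plan = compile(source);       // compiled once...
    planCache.set(source, plan);  // ...and cached for later bindings
  }
  return plan;                    // every binding shares this object
}

// Two bindings of the same expression share one plan object:
const p1 = getPlan("a + b");
const p2 = getPlan("a + b");
console.log(p1 === p2); // true: only per-binding watcher entries differ
```

This is why memory in compiled mode grows mostly with binding count rather than with expression complexity: the complexity lives in the shared plan, paid once per unique expression string.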

Compiled vs tree: memory comparison

The table below compares heap and RSS usage for compiled and tree mode across three binding scenarios: sync identifier, async identifier (Observable), and same-model generated expressions (1,000 unique complex expressions bound to the same model).

For simple identifier bindings, both modes use similar memory — the expression tree is a single node, so there is little to share. For complex generated expressions, the difference is stark: compiled mode uses ~515 MB vs ~1,500 MB in tree mode at 1,000 bindings — roughly 3× less.

| Scenario | Metric | Compiled heap (MB) | Tree heap (MB) | Compiled RSS (MB) | Tree RSS (MB) |
|---|---|---|---|---|---|
| Sync identifier | bind | 74 | 76 | 223 | 218 |
| Sync identifier | single update | 51 | 47 | 223 | 218 |
| Sync identifier | bulk update | 55 | 51 | 223 | 219 |
| Async identifier | bind | 165 | 170 | 720 | 730 |
| Async identifier | single update | 149 | 150 | 720 | 730 |
| Async identifier | bulk update | 152 | 153 | 720 | 730 |
| Same-model generated expressions | bind | 515 | 1500 | 1104 | 1735 |
| Same-model generated expressions | dispose | 515 | 1500 | 1104 | 1735 |
| Same-model generated expressions | single update | 512 | 1496 | 1104 | 1741 |
| Same-model generated expressions | bulk update | 533 | 1527 | 1105 | 1759 |

Conclusion: for applications binding many different complex expressions, compiled mode's plan-sharing makes a significant memory difference. For simple identifier-only bindings, the difference is negligible and either mode is fine.

Disposal: O(N) and predictable

Calling .dispose() on a binding decrements the reference count for each dependency watcher. When a watcher's count reaches zero it is removed. rs-x walks the binding graph in one pass — each removal is O(1) because watchers are stored in a Map. Total dispose cost for N bindings is O(N).
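The reference-counting walk described above can be sketched as follows. The `Map` of watchers matches the text; the function names are illustrative, not rs-x's API:

```typescript
// Sketch of reference-counted watcher disposal, assuming watchers are
// keyed by field name in a Map (as the text describes).
const watchers = new Map<string, { refCount: number }>();

function acquire(field: string): void {
  const w = watchers.get(field);
  if (w) w.refCount++;
  else watchers.set(field, { refCount: 1 });
}

function dispose(fields: string[]): void {
  // One pass over the binding's dependencies; each Map removal is O(1),
  // so disposing N bindings costs O(N) total.
  for (const field of fields) {
    const w = watchers.get(field);
    if (!w) continue;
    if (--w.refCount === 0) watchers.delete(field); // last consumer gone
  }
}

// Two bindings share the "price" watcher; it survives the first dispose.
acquire("price"); acquire("price");
dispose(["price"]);
console.log(watchers.has("price")); // true
dispose(["price"]);
console.log(watchers.has("price")); // false
```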

The table below shows bind time, dispose time, and GC time for progressively larger binding sets. Dispose time is several times faster than bind time (roughly 5–8× across these runs), and GC time after disposal is low — rs-x does not accumulate unreachable objects that require multiple GC cycles to collect.

| Bindings | Bind (ms) | Dispose (ms) | GC after (ms) | Heap after (MB) | RSS after (MB) |
|---|---|---|---|---|---|
| 1,000 | 106 | 14 | 19 | 75 | 292 |
| 2,000 | 314 | 64 | 49 | 234 | 479 |
| 3,000 | 696 | 104 | 101 | 490 | 749 |
| 4,000 | 1238 | 147 | 183 | 841 | 1129 |
| 10,000 | 6010 | 713 | 64 | 335 | 719 |

Conclusion: disposal is fast and memory is cleanly reclaimed. No manual teardown is needed beyond a single .dispose() call. GC runs quickly after disposal because the binding graph is fully disconnected.

Memory usage — full breakdown

The table below shows median heap and peak RSS across the measured parse scenarios.

| Scenario | Median heap (MB) | Peak RSS (MB) |
|---|---|---|
| Parse (1 node) | 20.7 | 104.5 |
| Parse (3 nodes) | 18.2 | 109.5 |
| Parse (7 nodes) | 18.7 | 110.0 |
| Parse (15 nodes) | 20.6 | 110.9 |
| Parse (31 nodes) | 21.0 | 129.0 |
| Parse (63 nodes) | 25.1 | 133.2 |