rs-x uses a reference-counted binding graph. Watchers are shared across bindings and released when the last consumer disposes. Memory grows predictably with binding count and expression complexity — and compiled mode significantly reduces memory for expressions that are bound many times. Measured on Apple M4, Node.js v25.4.0.
How rs-x uses memory
Every binding holds references to:

- the expression instance (AST nodes in tree mode; a compiled plan reference in compiled mode)
- one watcher entry per unique dependency field
- a subscription to each watcher it depends on
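As a rough sketch, the per-binding bookkeeping described above might be modeled like this. All names here are hypothetical and illustrative only, not rs-x's actual API:

```typescript
// Hypothetical model of what one binding references (not rs-x's real types).
type Watcher = {
  field: string;               // the dependency field this watcher observes
  refCount: number;            // how many bindings currently depend on it
  subscribers: Set<() => void>; // callbacks to run when the field changes
};

class Binding {
  constructor(
    // Expression instance: an AST root in tree mode,
    // or a reference to a shared compiled plan in compiled mode.
    readonly expression: object,
    // One watcher entry per unique dependency field.
    readonly watchers: Map<string, Watcher>,
    // One subscription registered with each watcher.
    readonly subscriptions: Array<() => void>,
  ) {}
}

// A binding of an expression with two dependency fields:
const onChange = () => {};
const binding = new Binding(
  { kind: "ast-root" },
  new Map([
    ["user.name", { field: "user.name", refCount: 1, subscribers: new Set([onChange]) }],
    ["user.age", { field: "user.age", refCount: 1, subscribers: new Set([onChange]) }],
  ]),
  [onChange, onChange],
);
```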
In tree mode, each binding holds its own copy of the full expression tree — cloned from the cached template. Memory grows with both binding count and expression complexity.
In compiled mode, all bindings of the same expression share a single compiled plan object. The plan is compiled once and stored in a plan cache that persists across bindings. Only the per-binding watcher entries are duplicated. For workloads where many bindings use the same expression string, compiled mode uses dramatically less memory.
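A minimal sketch of plan sharing via a cache keyed by expression source, assuming a simplified `getPlan` helper (the names and compile step here are hypothetical, not rs-x's real implementation):

```typescript
// Hypothetical plan cache: one compiled plan per unique expression string,
// shared by every binding of that expression.
type CompiledPlan = {
  source: string;
  evaluate: (model: Record<string, unknown>) => unknown;
};

const planCache = new Map<string, CompiledPlan>();

function getPlan(source: string): CompiledPlan {
  let plan = planCache.get(source);
  if (plan === undefined) {
    // Compile once; every later binding of the same source reuses this object.
    // (Stand-in "compiler": a plain property lookup.)
    plan = { source, evaluate: (model) => model[source] };
    planCache.set(source, plan);
  }
  return plan;
}

// Two bindings of the same expression string share a single plan object,
// so per-binding memory is limited to watcher entries and subscriptions:
const a = getPlan("user.name");
const b = getPlan("user.name");
console.assert(a === b);
```

The key design point is that the cache stores the plan by expression source and persists across bindings, so 1,000 bindings of the same expression compile it once and duplicate only their watcher entries.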
Compiled vs tree: memory comparison
The table below compares heap and RSS usage for compiled and tree mode across three binding scenarios: sync identifier, async identifier (Observable), and same-model generated expressions (1,000 unique complex expressions bound to the same model).
For simple identifier bindings, both modes use similar memory — the expression tree is a single node, so there is little to share. For complex generated expressions, the difference is stark: compiled mode uses ~515 MB vs ~1,500 MB in tree mode at 1,000 bindings — roughly 3× less.
| Scenario | Metric | Compiled heap (MB) | Tree heap (MB) | Compiled RSS (MB) | Tree RSS (MB) |
| --- | --- | ---: | ---: | ---: | ---: |
| Sync identifier | bind | 74 | 76 | 223 | 218 |
| Sync identifier | single update | 51 | 47 | 223 | 218 |
| Sync identifier | bulk update | 55 | 51 | 223 | 219 |
| Async identifier | bind | 165 | 170 | 720 | 730 |
| Async identifier | single update | 149 | 150 | 720 | 730 |
| Async identifier | bulk update | 152 | 153 | 720 | 730 |
| Same-model generated expressions | bind | 515 | 1500 | 1104 | 1735 |
| Same-model generated expressions | dispose | 515 | 1500 | 1104 | 1735 |
| Same-model generated expressions | single update | 512 | 1496 | 1104 | 1741 |
| Same-model generated expressions | bulk update | 533 | 1527 | 1105 | 1759 |
Conclusion: for applications binding many different complex expressions, compiled mode's plan-sharing makes a significant memory difference. For simple identifier-only bindings, the difference is negligible and either mode is fine.
Disposal: O(N) and predictable
Calling .dispose() on a binding decrements the reference count for each dependency watcher. When a watcher's count reaches zero it is removed. rs-x walks the binding graph in one pass — each removal is O(1) because watchers are stored in a Map. Total dispose cost for N bindings is O(N).
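The refcounting behavior described above can be sketched as follows. This is an illustrative model under the stated assumptions (Map-backed watcher store, O(1) removal), not rs-x's internal code:

```typescript
// Hypothetical refcounted watcher store. Watchers live in a Map, so
// acquiring and releasing a field is O(1); disposing a binding releases
// each of its dependency fields, and disposing N bindings is O(N) overall.
const watchers = new Map<string, { refCount: number }>();

function acquire(field: string): void {
  const w = watchers.get(field);
  if (w) w.refCount++;
  else watchers.set(field, { refCount: 1 });
}

function release(field: string): void {
  const w = watchers.get(field);
  if (!w) return;
  // When the last consumer releases, the watcher is removed entirely,
  // leaving nothing behind for later GC cycles to chase.
  if (--w.refCount === 0) watchers.delete(field);
}

// Two bindings share the watcher for "a"; one also watches "b".
acquire("a"); acquire("a"); acquire("b");
release("a");                // "a" still has one consumer
console.assert(watchers.has("a"));
release("a"); release("b");  // counts reach zero: watchers removed
console.assert(watchers.size === 0);
```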
The table below shows bind time, dispose time, and GC time for progressively larger binding sets. Dispose is roughly 5–8× faster than bind across the measured sizes, and GC time after disposal stays low: rs-x does not accumulate unreachable objects that require multiple GC cycles to collect.
| Bindings | Bind (ms) | Dispose (ms) | GC after (ms) | Heap after (MB) | RSS after (MB) |
| ---: | ---: | ---: | ---: | ---: | ---: |
| 1,000 | 106 | 14 | 19 | 75 | 292 |
| 2,000 | 314 | 64 | 49 | 234 | 479 |
| 3,000 | 696 | 104 | 101 | 490 | 749 |
| 4,000 | 1238 | 147 | 183 | 841 | 1129 |
| 10,000 | 6010 | 713 | 64 | 335 | 719 |
Conclusion: disposal is fast and memory is cleanly reclaimed. No manual teardown is needed beyond a single .dispose() call. GC runs quickly after disposal because the binding graph is fully disconnected.
Memory usage — full breakdown
The tables below show median heap and peak RSS across all measured scenarios and binding counts.