Parsing turns an expression string into an AST. rs-x does this exactly once per unique expression string, then caches the result. Every subsequent binding clones the cached AST; a clone costs a fraction of a full parse. Measured on Apple M4, Node.js v25.4.0, 5,000 operations per sample.
Raw parse speed
Parse time grows roughly linearly with expression size — each additional node costs a fixed amount of work. A single-identifier expression like price parses in under 1 µs. A 63-node arithmetic expression takes about 36 µs. Because parsing happens once per unique string, this cost is paid at cold-start only.
| Nodes | Expression shape    | µs / op | ops / sec |
|------:|---------------------|--------:|----------:|
| 1     | v0                  | 0.731   | 1,368,847 |
| 3     | v0 + v1             | 1.866   | 535,839   |
| 7     | v0 + v1 + v2 + v3   | 4.095   | 244,205   |
| 15    | v0 + ... + v7       | 8.486   | 117,836   |
| 31    | v0 + ... + v15      | 17.528  | 57,051    |
| 63    | v0 + ... + v31      | 35.618  | 28,076    |
Conclusion: parsing is fast and predictable. For typical application expressions (3–15 nodes), parse time is 1–9 µs. The parse cache means this cost is only paid once — not once per binding.
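To make the linear-cost claim concrete, here is a minimal sketch of what "parsing into an AST" means for the sum expressions benchmarked above. This is not rs-x's actual parser, just an illustration of why cost scales with node count: each identifier and each + becomes one AST node.

```javascript
// Toy parser for sums of identifiers, e.g. "v0 + v1 + v2".
// NOT rs-x's parser -- an illustration of linear node cost only.
function parseSum(src) {
  const tokens = src.split(/\s*\+\s*/); // identifiers separated by "+"
  // Fold left: "v0 + v1 + v2" => Add(Add(v0, v1), v2)
  return tokens
    .map((name) => ({ type: 'Identifier', name }))
    .reduce((left, right) => ({ type: 'Add', left, right }));
}

function countNodes(ast) {
  if (ast.type === 'Identifier') return 1;
  return 1 + countNodes(ast.left) + countNodes(ast.right);
}

console.log(countNodes(parseSum('v0 + v1 + v2 + v3'))); // 7 nodes, as in the table
```

An expression with n identifiers produces 2n − 1 nodes (n identifiers plus n − 1 additions), which is why the table's node counts are 1, 3, 7, 15, 31, 63.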
Parse cache: parse-and-clone vs clone-only
The first time an expression is bound, rs-x parses the string and stores the resulting AST in an in-memory cache. Every subsequent binding clones the cached AST. The chart below compares the cost of those two paths.
Parse-and-clone — the cold path: parse the expression string, store in cache, return a clone. Clone-only — the warm path: retrieve from cache, return a clone. No parsing.
Clone-only cost ranges from roughly half the parse-and-clone cost for a single-node expression to about 85% for the largest ones measured: faster, but still not free, because cloning a deep tree requires allocating and copying every node. For compiled mode the clone is replaced by a lightweight plan lookup, which is much cheaper.
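The two paths can be sketched in a few lines. This is a simplified model of the pattern described above, not rs-x's internals; the cache key is the raw expression string, and the parser is a stand-in for the expensive step being cached.

```javascript
// Minimal sketch of the parse-cache pattern (illustrative, not rs-x's code).
const cache = new Map();

function parse(src) {
  // Stand-in parser: real parsing is the expensive work being cached.
  return src
    .split(/\s*\+\s*/)
    .map((name) => ({ type: 'Identifier', name }))
    .reduce((left, right) => ({ type: 'Add', left, right }));
}

function deepClone(node) {
  // Every binding gets its own mutable copy of the cached AST.
  return node.type === 'Identifier'
    ? { ...node }
    : { type: node.type, left: deepClone(node.left), right: deepClone(node.right) };
}

function getAst(src) {
  let ast = cache.get(src);
  if (!ast) {            // cold path: parse-and-clone
    ast = parse(src);
    cache.set(src, ast);
  }
  return deepClone(ast); // warm path pays only this clone
}
```

Two calls with the same string return structurally equal but distinct ASTs: the second call skips the parse entirely and pays only the clone cost.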
| Nodes | Parse + clone (µs) | Clone only (µs) | Clone saving |
|------:|-------------------:|----------------:|-------------:|
| 1     | 1.122              | 0.543           | 52%          |
| 3     | 2.403              | 1.587           | 34%          |
| 7     | 4.707              | 3.580           | 24%          |
| 15    | 9.500              | 7.587           | 20%          |
| 31    | 19.2               | 15.7            | 18%          |
| 63    | 38.3               | 32.3            | 16%          |
Conclusion: the cache eliminates repeat parse work. After the first binding, every subsequent binding with the same expression string pays only the clone cost. For large-scale applications with many bindings per unique expression, the cache is critical to keeping bind time low.
Controlling when parsing happens
By default, an expression is parsed the first time it is bound. Two options let you control this timing:
preparse: true — parse and cache the expression at module load time, before any binding is created. The first binding pays only the clone cost, not the parse cost. Use this for expressions that are always used on page load and where you want predictable, consistently fast first-bind latency.
lazy: true — defer even the preparse until the expression is first accessed. Useful in large applications where many expressions are registered at startup but only a fraction are used on any given page. The lazy preload happens asynchronously on first access, spreading parse work over time rather than front-loading it.
rsx('price * quantity', { lazy: true })(model)
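The two timing modes reduce to eager versus deferred parsing. The registry below is a hypothetical sketch of that distinction, not rs-x's implementation: preparse pays the parse cost at registration, while the default defers it behind a one-shot thunk resolved on first access.

```javascript
// Sketch of eager vs deferred parsing (illustrative only; rs-x's real
// options are `preparse` and `lazy` as described above).
const parsed = [];                              // records when parsing ran
const fakeParse = (src) => (parsed.push(src), { src });

function register(src, { preparse = false } = {}) {
  let ast = preparse ? fakeParse(src) : null;   // eager: pay parse cost now
  return {
    get() {                                     // deferred: pay on first access
      if (!ast) ast = fakeParse(src);
      return ast;                               // later accesses reuse the AST
    },
  };
}
```

With preparse the expression is parsed before any binding exists; without it, registration is nearly free and the parse happens only if the expression is ever accessed, which is the behavior the lazy option builds on.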
The AOT compiler (run at build time) can pre-generate parse caches and compiled plans for all expressions it detects in your source, so the first binding on the client pays neither parse nor compile cost.