Question Details

No question body available.

Tags

javascript performance functional-programming v8

Answers (6)

February 22, 2026 Score: 1 Rep: 444 Quality: Low Completeness: 60%

V8 doesn't treat curried functions specially. Each a => b => c => d => ... creates a new closure at every step, so addFourCurry(e)(e)(e)(e) allocates 3 intermediate functions per iteration. The uncurried version is a single call with no extra allocations, so it's strictly cheaper unless everything gets inlined and optimized away.
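The allocation cost is easy to make visible. Here's a small sketch (my own instrumentation, not anything V8 reports) that counts the intermediate closures one saturated curried call creates:

```javascript
// Each arrow in the curried chain is a separate function object, so one
// saturated call addFourCurry(e)(e)(e)(e) allocates three intermediate
// closures before the final sum is computed.
let allocations = 0;

const addFourCurry = a => {
  allocations++; // the `b => ...` closure is created on this call
  return b => {
    allocations++; // the `c => ...` closure
    return c => {
      allocations++; // the `d => ...` closure
      return d => a + b + c + d;
    };
  };
};

const addFourUncurried = (a, b, c, d) => a + b + c + d;

console.log(addFourCurry(1)(2)(3)(4));     // 10
console.log(allocations);                  // 3 closures for one call
console.log(addFourUncurried(1, 2, 3, 4)); // 10, no intermediate closures
```

Whether the optimizer can prove those closures don't escape (and elide them) is exactly the inlining question below.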

For direct calls, TurboFan can inline small functions if the call site is monomorphic and hot. The uncurried version is much easier to inline; the curried chain requires inlining multiple nested closures and escape analysis to remove allocations. Sometimes it succeeds, often it won't in real-world code.

For indirect calls (f(...)), V8 uses inline caches and tracks the target shape, but it does not perform arity-based stack reshaping like ML compilers. Saturated curried calls are still separate calls returning functions. So f(e)(e)(e)(e) stays multiple dynamic calls unless fully inlined.

In practice: curried style in JS is ergonomics, not a zero-cost abstraction. If performance matters in hot paths, prefer uncurried functions.

February 22, 2026 Score: 0 Rep: 42,270 Quality: Medium Completeness: 60%

(V8 developer here.)

V8 has no special handling for "curried functions", so I would assume the curried version to be quite inefficient: as Aniket K already describes, addFourCurry isn't one function; it's a function that returns a function that returns a function that returns a function that computes a sum, and all those function creations are going to be massively more expensive than a handful of additions. Keep in mind that JavaScript functions are (a special kind of) objects: they have identity, properties, prototypes, etc., like other objects, so in many cases they cannot be optimized out -- at least not without whole-program analysis, which is generally infeasible for dynamic languages.

That said, V8 does have a fairly powerful optimizing compiler, which has evolved over the years to support many of JavaScript's dynamic shenanigans, including optimizing out unnecessary function creations in some cases. I was curious how well it would deal with your examples :)

So I've rephrased all four cases into a microbenchmark to make them directly comparable. I've replaced console.log (which is too slow in comparison, overshadowing everything else) with returning the sum of all values, and then I also check that result to prevent any dead-code elimination:

const N = 10000;
const example = new Array(N).fill(0).map((_, i) => i + 1);
const exampleSum = N * (N + 1) / 2;

const addFourCurry = a => b => c => d => a + b + c + d;

const addFourUncurried = (a, b, c, d) => a + b + c + d;

function DirectCurry(list) { let sum = 0; for (let i = 0; i < list.length; i++) { let e = list[i]; sum += addFourCurry(e)(e)(e)(e); } return sum; }

function DirectUncurried(list) { let sum = 0; for (let i = 0; i < list.length; i++) { let e = list[i]; sum += addFourUncurried(e, e, e, e); } return sum; }

function IndirectCurry(f, list) { let sum = 0; for (let i = 0; i < list.length; i++) { let e = list[i]; sum += f(e)(e)(e)(e); } return sum; }

function IndirectUncurried(f, list) { let sum = 0; for (let i = 0; i < list.length; i++) { let e = list[i]; sum += f(e, e, e, e); } return sum; }

const kRuns = 5_000;
const expected = exampleSum * kRuns * 4;

let t0 = Date.now();
let sum = 0;
for (let i = 0; i < kRuns; i++) sum += DirectCurry(example);
if (sum != expected) throw `wanted ${expected}, got ${sum}`;

let t1 = Date.now();
sum = 0;
for (let i = 0; i < kRuns; i++) sum += DirectUncurried(example);
if (sum != expected) throw `wanted ${expected}, got ${sum}`;

let t2 = Date.now();
sum = 0;
for (let i = 0; i < kRuns; i++) sum += IndirectCurry(addFourCurry, example);
if (sum != expected) throw `wanted ${expected}, got ${sum}`;

let t3 = Date.now();
sum = 0;
for (let i = 0; i < kRuns; i++) sum += IndirectUncurried(addFourUncurried, example);
if (sum != expected) throw `wanted ${expected}, got ${sum}`;

let t4 = Date.now();

console.log(`DirectCurry took ${t1 - t0} ms.`);
console.log(`DirectUncurried took ${t2 - t1} ms.`);
console.log(`IndirectCurry took ${t3 - t2} ms.`);
console.log(`IndirectUncurried took ${t4 - t3} ms.`);

Result (fluctuating by about ±1 ms when running repeatedly):

DirectCurry took 23 ms.
DirectUncurried took 21 ms.
IndirectCurry took 25 ms.
IndirectUncurried took 21 ms.

They don't generate the same machine code, because the respective optimized code needs to perform different checks every time it runs to ensure that the optimizations contained in it are still safe, and there does appear to be a small cost to the curried versions. But by and large these numbers are close enough to each other that I'd say "write whichever style you prefer, it'll be fine".

However, if you toy around with that microbenchmark a bit, you'll find that it's lying to you, and you can't draw any reliable conclusions from it. That shouldn't be surprising; this just happens to be yet another great example illustrating the pitfalls of microbenchmarks.
Let's change the array length and the loop count in a way that should not change the result: N = 50000; kRuns = 1000 instead of N = 10000; kRuns = 5000:

DirectCurry took 139 ms.
DirectUncurried took 33 ms.
IndirectCurry took 138 ms.
IndirectUncurried took 32 ms.

Whoa! When something throws off the optimizer (in this case it has to do with ranges of numbers, but in other scenarios it could be all sorts of other things), you'll see the cost of the curried versions quite clearly. I have, frankly, not bothered to check whether all four functions are created every time; but it's quite easy to see using --trace-gc or other profiling that both curried functions are causing a bunch of GC cycles, whereas the uncurried versions don't allocate enough to cause even a single GC cycle. (You might expect them to allocate nothing; they will in fact allocate a few heap numbers, but not very many and that doesn't matter very much.)

When such a small tweak to the benchmark causes such a dramatic change in its results, then the primary takeaway becomes: studying this tiny snippet in isolation cannot produce reliable advice regarding what you should do in real/large applications.


So, to answer your original questions more directly:

Does V8 optimize direct/indirect calls?

Whether a call is "direct" or "indirect" in itself doesn't make a difference (as you can see in my benchmark results). What does matter (but isn't reflected in this particular benchmark) is whether the call is monomorphic: if the same function ...(..., f, ...) { ...; f(); ... } encounters different values of f, then the call won't be inlined any more, because inlining is expensive, and it becomes too difficult for the optimizer to guess which f it should inline for reasonable bang-for-buck ratio. That can have a massive impact on performance, especially when the functions are tiny. Large functions usually don't get inlined anyway, so there are also cases where this doesn't matter.
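To make the monomorphism point concrete (a conceptual sketch of my own; the inline-cache states themselves aren't observable from JavaScript):

```javascript
// `apply4` has a single call site for `f`. If that site only ever sees
// one function, it stays monomorphic and V8 can consider inlining the
// target; feeding it several different functions makes the site
// polymorphic and inlining much less likely.
const apply4 = (f, e) => f(e, e, e, e);

const add4 = (a, b, c, d) => a + b + c + d;
const mul4 = (a, b, c, d) => a * b * c * d;

// Monomorphic: the call site inside apply4 only ever sees `add4`.
for (let i = 0; i < 1000; i++) apply4(add4, i);

// Polymorphic: the very same call site now alternates between two targets.
for (let i = 0; i < 1000; i++) apply4(i % 2 ? add4 : mul4, i);
```

Both loops compute the same kind of thing; only the variety of values flowing into `f` differs, and that alone can change what the optimizer does with the call site.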

Assuming that they're not inlined, do both of their calls generate the same or at least similar machine code?

If they're not inlined, then it's one call vs. four, so the code is very different.

Does currying change whether either example is inlined?

Depends. As my experimental results show, it can prevent inlining, but won't always.

Here foreachFourCurry necessarily has to be slower because it's calling a function with an unknown arity

No, see above. As a mental model, you can assume that in JavaScript every function call is an indirect call to a function with unknown arity. Anything else is a rare exception created by sufficiently advanced optimizers. From a quick glance, the paper you referenced is largely not applicable to JavaScript (at least not to V8), because language semantics are different: any function can be called with too few arguments, and that's not an error, you just get undefined as the value of the other parameters; any function can also be called with too many arguments, in which case they get ignored, unless the called function uses the arguments array to access them. There's never a situation of "here's a function and ten things, and the function may consume any number of them". You either write f(1) or f(1, 2), and when you write f(1), 2 then the call to f and the 2 have nothing to do with each other.
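These adaptation semantics are easy to demonstrate directly:

```javascript
// Calling with too few arguments is not an error: the missing
// parameters are simply undefined. Extra arguments are ignored,
// unless the callee inspects them (e.g. via `arguments`).
function add2(a, b) { return a + b; }

console.log(add2(1));          // NaN: b is undefined, 1 + undefined is NaN
console.log(add2(1, 2, 3, 4)); // 3: the extra arguments are ignored

function count() { return arguments.length; }
console.log(count(1, 2, 3, 4)); // 4: `arguments` still sees all of them
```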

Is the arity of a function tracked and used for saturated calls?

Yes, function calls track at runtime whether they had to adapt the number of arguments before (and optimized code then uses this information), but this happens for all calls, so it's not a difference between the scenarios you're comparing.

What can I expect in general of directly or indirectly calling curried functions?

Wasted memory and degraded performance, unless the overall situation happened to be simple enough for the optimizing compiler to inline all of it and optimize out the unnecessary function creations.
Of course, for sufficiently small scripts that might not matter. You can afford a lot of inefficiency within time frames that are barely measurable, and certainly not noticeable, if your programs and the data they operate on are small enough. In my benchmark above, I had to sum up 50 million array elements just to get the overall time taken into the two-digit-millisecond range! If your arrays only contain 100 elements, you won't be able to measure a difference between any of the approaches.

February 22, 2026 Score: 0 Rep: 415,357 Quality: Low Completeness: 10%

Also, because JavaScript is not a lazily evaluated language, the term "currying" does not really apply. There's no real difference, in terms of how the code runs, between a function that happens to reference a symbol from an enclosing lexical scope and one that doesn't.

February 22, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 0%

It still does apply. OCaml and SML are strict impure functional languages and they still use the term. Currying can be done in any language that has closures.

February 22, 2026 Score: 0 Rep: 42,270 Quality: Low Completeness: 0%

Whether everything gets inlined and optimized away or not is the key question, isn't it?

February 22, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 80%

I wish I could mark this as the correct answer, but apparently I asked the wrong question type (see: Can an Advice post be changed into a QA post (or vice versa)?).

From a quick glance, the paper you referenced is largely not applicable to JavaScript (at least not to V8), because language semantics are different: any function can be called with too few arguments, and that's not an error, you just get undefined as the value of the other parameters; any function can also be called with too many arguments, in which case they get ignored, unless the called function uses the arguments array to access them.

That paper is about optimizing curried functions. It applies to any language that supports currying (and therefore any language that has closures). A "saturated call" here means applying a curried function to all of its arguments.

Consider something like this: f(1,2)(3)(4,5). If a programming language's runtime, like V8, knew that f was declared as (a,b) => c => (d, e) => ..., it could optimize this into a single normal call. If f was declared as (a,b) => c => ..., it could only partially do that.

Yes, function calls track at runtime whether they had to adapt the number of arguments before (and optimized code then uses this information), but this happens for all calls, so it's not a difference between the scenarios you're comparing.

Following on from my previous comment: by arity here I meant the arity of the curried function. (x,y,z) => ... would have arity 1, while x => y => z => ... would have arity 3. I guess I should have been more clear.
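For what it's worth, JavaScript itself only exposes the per-step parameter count, not the length of the curried chain (my own illustration):

```javascript
// Function.prototype.length reports the declared parameter count of one
// step only. The number of applications a curried chain needs before it
// produces a value is not tracked anywhere the runtime can see.
const uncurried = (x, y, z) => x + y + z;
const curried = x => y => z => x + y + z;

console.log(uncurried.length);  // 3: three parameters in a single step
console.log(curried.length);    // 1: each step takes one argument
console.log(curried(1).length); // 1: and so does the next step
```

So the "curried arity" the paper relies on is exactly the information V8 doesn't have at a call site.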