What are Generator Functions?
In JavaScript, generator functions look like regular functions with two exceptions:
- They have an asterisk between the `function` keyword and the function name.
- They use the `yield` keyword within the function body.
```js
function* range(from, to) {
  for (let i = from; i <= to; i++) {
    yield i;
  }
}
```
These functions return an iterable, meaning that you can loop over the result using a `for...of` loop.
```js
for (const step of range(1, 16)) {
  console.info(`Completing step ${step} of 16`);
}
```
Generator functions aren’t unique to JavaScript. If you’ve programmed in Python or C# then you’ve perhaps come across them.
What Makes Generators Great
While you can do funky things with generators, like generating infinite sequences, their practical appeal stems from:
- Memory efficiency
- Lazy evaluation
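As a quick aside, here is a minimal sketch of one of those “funky things”, an infinite sequence (an illustration, not one of the examples below):

```js
// An endless sequence of integers; the consumer decides when to stop
function* naturals() {
  let n = 0;
  while (true) {
    yield n++;
  }
}

for (const n of naturals()) {
  if (n > 2) break;
  console.info(n); // logs 0, 1, 2
}
```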
To establish that claim, let’s compare generators with the `filter` and `map` functions available on Array instances. Imagine that we have a large list of elements that we want to perform a series of operations on.
```js
const players = getAllPlayers();
players.map(enrichWithHighScores)
  .filter(p => p.highScore >= 1000)
  .map(enrichWithPlaytime)
  .map(enrichWithGeoInfo)
  .sort((a, b) => a.highScore - b.highScore)
  .forEach(showPlayer);
```
Each invocation of `map` and `filter` creates a new array. In the example above, that’s 4 arrays. Whether that’s going to be a problem obviously depends on the size of the arrays. Usually this is fine. But when it’s not, generators provide an elegant alternative.
```js
function* getEnrichedPlayers() {
  for (let player of getAllPlayers()) {
    player = enrichWithHighScores(player);
    if (player.highScore >= 1000) {
      player = enrichWithPlaytime(player);
      player = enrichWithGeoInfo(player);
      yield player;
    }
  }
}

const players = Array.from(getEnrichedPlayers());
players.sort((a, b) => a.highScore - b.highScore);
players.forEach(showPlayer);
```
Instead of 4 arrays, the example above creates just 1 array (for sorting) which reduces memory consumption. But generators can also help you avoid doing extra work. Imagine that you want to look for a single element, but first you must perform some work on those elements.
```js
const games = getGamesPlayed(); // returns an array
const found = games.map(partitionIntoActivitySegments)
  .find(game => game.segments.length >= 25);
```
In the example above, let’s assume that `partitionIntoActivitySegments` is a computationally intensive task. Since `Array.prototype.map` is an eager operation, it will call the expensive function for each and every element before we even begin our search. Now imagine that the first element in the sequence satisfies our search condition. We have just wasted precious CPU cycles partitioning a bunch of games into segments when we didn’t need to! But if we wrapped that in a generator we would have:
```js
function* getPartitionedGames() {
  for (const game of getGamesPlayed()) {
    yield partitionIntoActivitySegments(game);
  }
}

function find(iterable, predicate) {
  for (const element of iterable) {
    if (predicate(element)) {
      return element;
    }
  }
}

const found = find(getPartitionedGames(), game => game.segments.length >= 25);
```
In the example above, no work is wasted because we return from the for loop as soon as we’ve found the element of interest.
Yes, these examples are a bit contrived, but the point is this: if you put generator functions at the core of how you work with large data collections, you may end up saving both memory and compute as you start layering functions that operate on your data.
Note 🔖
If you’re thinking: it sure would be neat if we could use `map` and `filter` on iterables (such as generator functions), then you’re not alone. There are at least three proposals to bring that into JavaScript in one form or another.
- Sync Iterator Helpers (Stage 3)
- Async Iterator Helpers (Stage 2)
- Pipeline Operator (Stage 2)
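To give a sense of what that would look like, here is roughly how the sync iterator helpers are proposed to work, reusing the `range` generator from earlier (this assumes an engine or polyfill that implements the proposal):

```js
// With iterator helpers, generator objects gain chainable, lazy methods
const firstSquares = range(1, 1000)
  .filter(n => n % 2 === 0) // keep even numbers (lazily)
  .map(n => n * n)          // square them (lazily)
  .take(5)                  // stop pulling after five results
  .toArray();               // [4, 16, 36, 64, 100]
```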
In the meantime, libraries exist that make it possible to chain iterator operations together.
So What’s the Problem?
Turns out that generator functions come with quite a bit of overhead. But before we start looking at the numbers, let me explain why I care about JavaScript performance. I work for GRID. A key component of GRID is a very capable spreadsheet engine that runs entirely in the browser. Below is an example of a GRID document. I invite you to play around with the sliders.
The calculations that underlie the chart come from a spreadsheet. Those calculations need to be fast in order for the chart to update smoothly.
Note 🔖
If you’d like to learn about a previous performance optimization we did, then I recommend reading this article by my brilliant colleague Alex Harri.
First Realization: Iterators Are Faster
I was working on code that was consuming data from generator functions and found myself longing for general-purpose functions like `filter`, `map`, `take`, `skip` and `find`. Instead of writing code like this:
```js
let firstNonEmpty;
for (const cell of getCellsInRange(range)) {
  if (!isBlank(cell)) {
    firstNonEmpty = cell;
    break;
  }
}
```
I wanted to write code like this:
```js
const firstNonEmpty = find(getCellsInRange(range), c => !isBlank(c));
```
While implementing these functions with generators is straightforward, I decided to look at existing libraries to see if there was anything out there that we could use. I noticed that some libraries were iterator-based and claimed superior performance. So I decided to gather some data. I created two implementations of the `skip` function, one as a generator function and the other returning an iterator.
```js
// Generator implementation
function* skipGen(howMany, sequence) {
  let counter = 0;
  for (const element of sequence) {
    if (counter >= howMany) {
      yield element;
    }
    counter++;
  }
}

// Iterator implementation
function skipIt(howMany, sequence) {
  return {
    [Symbol.iterator]() {
      const iterator = sequence[Symbol.iterator]();
      for (let i = 0; i < howMany; i++) {
        iterator.next();
      }
      return iterator;
    },
  };
}
```
I ran a simple benchmark that skips the first 5 elements from a sequence of 100,000 elements. The following results were obtained on a laptop with an Apple M1 Pro chip.
As the chart illustrates, the iterator version is twice as fast as the generator version 🤯
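The exact benchmark code isn’t included here, but a minimal harness along these lines shows the same shape of result (numbers will vary by engine and machine):

```js
const data = Array.from({ length: 100_000 }, (_, i) => i);

function bench(label, fn) {
  const start = performance.now();
  for (let i = 0; i < 100; i++) {
    fn();
  }
  console.info(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
}

// Consume each sequence fully so laziness doesn't hide the work
bench('skipGen', () => {
  let sum = 0;
  for (const n of skipGen(5, data)) sum += n;
  return sum;
});
bench('skipIt', () => {
  let sum = 0;
  for (const n of skipIt(5, data)) sum += n;
  return sum;
});
```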
Note 🔖
This article by Aleksei Berezkin dives deeper into iterator vs generator performance and provides potential explanations for why generators are at a disadvantage.
Later, my partner in crime Gunnlaugur Briem started converting generator functions into iterator functions along the hot paths of our spreadsheet engine. As I reviewed his PRs, I couldn’t resist joining in on the fun. Early results were promising, showing an average 12% reduction in recalculation times over a corpus of more than 2800 spreadsheets. But we wanted to see if we could squeeze out even more performance. We tested different kinds of iterators (closures vs classes) and different methods of gathering the iterator results into arrays. We created lots of small benchmarks to try to build a better understanding of the performance characteristics of modern JavaScript engines. The results weren’t always consistent or intuitive, but it was this endeavor that led us to our next discovery.
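As an example of those micro-variations, here are three common ways of gathering an iterator’s results into an array; which one is fastest can vary by engine and input, consistent with the inconsistent results mentioned above (an illustrative sketch, not the original benchmarks):

```js
// Three ways of collecting an iterable into an array
const viaSpread = iterable => [...iterable];
const viaFrom = iterable => Array.from(iterable);
const viaPush = iterable => {
  const result = [];
  for (const element of iterable) {
    result.push(element);
  }
  return result;
};
```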
Second Realization: Callbacks Are Faster Still
Let’s take a look at three different implementations of the `filter` function:
- A generator function
- A function returning an iterator
- A function accepting a callback
```js
// Generator implementation
function* filterGen(sequence, predicate) {
  for (const element of sequence) {
    if (predicate(element)) {
      yield element;
    }
  }
}

// Iterator implementation
function filterIt(sequence, predicate) {
  return {
    [Symbol.iterator]() {
      const itr = sequence[Symbol.iterator]();
      return {
        next() {
          let current = itr.next();
          while (!current.done && !predicate(current.value)) {
            current = itr.next();
          }
          return current;
        },
      };
    },
  };
}

// Callback implementation
function filterCb(sequence, predicate, callback) {
  for (const element of sequence) {
    if (predicate(element)) {
      callback(element);
    }
  }
}
```
Notice how the callback variant is almost as simple as the generator function, whereas the iterator variant is a lot more obscure. It’s likable already, but how does it perform? To find out, let’s run a benchmark that filters out odd numbers from a sequence of 100,000 integers.
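Note that the variants are consumed differently: the iterable versions are pulled from the outside, while the callback version pushes results to the caller (a sketch of the calling conventions, not the exact benchmark code):

```js
const numbers = Array.from({ length: 100_000 }, (_, i) => i);
const isEven = n => n % 2 === 0;

// Iterable variants: pull values out with spread or a loop
const evensFromGen = [...filterGen(numbers, isEven)];
const evensFromIt = [...filterIt(numbers, isEven)];

// Callback variant: results are delivered to the callback
const evensFromCb = [];
filterCb(numbers, isEven, n => evensFromCb.push(n));
```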
That’s a spectacular result. The callback version is even faster than the native `Array.prototype.filter` function! 🤯
Seems like we have a clear-cut winner. But there’s a slight problem that we need to address.
The Drawback: Early Stopping
With iterables, early stopping is as simple as breaking out of a loop. With the callback pattern, the caller is no longer in direct control of the loop. To address this problem, we introduce a `Signal` object that allows the callback to indicate when it’s time to stop. A signal is really just a boolean value that is passed by reference.
```js
class Signal {
  constructor() {
    this._set = false;
  }
  set() {
    this._set = true;
  }
  get isSet() {
    return this._set;
  }
}
```
We now alter the callback version of the `filter` function to check the stop signal:
```js
function filterCb(sequence, predicate, callback) {
  const stopSignal = new Signal();
  for (const element of sequence) {
    if (predicate(element)) {
      callback(element, stopSignal);
      if (stopSignal.isSet) {
        return;
      }
    }
  }
}

// Here's a simple example of how it can be used
const results = [];
const isEven = n => n % 2 === 0;
filterCb(bunchOfNumbers, isEven, (number, stopSignal) => {
  results.push(number);
  if (results.length === 3) {
    stopSignal.set();
  }
});
```
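The signal also makes it straightforward to build early-stopping helpers on top. For example, a `find` might look like this (a sketch, not code from our engine):

```js
function findCb(sequence, predicate) {
  let found;
  filterCb(sequence, predicate, (element, stopSignal) => {
    found = element;
    stopSignal.set(); // stop the loop at the first match
  });
  return found;
}

const firstBigNumber = findCb(bunchOfNumbers, n => n > 1000);
```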
This takes away some of the elegance of the callback pattern. But it’s an acceptable tradeoff for code that needs to be performant. Checking the signal after every callback invocation does add a bit of overhead. If we add this version to our benchmark, we get the following results:
As expected, the callback pattern suffers slightly from the overhead of checking the signal, but it’s still plenty fast. But these are synthetic benchmarks. What about real-world performance?
Application and results
At GRID, we’ve replaced generator functions with callbacks in three different areas of our spreadsheet engine. Two of those were intended to improve recalculation performance whereas the third affected initialization performance. We measured the impact of these changes using an internal tool that looks at public GRID documents and measures both initialization and recalculation time.
1. R-tree
We use R-trees to store cell ranges in a couple of places. Our cell storage layer uses them for spill ranges and our dependency graph uses them to store range references (e.g. `A2:F30`). As this is a foundational data structure for recalculation, we suspected that this would yield a nice performance boost. We were not disappointed, as recalc time was reduced by 21.2% on average.
```text
Proportional differences from baseline (for 2718 matching pairs where either value ≥ 50.0 ms)
Median                   -22.3%
Weighted geometric mean  -21.2%
Min -68.5% | 1% -49.0% | 10% -37.8% | 90% -10.7% | 99% -5.15% | Max +4.01%
99.7% decreased to less than 0.97x, 0.04% increased to more than 1.03x
96.6% decreased to less than 0.93x, 0 increased to more than 1.08x
31.5% decreased to less than 0.71x, 0 increased to more than 1.4x
0.48% decreased to less than 0.5x, 0 increased to more than 2x
0 decreased to less than 0.1x, 0 increased to more than 10x
```
2. Dependency Graph
Our spreadsheet engine maintains a dependency graph to determine which formulas need to be re-evaluated when a cell changes. A fair amount of time is therefore spent traversing this graph during recalculation. By converting the graph traversal functions to use callbacks, we achieved a further 11.9% reduction in recalc time.
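The engine code itself is out of scope for this article, but the shape of such a conversion is roughly this (a hypothetical sketch; `graph.dependentsOf` is an assumed API):

```js
// Hypothetical sketch: where the generator version would `yield node`,
// this version hands each node to a callback instead
function visitDependents(graph, start, callback) {
  const seen = new Set([start]);
  const stack = [start];
  while (stack.length > 0) {
    const node = stack.pop();
    callback(node);
    for (const dependent of graph.dependentsOf(node)) {
      if (!seen.has(dependent)) {
        seen.add(dependent);
        stack.push(dependent);
      }
    }
  }
}
```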
```text
Proportional differences from baseline (for 1346 matching pairs where either value ≥ 50.0 ms)
Median                   -17.6%
Weighted geometric mean  -11.9%
Min -62.5% | 1% -30.8% | 10% -22.8% | 90% -2.75% | 99% +11.4% | Max +73.0%
89.5% decreased to less than 0.97x, 3.1% increased to more than 1.03x
77.3% decreased to less than 0.93x, 1.8% increased to more than 1.08x
1.9% decreased to less than 0.71x, 0.22% increased to more than 1.4x
0.30% decreased to less than 0.5x, 0 increased to more than 2x
0 decreased to less than 0.1x, 0 increased to more than 10x
```
3. AST Traversal
When a spreadsheet is loaded, its formulas are parsed into an Abstract Syntax Tree (AST). The ASTs are then traversed to build the dependency graph. By converting those traversals to use callbacks, we were able to reduce initialization time by approximately 10%.
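As a rough sketch (the real AST shape is more involved than the assumed `{ type, children }` nodes here), the conversion amounts to going from a recursive generator walk to a recursive callback walk:

```js
// Hypothetical AST node shape: { type, children }
// Before: recursive generator walk
function* walkGen(node) {
  yield node;
  for (const child of node.children ?? []) {
    yield* walkGen(child);
  }
}

// After: recursive callback walk, one plain function call per node
function walkCb(node, callback) {
  callback(node);
  for (const child of node.children ?? []) {
    walkCb(child, callback);
  }
}
```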
```text
PERFORMANCE
Init duration
Baseline
Median 38.3 ms
Min 81.8 µs | 0.1% 107 µs | 1% 228 µs | 10% 3.18 ms | 90% 1.00 sec | 99% 8.37 sec | 99.9% 39.7 sec | Max 147 sec
Current
Median 37.5 ms
Min 85.2 µs | 0.1% 94.9 µs | 1% 205 µs | 10% 3.28 ms | 90% 885 ms | 99% 7.72 sec | 99.9% 35.3 sec | Max 142 sec
...
Proportional differences from baseline (for 1420 matching pairs where either value ≥ 200 ms)
Median                   -9.67%
Weighted geometric mean  -10.6%
Min -42.9% | 1% -35.4% | 10% -18.6% | 90% -0.641% | 99% +3.25% | Max +38.8%
83.3% decreased to less than 0.97x, 1.1% increased to more than 1.03x
63.2% decreased to less than 0.93x, 0.56% increased to more than 1.08x
1.3% decreased to less than 0.71x, 0 increased to more than 1.4x
0 decreased to less than 0.5x, 0 increased to more than 2x
```
Conclusion
There’s a lot to like about generator functions. For code that needs to be performant and is CPU-bound, however, they are best avoided. The callback pattern is a great alternative that is both fast and easy to understand.
Acknowledgements
I would like to express my sincere gratitude to Alex Harri Jónsson and Gunnlaugur Briem for their valuable feedback and review of this article.