A Fast Bundler

February 9, 2026

We ran the rolldown/benchmarks suite (10,000 React JSX components) against 7 bundlers. howth came out on top at 275ms — ahead of bun, esbuild, rolldown, vite, rspack, and rsbuild.

Results

GCP c3-highcpu-8 — Linux x64

Intel Xeon Platinum 8481C @ 2.70GHz, 4 cores / 8 threads, 16GB RAM. Warm cache, 10 runs, measured with hyperfine.

| Tool | Version | Time | vs howth |
| --- | --- | --- | --- |
| howth | 0.4.0 | 275ms | 1.0x |
| Bun | 1.3.9 | 307ms | 1.1x |
| esbuild | 0.27.3 | 589ms | 2.1x |
| Rolldown | 1.0.0-rc.3 | 735ms | 2.7x |
| Vite | 7.3.1 | 922ms | 3.3x |
| rspack | 1.7.5 | 2,067ms | 7.5x |
| Rsbuild | 1.7.3 | 2,176ms | 7.9x |

macOS — Apple M3 Pro

| Tool | Version | Time | vs howth |
| --- | --- | --- | --- |
| howth | 0.4.0 | 276ms | 1.0x |
| Bun | 1.3.9 | 350ms | 1.3x |
| esbuild | 0.27.3 | 724ms | 2.6x |
| Rolldown | 1.0.0-rc.3 | 765ms | 2.8x |
| Vite | 7.3.1 | 1,203ms | 4.4x |
| Rsbuild | 1.7.3 | 1,587ms | 5.8x |
| rspack | 1.7.5 | 1,648ms | 6.0x |

All tools configured identically: production mode, minification, sourcemaps, no gzip.

How we got here

A day earlier, howth was at 634ms on this benchmark — slower than bun. Here's what changed.

The bottleneck: SWC

Profiling showed 90% of time was spent in build_graph_parallel, and the single biggest cost was SWC transpilation — 48% of all worker thread samples. Every .jsx file was being parsed three times:

  1. By howth-parser for import extraction
  2. By SWC for JSX → _jsx() transformation
  3. By howth-parser again to re-extract imports from transpiled output

The apps/10000 benchmark is 100% .jsx files with no TypeScript. SWC was entirely unnecessary.

Optimization 1: JSX fast path — 634ms → 520ms

howth-parser already had complete JSX codegen — it just wasn't wired into the bundler. We added a transform_jsx() function that does parse + codegen + import extraction in a single pass, and routed .jsx files through it instead of SWC.

For this benchmark, SWC is completely eliminated from the hot path. TypeScript files still use SWC for type stripping.
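The single-pass idea can be sketched in miniature. This toy walks the source once and produces both the transformed output and the import list; transform_jsx_sketch and its string-level handling of self-closing elements are illustrative stand-ins for the real AST-based transform_jsx().

```rust
// Toy single-pass transform: collect import specifiers and rewrite
// trivial self-closing JSX (`<Name />` -> `_jsx(Name, {})`) in one walk.
// The real transform_jsx() works on a full AST; this is string-level only.
fn transform_jsx_sketch(src: &str) -> (String, Vec<String>) {
    let mut imports = Vec::new();
    let mut out = String::new();
    for line in src.lines() {
        let t = line.trim_start();
        // Only treat `<` ... `/>` as JSX when they appear in that order.
        let jsx = line.find('<').zip(line.find("/>")).filter(|&(a, b)| a < b);
        if t.starts_with("import ") {
            // Record the specifier while emitting the import line unchanged.
            if let Some(i) = t.find(" from ") {
                let spec = t[i + 6..]
                    .trim()
                    .trim_matches(|c| c == '"' || c == '\'' || c == ';');
                imports.push(spec.to_string());
            }
            out.push_str(line);
        } else if let Some((a, b)) = jsx {
            // Rewrite `<Name />` to the `_jsx(Name, {})` call form.
            let name = line[a + 1..b].trim();
            out.push_str(&line[..a]);
            out.push_str("_jsx(");
            out.push_str(name);
            out.push_str(", {})");
            out.push_str(&line[b + 2..]);
        } else {
            out.push_str(line);
        }
        out.push('\n');
    }
    (out, imports)
}
```

The point is structural: import extraction rides along with codegen, so the source is never parsed a second or third time.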

Optimization 2: Parallel resolver — 520ms → 384ms

The module resolver was sequential. We made the Resolver struct thread-safe with RwLock<HashMap> caches and moved resolution into rayon parallel closures. Imports now resolve across all CPU cores simultaneously.
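A minimal sketch of that shape, with rayon's parallel closures swapped for std::thread::scope to stay dependency-free; the Resolver fields and the stand-in lookup are illustrative, not howth's actual API.

```rust
use std::collections::HashMap;
use std::sync::RwLock;
use std::thread;

// Thread-safe resolver sketch: many readers hit the cache concurrently,
// a writer fills it on a miss.
struct Resolver {
    cache: RwLock<HashMap<String, String>>,
}

impl Resolver {
    fn resolve(&self, spec: &str) -> String {
        if let Some(hit) = self.cache.read().unwrap().get(spec) {
            return hit.clone(); // read lock only on the hot path
        }
        // Stand-in for the real extension/index probing.
        let resolved = format!("/src/{}.jsx", spec.trim_start_matches("./"));
        self.cache
            .write()
            .unwrap()
            .insert(spec.to_string(), resolved.clone());
        resolved
    }
}

// Resolve imports across threads, chunked roughly per core
// (rayon's par_iter plays this role in the real bundler).
fn resolve_all(resolver: &Resolver, specs: &[String]) -> Vec<String> {
    let chunk = (specs.len() / 4).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = specs
            .chunks(chunk)
            .map(|c| {
                s.spawn(move || c.iter().map(|sp| resolver.resolve(sp)).collect::<Vec<_>>())
            })
            .collect();
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    })
}
```

RwLock fits this access pattern because resolution is read-heavy once the cache warms up: concurrent readers never block each other.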

Optimization 3: Directory listing cache — 384ms → 246ms

The resolver was doing ~12 stat() syscalls per import, trying extensions like .js, .jsx, .ts, .tsx, /index.js, etc. For 10,000 modules, that's over 120,000 stat() calls.

The fix: read each directory once and cache file names as HashSets. One readdir replaces all per-extension probing. We also replaced canonicalize() (which calls realpath()) with in-memory path normalization that resolves . and .. without any syscalls.
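Both pieces are easy to sketch. Assuming a cache keyed by directory (the real one is filled by a single read_dir per directory), extension probing becomes HashSet lookups, and normalization resolves . and .. lexically; DirCache and normalize are hypothetical names.

```rust
use std::collections::{HashMap, HashSet};

// One cached listing per directory; extension probing becomes
// in-memory HashSet lookups instead of per-candidate stat() calls.
struct DirCache {
    entries: HashMap<String, HashSet<String>>, // dir -> file names (one read_dir each)
}

impl DirCache {
    fn resolve(&self, dir: &str, stem: &str) -> Option<String> {
        let names = self.entries.get(dir)?;
        for ext in ["js", "jsx", "ts", "tsx"] {
            let cand = format!("{stem}.{ext}");
            if names.contains(&cand) {
                return Some(format!("{dir}/{cand}"));
            }
        }
        None
    }
}

// Lexical normalization: resolve "." and ".." with zero syscalls
// (standing in for canonicalize()/realpath(); unlike realpath,
// it does not follow symlinks).
fn normalize(path: &str) -> String {
    let mut parts: Vec<&str> = Vec::new();
    for seg in path.split('/') {
        match seg {
            "" | "." => {}
            ".." => {
                parts.pop();
            }
            s => parts.push(s),
        }
    }
    format!("/{}", parts.join("/"))
}
```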

Optimization 4: Drop SWC minifier — cleaner pipeline

Replaced SWC's minifier with howth-parser's CodegenOptions { minify: true }. Performance-neutral, but removes the last SWC dependency from the JS/JSX bundling path.

Why howth is fast

  1. Native binary — no Node.js startup cost
  2. Single-pass JSX — howth-parser does parse + import extraction + codegen in one pass
  3. Rayon parallelism — file I/O and transpilation across all cores
  4. Arena allocation — bump allocator for AST nodes, better cache locality
  5. Directory listing cache — one readdir per directory, HashSet lookups
  6. Dense module graph — Vec<Module> indexed by usize, no pointer chasing
  7. In-memory path normalization — zero realpath() syscalls
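The dense-graph point (item 6) is the classic index-as-pointer layout; a minimal sketch with a hypothetical Module shape:

```rust
// Index-based module graph: edges are usize indices into one dense
// Vec<Module>, so traversal is array indexing, not pointer chasing.
// The Module fields here are illustrative.
#[allow(dead_code)]
struct Module {
    path: String,
    deps: Vec<usize>, // indices into the same Vec<Module>
}

// Depth-first reachability from the entry module, returning visit order.
fn reachable(modules: &[Module], entry: usize) -> Vec<usize> {
    let mut seen = vec![false; modules.len()];
    let mut stack = vec![entry];
    let mut order = Vec::new();
    while let Some(i) = stack.pop() {
        if seen[i] {
            continue;
        }
        seen[i] = true;
        order.push(i);
        stack.extend(&modules[i].deps);
    }
    order
}
```

Because modules live contiguously and edges are plain integers, traversal stays cache-friendly even at 10,000 modules.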

Methodology

All benchmarks use the rolldown/benchmarks suite (forked with howth added). The GCP benchmark ran on a dedicated c3-highcpu-8 spot instance with no other workloads. All tools configured identically. Measured with hyperfine (10 runs).

View the benchmark source


Update: v0.5.0 — Variable Name Mangling (February 10, 2026)

v0.5.0 adds variable name mangling to the minifier. --minify now shortens local variable names by default (myVariable → a), matching the behavior of esbuild and bun.
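One core piece of any mangler is the short-name sequence itself (a, b, …, z, aa, ab, …). A sketch of that generator, independent of howth's actual implementation:

```rust
// nth short name in the sequence a, b, ..., z, aa, ab, ... (bijective base-26).
// A real mangler would also skip reserved words and names still in scope.
fn mangled_name(mut n: usize) -> String {
    let mut bytes = Vec::new();
    loop {
        bytes.push(b'a' + (n % 26) as u8);
        if n < 26 {
            break;
        }
        n = n / 26 - 1;
    }
    bytes.reverse();
    String::from_utf8(bytes).unwrap()
}
```

The sequence is dense, so the most frequently used locals get the shortest names simply by being renamed first.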

GCP c3-highcpu-8 — Linux x64 (updated)

Intel Xeon Platinum 8481C @ 2.70GHz, 4 cores / 8 threads, 16GB RAM. 10 runs, hyperfine.

With mangling (default --minify behavior):

| Tool | Version | Time | JS Size | vs fastest |
| --- | --- | --- | --- | --- |
| Bun | 1.3.9 | 528ms | 5.34 MB | 1.0x |
| howth | 0.5.0 | 670ms | 4.13 MB | 1.3x |
| Rolldown | 1.0.0-rc.3 | 1,144ms | 5.22 MB | 2.2x |
| esbuild | 0.27.3 | 1,248ms | 5.90 MB | 2.4x |
| Vite | 7.3.1 | 1,498ms | 5.28 MB | 2.8x |
| Rsbuild | 1.7.3 | 2,550ms | 5.70 MB | 4.8x |
| rspack | 1.7.5 | 2,676ms | 5.18 MB | 5.1x |

Without mangling (--minify --no-mangle):

| Tool | Version | Time | JS Size | vs fastest |
| --- | --- | --- | --- | --- |
| Bun | 1.3.9 | 528ms | 5.34 MB | 1.0x |
| howth | 0.5.0 | 549ms | 5.26 MB | 1.04x |

Without mangling, howth and bun are neck and neck. Mangling adds a re-parse and rename pass (~120ms overhead) but shrinks the JS output by about 21% (5.26 MB → 4.13 MB), the smallest in the benchmark.

macOS — Apple M3 Pro (updated)

| Tool | Version | Time | JS Size | vs fastest |
| --- | --- | --- | --- | --- |
| Bun | 1.3.9 | 321ms | 5.34 MB | 1.0x |
| howth | 0.5.0 | 460ms | 4.02 MB | 1.4x |
| esbuild | 0.27.3 | 796ms | 5.91 MB | 2.5x |
| Rolldown | 1.0.0-rc.3 | 813ms | 5.22 MB | 2.5x |
| Vite | 7.3.1 | 1,274ms | 5.28 MB | 4.0x |
| Rsbuild | 1.7.3 | 1,607ms | 5.70 MB | 5.0x |
| rspack | 1.7.5 | 1,696ms | 5.18 MB | 5.3x |

See Removing SWC: Building a Custom TypeScript Parser and Minifier for details on how the mangler works.


Update: Per-Module Minification (February 10, 2026)

v0.5.0's minifier re-parsed the entire concatenated bundle (~5 MB) in a single post-bundle pass, which added ~170ms on the M3 Pro. The fix: minify and mangle each module inside the existing par_iter() loop instead of re-parsing the full bundle. Each module wrapper is only ~500 bytes, so 10,000 parallel parses are near-instant.
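The restructuring can be sketched as follows; minify here is a whitespace-collapsing stand-in for the real minifier, and std::thread::scope stands in for rayon's par_iter().

```rust
use std::thread;

// Per-module pipeline sketch: minify each small module wrapper in
// parallel, then concatenate, instead of re-parsing the multi-MB bundle.
// `minify` is a stand-in (whitespace collapsing), not the real minifier.
fn minify(module_src: &str) -> String {
    module_src.split_whitespace().collect::<Vec<_>>().join(" ")
}

fn bundle(modules: &[String]) -> String {
    let minified: Vec<String> = thread::scope(|s| {
        // One task per module; rayon would balance these across cores.
        let handles: Vec<_> = modules
            .iter()
            .map(|m| s.spawn(move || minify(m)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    minified.join("\n")
}
```

Because each unit of work is tiny and independent, the minification cost disappears into the parallelism already paid for by the graph build.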

GCP c3-highcpu-8 — Linux x64 (updated)

Intel Xeon Platinum 8481C @ 2.70GHz, 4 cores / 8 threads, 16GB RAM. 10 runs, hyperfine.

| Tool | Version | Time | JS Size | vs howth |
| --- | --- | --- | --- | --- |
| howth | 0.5.0 | 290ms | 4.12 MB | 1.0x |
| Bun | 1.3.9 | 541ms | 5.34 MB | 1.9x |
| esbuild | 0.27.3 | 1,090ms | 5.90 MB | 3.8x |
| Rolldown | 1.0.0-rc.3 | 1,179ms | 5.22 MB | 4.1x |
| Vite | 7.3.1 | 1,530ms | 5.28 MB | 5.3x |
| Rsbuild | 1.7.3 | 2,775ms | 5.70 MB | 9.6x |
| rspack | 1.7.5 | 2,930ms | 5.18 MB | 10.1x |

macOS — Apple M3 Pro (updated)

| Tool | Version | Time | JS Size | vs fastest |
| --- | --- | --- | --- | --- |
| Bun | 1.3.9 | 315ms | 5.34 MB | 1.0x |
| howth | 0.5.0 | 317ms | 4.01 MB | 1.0x |
| esbuild | 0.27.3 | 736ms | 5.91 MB | 2.3x |
| Rolldown | 1.0.0-rc.3 | 799ms | 5.22 MB | 2.5x |
| Vite | 7.3.1 | 1,229ms | 5.28 MB | 3.9x |
| Rsbuild | 1.7.3 | 1,569ms | 5.70 MB | 5.0x |
| rspack | 1.7.5 | 1,646ms | 5.18 MB | 5.2x |

1.86x faster than Bun on C3, tied on M3 Pro. howth produces the smallest output in both benchmarks (4 MB vs 5.3 MB, 23% smaller).

Released under the MIT License.