To install Bun:

# curl
curl -fsSL https://bun.sh/install | bash

# npm
npm install -g bun

# powershell
powershell -c "irm bun.sh/install.ps1|iex"

# scoop
scoop install bun

# brew
brew tap oven-sh/bun
brew install bun

# docker
docker pull oven/bun
docker run --rm --init --ulimit memlock=-1:-1 oven/bun

To upgrade Bun:

bun upgrade

bun run --parallel and bun run --sequential
Run multiple package.json scripts concurrently or sequentially with Foreman-style prefixed output. Both flags integrate fully with --filter and --workspaces, so you can run scripts in parallel or in sequence across workspace packages.
# Run "build" and "test" concurrently from the current package.json
bun run --parallel build test
# Run "build" and "test" sequentially with prefixed output
bun run --sequential build test
# Glob-matched script names
bun run --parallel "build:*"
# Run "build" in all workspace packages concurrently
bun run --parallel --filter '*' build
# Run "build" in all workspace packages sequentially
bun run --sequential --workspaces build
# Multiple scripts across all packages
bun run --parallel --filter '*' build lint test
# Continue running even if one package fails
bun run --parallel --no-exit-on-error --filter '*' test
# Skip packages missing the script
bun run --parallel --workspaces --if-present build
Each line of output is prefixed with a colored, padded label so you can tell which script produced it:
build | compiling...
test | running suite...
lint | checking files...
When combined with --filter or --workspaces, labels include the package name:
pkg-a:build | compiling...
pkg-b:build | compiling...
--parallel starts all scripts immediately with interleaved, prefixed output. --sequential runs scripts one at a time in order. By default, a failure in any script kills all remaining scripts — use --no-exit-on-error to let them all finish.
Pre/post scripts (prebuild/postbuild) are automatically grouped with their main script and run in the correct dependency order within each group.
How is this different from --filter?
bun --filter="pkg" <script> respects dependency order: it doesn't start a package's script until the scripts of all its dependencies have finished. This can be a problem for long-lived, watch-style scripts that never exit. --parallel and --sequential ignore dependency order, so they never wait.
HTTP/2 Connection Upgrades via net.Server
The net.Server → Http2SecureServer connection upgrade pattern now works correctly. This pattern is used by libraries like http2-wrapper, crawlee, and custom HTTP/2 proxy servers that accept raw TCP connections on a net.Server and forward them to an Http2SecureServer via h2Server.emit('connection', rawSocket).
import { createServer } from "node:net";
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

const h2Server = createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
});

h2Server.on("stream", (stream, headers) => {
  stream.respond({ ":status": 200 });
  stream.end("Hello over HTTP/2!");
});

const netServer = createServer((rawSocket) => {
  // Forward the raw TCP connection to the HTTP/2 server
  h2Server.emit("connection", rawSocket);
});

netServer.listen(8443);
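To verify the upgrade path end to end, a plain node:http2 client can connect to the net.Server port. A minimal sketch, assuming the self-signed key.pem/cert.pem from the example above:

import { connect } from "node:http2";

// Connect to the net.Server port; TLS is still terminated by the Http2SecureServer.
const client = connect("https://localhost:8443", {
  rejectUnauthorized: false, // the example above uses a self-signed certificate
});

const req = client.request({ ":path": "/" });
req.setEncoding("utf8");

let body = "";
req.on("data", (chunk) => (body += chunk));
req.on("end", () => {
  console.log(body); // "Hello over HTTP/2!"
  client.close();
});
req.end();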
Symbol.dispose support for mock() and spyOn()
mock() and spyOn() now implement Symbol.dispose, enabling the using keyword to automatically restore mocks when they go out of scope. This eliminates the need to manually call mockRestore() or rely on afterEach cleanup.
import { spyOn, expect, test } from "bun:test";

test("auto-restores spy", () => {
  const obj = { method: () => "original" };

  {
    using spy = spyOn(obj, "method").mockReturnValue("mocked");
    expect(obj.method()).toBe("mocked");
  }

  // automatically restored when `spy` leaves scope
  expect(obj.method()).toBe("original");
});
[Symbol.dispose] is aliased to mockRestore, so it works with both spyOn() and mock():
import { expect, mock } from "bun:test";
const fn = mock(() => "original");
fn();
expect(fn).toHaveBeenCalledTimes(1);
fn[Symbol.dispose](); // same as fn.mockRestore()
expect(fn).toHaveBeenCalledTimes(0);
NO_PROXY now respected for explicit proxy options
Previously, setting NO_PROXY only worked when the proxy was auto-detected from http_proxy/HTTP_PROXY environment variables. If you explicitly passed a proxy option to fetch() or new WebSocket(), the NO_PROXY environment variable was ignored.
Now, NO_PROXY is always checked — even when a proxy is explicitly provided via the proxy option.
// NO_PROXY=localhost

// Previously, this would still use the proxy. Now it correctly bypasses it.
await fetch("http://localhost:3000/api", {
  proxy: "http://my-proxy:8080",
});

// Same fix applies to WebSocket
const ws = new WebSocket("ws://localhost:3000/ws", {
  proxy: "http://my-proxy:8080",
});
--cpu-prof-interval flag
Bun now supports the --cpu-prof-interval flag to configure the CPU profiler's sampling interval in microseconds, matching Node.js's flag of the same name. The default interval is 1000μs (1ms).
# Sample every 500μs for higher resolution profiling
bun --cpu-prof --cpu-prof-interval 500 index.js
If used without --cpu-prof or --cpu-prof-md, Bun will emit a warning.
ESM bytecode in --compile
Using --bytecode with --format=esm is now supported. Previously, this combination was rejected due to missing functionality in JavaScriptCore; that gap has since been closed, so ESM modules can now be compiled to bytecode.
When --bytecode is used without an explicit --format, it continues to default to CommonJS. In a future version of Bun, we may change that default to ESM to make the behavior more consistent.
Thanks to @alistair!
Fixed: Illegal instruction (SIGILL) crashes on ARMv8.0 aarch64 CPUs
Fixed crashes on older ARM64 processors (Cortex-A53, Raspberry Pi 4, AWS a1 instances) caused by mimalloc emitting LSE atomic instructions that require ARMv8.1 or later. Bun now correctly targets ARMv8.0 on Linux aarch64, using outline atomics for runtime dispatch.
Faster Markdown-to-HTML rendering
Bun.Markdown now uses SIMD-accelerated scanning to find characters that need HTML escaping (&, <, >, "), resulting in 3-15% faster Markdown-to-HTML rendering throughput. Larger documents with fewer special characters see the biggest gains.
Thanks to @billywhizz for the contribution!
Faster Bun.markdown.react()
Cached frequently-used HTML tag strings (div, p, h1-h6, etc.) in the React renderer for Bun.markdown.react(), avoiding repeated string allocations on every element creation.
| Input size | Before | After | Improvement |
|---|---|---|---|
| Small (121 chars) | 3.20 µs | 2.30 µs | 28% faster |
| Medium (1,039 chars) | 15.09 µs | 14.02 µs | 7% faster |
| Large (20,780 chars) | 288.48 µs | 267.14 µs | 7.4% faster |
String object count reduced by 40% and heap size reduced by 6% for a typical render.
Faster AbortSignal.abort() with no listeners
AbortSignal.abort() now skips creating and dispatching an Event object when there are no registered listeners, avoiding unnecessary object allocation and dispatch overhead. This results in a ~6% improvement in micro-benchmarks (~16ms saved per 1M calls).
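The optimization applies when the signal has no registered "abort" listeners. A rough sketch of the two cases (an assumed illustration, not the exact benchmark):

// No listeners registered: abort() can now skip allocating and dispatching an "abort" Event.
const fast = new AbortController();
fast.abort();

// With a listener: the Event object is still created and dispatched as before.
const slow = new AbortController();
slow.signal.addEventListener("abort", () => {});
slow.abort();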
| Case | Before | After | Improvement |
|---|---|---|---|
| no listener | 271 ms | 255 ms | ~6% |
| with listener | 368 ms | 370 ms | (same) |
Thanks to @sosukesuzuki for the contribution!
JavaScriptCore upgrade
RegExp SIMD Acceleration
Regular expressions got a major performance boost with a new SIMD-accelerated prefix search, inspired by V8's approach. When a regex has alternatives with known leading characters (e.g., /aaaa|bbbb/), JSC now uses SIMD instructions to scan 16 bytes at a time, rapidly rejecting non-matching positions before falling back to scalar matching. This is implemented for both ARM64 (using TBL2) and x86_64 (using PTEST), so all platforms benefit.
The x86_64 codegen also gained new constant materialization primitives (move128ToVector, move64ToDouble, move32ToFloat) using broadcast and shuffle instructions, which are necessary for the SIMD regex paths and future SIMD optimizations.
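The fast path applies to alternations whose branches begin with known literal characters. A rough sketch with an illustrative pattern and input:

// Every alternative starts with a known literal prefix, so candidate positions
// can be rejected 16 bytes at a time with SIMD before scalar matching.
const keywords = /aaaa|bbbb|cccc/;
const haystack = "x".repeat(1_000_000) + "cccc";
console.log(keywords.test(haystack)); // true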
579b96614b75 — SIMD fast prefix search for RegExp (ARM64)
b7ed3dae4a6a — SIMD fast prefix search for RegExp (x86_64)
aa596dded063 — x86_64 constant materialization for SIMD masks
RegExp JIT: Fixed-Count Parentheses
Non-capturing parenthesized subpatterns with fixed-count quantifiers like (?:abc){3} previously fell back to the slower Yarr interpreter. They are now JIT-compiled using a counter-based loop, yielding a ~3.9x speedup on affected patterns. A follow-up patch also added JIT support for fixed-count subpatterns with capture groups (e.g., /(a+){2}b/), correctly saving and restoring capture state across iterations.
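A small sketch of the pattern shapes that now stay in the JIT (inputs are illustrative):

// Non-capturing fixed-count group: now JIT-compiled as a counted loop
// instead of falling back to the Yarr interpreter.
console.log(/(?:abc){3}/.test("abcabcabc")); // true

// Fixed-count group with a capture: capture state is saved and restored per
// iteration, so group 1 holds the match from the final iteration.
console.log("aaab".match(/(a+){2}b/)?.[1]); // "a"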
ac63cc259d74 — JIT support for non-capturing fixed-count parentheses (~3.9x faster)
c8b66aa0832b — JIT support for fixed-count subpatterns with captures
String#startsWith Optimized in DFG/FTL
String.prototype.startsWith is now an intrinsic in the DFG and FTL JIT tiers, with constant folding support when both the string and search term are known at compile time.
| Benchmark | Speedup |
|---|---|
| string-prototype-startswith | 1.42x faster |
| string-prototype-startswith-constant-folding | 5.76x faster |
| string-prototype-startswith-with-index | 1.22x faster |
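A sketch of the kind of guard that benefits (names and URLs are illustrative):

// String#startsWith is now an intrinsic in the DFG/FTL tiers, so this guard
// avoids a generic call; with constant operands it can be folded away entirely.
const isHttps = (url) => url.startsWith("https://");

console.log(isHttps("https://bun.sh")); // true
console.log(isHttps("http://example.com")); // false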
Set#size and Map#size Optimized in DFG/FTL and Inline Caches
The .size getter on Set and Map is now handled as an intrinsic in the DFG/FTL tiers and inline caches, eliminating the overhead of a generic getter call.
| Benchmark | Speedup |
|---|---|
| set-size | 2.24x faster |
| map-size | 2.74x faster |
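A sketch of the code shape that gets cheaper (values are illustrative):

const seen = new Set(["a", "b", "c"]);
const counts = new Map([["a", 1]]);

// .size reads are now handled as intrinsics and inline caches instead of
// going through a generic getter call.
if (seen.size > 0 && counts.size < 100) {
  console.log(seen.size + counts.size); // 4
}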
String#trim Optimized
String.prototype.trim, trimStart, and trimEnd now use direct pointer access via span8()/span16() instead of indirect str[i] character access, avoiding repeated bounds checking.
| Benchmark | Speedup |
|---|---|
| string-trim | 1.17x faster |
| string-trim-end | 1.42x faster |
| string-trim-start | 1.10x faster |
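The affected methods, for reference (illustrative input):

const raw = "  hello world \t\n";

console.log(raw.trim());      // "hello world"
console.log(raw.trimStart()); // "hello world \t\n" (trailing whitespace kept)
console.log(raw.trimEnd());   // "  hello world" (leading whitespace kept)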
Object.defineProperty Handled in DFG/FTL
Object.defineProperty is now recognized as an intrinsic in the DFG and FTL JIT tiers. While this patch alone doesn't change benchmark numbers, it lays the groundwork for future optimizations that can specialize based on descriptor shape.
String.prototype.replace Returns Ropes
When using "string".replace("search", "replacement") with string arguments, JSC now constructs a rope (lazy concatenation) instead of eagerly copying the entire result. This avoids unnecessary allocations for the common case where the result is only used briefly. This aligns with V8's behavior.
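The affected case is the plain string-search, string-replacement form. A small sketch:

// Both arguments are plain strings, so the result is built as a rope
// (a lazy concatenation of the pieces) rather than eagerly copied.
const greeting = "Hello, world!".replace("world", "Bun");
console.log(greeting); // "Hello, Bun!"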
Bugfixes
Node.js compatibility improvements
- Fixed: existsSync('.'), statSync('.'), and other node:fs operations incorrectly failing on Windows due to '.' being normalized to an empty string instead of the current directory.
- Fixed: Function.prototype.toString() whitespace now matches V8/Node.js.
- Fixed 3 rare crashes in node:http2.
Bun APIs
- Fixed: Bun.stringWidth incorrectly reporting Thai SARA AA (U+0E32), SARA AM (U+0E33), and their Lao equivalents (U+0EB2, U+0EB3) as zero-width characters instead of width 1. These are spacing vowels, not combining marks, so common Thai words like คำ now correctly return a width of 2 instead of 1, as in the sketch below.
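A quick check of the corrected behavior (a minimal sketch):

// SARA AM (U+0E33) is a spacing vowel, so it now counts as width 1 instead of 0.
console.log(Bun.stringWidth("คำ")); // 2 (previously 1)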
Web APIs
- Fixed: A crash that could occur in the WebSocket client when using binaryType = "blob" and receiving "data" events with no event listener attached.
- Fixed: Sequential HTTP requests with proxy-style absolute URLs (e.g. GET http://example.com/path HTTP/1.1) hanging on the 2nd+ request when using keep-alive connections. This affected HTTP proxy servers built with Bun, which could only handle one request per connection.
- Fixed: A security issue in the HTTP server chunked encoding parser that could lead to request smuggling.
TypeScript types
- Fixed: The Bun.Build.CompileTarget TypeScript type was missing SIMD variants like bun-linux-x64-modern, causing type errors when cross-compiling with specific architecture targets.
- Fixed: Missing bun-linux-x64-baseline and bun-linux-x64-modern compile target types in TypeScript definitions, which caused type errors when using Bun.build() with these valid targets.
- Fixed: Socket.reload() TypeScript types now correctly expect { socket: handler } to match runtime behavior, which requires the handler to be wrapped in a socket property.