Bun v1.3.13


Jarred Sumner · April 20, 2026

To install Bun

curl
curl -fsSL https://bun.sh/install | bash
npm
npm install -g bun
powershell
powershell -c "irm bun.sh/install.ps1|iex"
scoop
scoop install bun
brew
brew tap oven-sh/bun
brew install bun
docker
docker pull oven/bun
docker run --rm --init --ulimit memlock=-1:-1 oven/bun

To upgrade Bun

bun upgrade

bun test --isolate and bun test --parallel

Two new flags for bun test that dramatically speed up large test suites:

--isolate runs each test file in a fresh global environment within the same process. Between files, Bun drains microtasks, closes all sockets, cancels timers, kills subprocesses, and creates a clean global object. A VM-level transpilation cache means shared dependencies are only parsed once — subsequent files reuse the cached source, skipping redundant transpilation entirely.

--parallel[=N] distributes test files across up to N worker processes (defaults to CPU count). Files are partitioned for cache locality, and idle workers steal work from the busiest remaining queue. Workers automatically run with --isolate between files. Output remains identical to serial execution — per-test console.log/console.error output is buffered and flushed atomically, so files never interleave.

# Run tests with isolation (fresh global per file)
bun test --isolate ./tests

# Run tests in parallel across all CPU cores
bun test --parallel ./tests

# Run tests in parallel with 8 workers
bun test --parallel=8 ./tests

Both flags work with existing options including --bail, --randomize, --dots, JUnit reporting, LCOV coverage, and snapshots. All transpiler/resolver flags (--define, --loader, --tsconfig-override, --conditions, etc.) are forwarded to workers. With bun test --parallel, each worker also has the JEST_WORKER_ID and BUN_TEST_WORKER_ID environment variables set.

bun test --shard=M/N for splitting tests across CI jobs

Split test files across multiple CI runners with the new --shard flag, matching the syntax used by Jest, Vitest, and Playwright.

# In a GitHub Actions matrix with 3 jobs:
bun test --shard=1/3
bun test --shard=2/3
bun test --shard=3/3

Test files are sorted by path for determinism and distributed round-robin across shards, so shard sizes differ by at most one file. The shard index is 1-based (1 <= index <= count).

Composes naturally with other flags:

  • --changed — sharding is applied after the changed-files filter
  • --randomize — shuffle happens after shard selection, within the shard

If a shard ends up with zero files (e.g. 2 test files with --shard=5/5), it exits 0 gracefully rather than erroring with "No tests found!". Invalid inputs like 0/3, 4/3, or 1/0 produce a clear error message and exit non-zero.

--shard=2/3: running 3/10 test files

f01.test.ts:
(pass) t

f04.test.ts:
(pass) t

f07.test.ts:
(pass) t
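The deterministic round-robin selection described above can be sketched as follows. This is a hypothetical helper, not Bun's internal code, and the exact deal order is an implementation detail, so concrete file assignments may differ from Bun's:

```javascript
// Hypothetical sketch of --shard=M/N partitioning: sort for determinism,
// deal files round-robin, reject invalid shard specs (0/3, 4/3, 1/0).
function filesForShard(files, index, count) {
  if (!Number.isInteger(index) || !Number.isInteger(count) ||
      count < 1 || index < 1 || index > count) {
    throw new Error(`invalid shard ${index}/${count}`);
  }
  const sorted = [...files].sort(); // sort by path for determinism
  return sorted.filter((_, i) => i % count === index - 1); // round-robin deal
}

const files = Array.from({ length: 10 }, (_, i) => `f${i}.test.ts`);
// Shard sizes stay balanced to within one file: 4 + 3 + 3 = 10.
console.log(filesForShard(files, 1, 3).length); // 4
console.log(filesForShard(files, 2, 3).length); // 3
```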

bun test --changed

bun test now supports a --changed flag that only runs test files affected by your git changes. This works by building the full import graph of your test files and filtering down to only those that transitively depend on a file that git reports as changed.

# Run tests affected by uncommitted changes (unstaged + staged + untracked)
bun test --changed

# Run tests affected by changes since a specific commit, branch, or tag
bun test --changed=HEAD~1
bun test --changed=main

# Combine with --watch to re-filter on every restart
bun test --changed --watch

When combined with --watch, editing any local source file — even one not currently imported by the selected tests — triggers a re-run. Each restart re-queries git, so the filtered set always tracks the working tree.

The graph analysis scans imports without entering node_modules and without linking or emitting code, so the overhead is minimal. If no changed files are found, --watch keeps the process alive while bun test --changed without --watch exits cleanly.
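The transitive filter above can be sketched with a reverse-reachability walk. This is a hypothetical helper over a toy import graph (file → files it imports), not Bun's actual implementation:

```javascript
// Hypothetical sketch of the --changed filter: keep only the test files
// that transitively depend on a file git reports as changed.
function affectedTests(testFiles, imports, changed) {
  // Invert the graph: dependency -> files that import it.
  const dependents = new Map();
  for (const [file, deps] of Object.entries(imports)) {
    for (const dep of deps) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep).push(file);
    }
  }
  // Walk upward from the changed files, marking every dependent.
  const affected = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    for (const parent of dependents.get(queue.shift()) ?? []) {
      if (!affected.has(parent)) {
        affected.add(parent);
        queue.push(parent);
      }
    }
  }
  return testFiles.filter((t) => affected.has(t));
}

const imports = {
  "a.test.ts": ["util.ts"],
  "b.test.ts": ["other.ts"],
  "util.ts": ["shared.ts"],
};
console.log(affectedTests(["a.test.ts", "b.test.ts"], imports, ["shared.ts"]));
// [ "a.test.ts" ] -- b.test.ts never reaches shared.ts, so it is skipped
```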

bun install streams tarballs to disk

bun install now extracts package tarballs while they are still downloading, instead of buffering the entire .tgz and decompressed .tar in memory before extraction. Only the in-flight HTTP chunks plus libarchive's fixed per-archive buffers are needed — the full archive is never materialized in memory.

Integrity hashing runs incrementally over the compressed bytes and is verified before the extracted tree is promoted into the cache. Error handling and retries remain unchanged.

Streaming extraction is enabled by default. If you encounter issues, it can be disabled by setting BUN_FEATURE_FLAG_DISABLE_STREAMING_INSTALL=1.
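The incremental integrity check can be illustrated with node:crypto, which Bun supports: hash each compressed chunk as it arrives and compare the final digest against the lockfile's sri string. verifyIntegrity is a hypothetical helper, not Bun's internal API:

```javascript
import crypto from "node:crypto";

// Sketch of incremental integrity hashing over streamed compressed
// chunks -- the whole .tgz is never held in memory at once.
function verifyIntegrity(chunks, integrity) {
  const dash = integrity.indexOf("-");
  const algo = integrity.slice(0, dash); // e.g. "sha512"
  const expected = integrity.slice(dash + 1); // base64 digest
  const hasher = crypto.createHash(algo);
  for (const chunk of chunks) hasher.update(chunk); // one chunk at a time
  return hasher.digest("base64") === expected;
}

const tarball = Buffer.from("pretend this is a compressed .tgz payload");
const integrity =
  "sha512-" + crypto.createHash("sha512").update(tarball).digest("base64");

// Simulate HTTP chunks arriving over the wire.
const chunks = [tarball.subarray(0, 16), tarball.subarray(16)];
console.log(verifyIntegrity(chunks, integrity)); // true
```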

Faster bun install with isolated linker

In a peer-heavy monorepo, bun install --linker=isolated is now dramatically faster:

Before    After
20.5s     2.4s

The final .bun/ store layout is byte-identical to previous versions. Previously hanging installs now complete in seconds.

Thanks to @robobun for the contribution!

Source maps use up to 8x less memory

Bun's internal source map representation has been rearchitected. Instead of decoding VLQ mappings into a full in-memory list on first access, Bun now writes a compact bit-packed binary format directly during transpilation and reads it in place — no whole-file decode step, no VLQ round-trip.

The new format exploits the structure of transpiler output (most mappings share the same source index, and generated/original columns frequently match) to store mappings at ~2.4 bytes per mapping, down from 20 bytes previously.

Memory usage (TypeScript compiler, _tsc.js, 563k mappings):

Representation                 Resident after first .stack
Mapping.List (Bun v1.3.12)     ~11.3 MB (20 B/mapping)
LEB128 stream                  2.92 MB (5.4 B/mapping)
Bit-packed windows             1.29 MB (2.4 B/mapping)

Decoding now costs close to 0. Lookups cost about 6% more. Encoding gets faster.

Benchmark                                          This release   v1.3.12        Δ
error-capturestack.mjs (mitata, multi-window)      1.37–1.41 µs   1.27–1.32 µs   +6–8%
plain while(1) new Error().stack loop              657 ns         ~810 ns        −19%
5-frame multi-window synthetic (1500-line file)    818 ns         769 ns         +6%
first .stack on a 150k-line module                 ~0.1 ms        ~5 ms          −98%
RSS, load → first stack (150k-line module)         +0.06 MB       +2.3 MB

The main tradeoff is that after compression the new format is ~20% larger than the VLQ-encoded compressed equivalent — but compressing sourcemaps is unnecessary for server-side JavaScript.

bun build --compile binaries also benefit: the blob is embedded directly and loaded as a zero-copy view at runtime, shrinking compiled binaries by ~1.8 MB for large source maps.

Bun's runtime uses 5% less memory

Bun's memory allocators have been upgraded:

  • mimalloc moves from v2 to v3 (along with several bugfixes in our internal fork)
  • We implemented libpas scavenger support for both Windows & Linux, reclaiming memory faster

Together these reduce baseline memory usage and fix a class of hangs and crashes in long-running processes across macOS, Linux, and Windows.

Upgraded JavaScriptCore engine

Bun's underlying JavaScript engine (WebKit's JavaScriptCore) has been upgraded, merging 1,316 upstream commits. This brings a wide range of performance improvements and bug fixes.

Performance improvements from upstream

  • Inline cache for array.length = N — setting array length is now IC-cached
  • Inline cache for undefined, true, false, null as property keys
  • String length folding in DFG/FTL — the compiler can now constant-fold .length on known strings
  • toUpperCase intrinsic — toUpperCase() is now JIT-intrinsified
  • String#indexOf single-character fast path in DFG/FTL
  • Redundant mov removed from await/yield bytecode
  • Cached default date formatters — Date.toLocaleString() and friends are faster on repeat calls
  • Wider bulk copy in GC-safe memcpy/memmove — faster garbage collector memory operations
  • SIMD-accelerated equalIgnoringASCIICase — faster case-insensitive string comparisons
  • SIMD fast path for identifier parsing

Thanks to @sosukesuzuki for doing the upgrade!

Faster addEventListener, dispatchEvent, and DOM events

We cherry-picked ~270 audited upstream WebKit commits into Bun's forked WebCore bindings layer, bringing performance wins to Bun's event system and promise internals.

File streaming improvements

When using new Response(Bun.file(path)) or routes: { "/route": new Response(Bun.file(path)) } in Bun.serve(), file responses on SSL and Windows now stream incrementally instead of buffering the entire file into memory. Previously, this was only supported when using HTTP and only for static routes.

This significantly reduces memory usage for large file responses in those environments.

Range Request Support in Bun.serve()

Bun.serve() now supports Range requests for file-backed responses, both in static routes: entries and in fetch & dynamic handler responses. Incoming Range: bytes=... headers on whole-file 200 responses are automatically handled, returning 206 Partial Content with the appropriate Content-Range header, or 416 Range Not Satisfiable when the range is invalid.

const server = Bun.serve({
  port: 3000,
  routes: {
    "/video.mp4": new Response(Bun.file("./video.mp4")),
  },
  fetch(req) {
    return new Response(Bun.file("./large-file.bin"));
  },
});

// Clients can now request byte ranges:
const res = await fetch("http://localhost:3000/video.mp4", {
  headers: { Range: "bytes=0-1023" },
});
console.log(res.status); // 206
console.log(res.headers.get("Content-Range")); // "bytes 0-1023/..."

Suffix ranges (bytes=-500), open-ended ranges (bytes=1024-), and all standard forms from RFC 9110 are supported. Multi-range requests fall through to a full-body response.
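The single-range handling described above can be sketched as follows. resolveRange is a hypothetical helper mirroring the documented behavior, not Bun's actual parser:

```javascript
// Hypothetical sketch of single-range resolution per RFC 9110.
// Returns { start, end } (inclusive byte offsets) when satisfiable,
// null when unsatisfiable (-> 416 Range Not Satisfiable), and
// undefined for forms served as a plain 200 (malformed, multi-range).
function resolveRange(header, size) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header ?? "");
  if (!m || (m[1] === "" && m[2] === "")) return undefined;
  if (m[1] === "") {
    // Suffix range "bytes=-N": the last N bytes of the file.
    const n = Number(m[2]);
    if (n === 0 || size === 0) return null;
    return { start: Math.max(size - n, 0), end: size - 1 };
  }
  const start = Number(m[1]);
  if (start >= size) return null;
  const end = m[2] === "" ? size - 1 : Math.min(Number(m[2]), size - 1);
  return end < start ? null : { start, end };
}

console.log(resolveRange("bytes=0-1023", 4096)); // { start: 0, end: 1023 }
console.log(resolveRange("bytes=-500", 4096)); // { start: 3596, end: 4095 }
console.log(resolveRange("bytes=0-99,200-", 4096)); // undefined (multi-range -> 200)
```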

Up to 5.5x faster gzip compression with zlib-ng

Bun's zlib dependency has been upgraded from the Cloudflare zlib fork (last updated Oct 2023) to zlib-ng 2.3.3 — the same library used by Node.js 24+ and Chromium. zlib-ng is actively maintained and provides runtime-dispatched SIMD acceleration across AVX-512, AVX2, SSE2, NEON, SVE, and RISC-V vector extensions for CRC32, Adler32, longest-match, and chunk-copy operations.

This is a drop-in improvement — no API changes, no code changes required.

Operation                   Before     After      Speedup
gzipSync html-128K L1       275 µs     107 µs     2.59×
gzipSync html-1M L1         2.23 ms    892 µs     2.50×
gzipSync json-128K L6       897 µs     483 µs     1.86×
deflate 123K L6 (async)     373 µs     68 µs      5.48×
gunzipSync html-1M          561 µs     522 µs     1.07×
gunzipSync binary-128K      31.6 µs    26.7 µs    1.18×
createGzip stream L1 1M     3.76 ms    2.68 ms    1.40×
createGunzip stream 1M      1.24 ms    1.18 ms    1.05×
fetch() 11KB gzip decode    42.9 µs    41.6 µs    parity

Compression is significantly faster across the board, with decompression seeing modest gains. The only trade-off is ~2 µs higher per-stream initialization cost from larger internal state structs, which is amortized away on payloads ≥4KB.
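Because zlib-ng is a drop-in replacement, existing node:zlib code is unchanged and simply runs faster. A quick round trip at the two levels benchmarked above (L1 = speed-oriented, L6 = default ratio):

```javascript
import zlib from "node:zlib";

// Redundant text compresses well at either level; the output of level 6
// round-trips losslessly through gunzipSync.
const input = Buffer.from("hello bun ".repeat(1000));

const fast = zlib.gzipSync(input, { level: 1 });
const dense = zlib.gzipSync(input, { level: 6 });

console.log(fast.length < input.length); // true -- both levels compress
console.log(zlib.gunzipSync(dense).equals(input)); // true -- lossless round trip
```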

Faster array iteration in Bun's internals

Array iteration in Bun's internal C++/Zig code is now up to 1.43× faster for common cases. When iterating over a JavaScript array that uses simple Int32 or Contiguous storage (the common case), Bun now reads elements directly from JSC's butterfly memory instead of calling getIndex() per element.

This optimization is applied inside JSArrayIterator.next(), so every internal call site benefits automatically — including expect().toContain(), expect().toBeOneOf(), new Blob([...]), and more.

Benchmark                                  Before      After       Speedup
expect(arr).toContain(last) (1000 ints)    11,493 ns   8,031 ns    1.43×
expect(x).toBeOneOf(arr) (1000 ints)       13,736 ns   10,643 ns   1.29×
new Blob([100 strings + 100 buffers])      9,703 ns    8,301 ns    1.17×
new Blob([1000 strings])                   56,817 ns   49,630 ns   1.14×

The fast path safely revalidates the butterfly pointer before each read, falling back to the generic path if the array is mutated during iteration (e.g. by a getter or toString side effect).

Thanks to @sosukesuzuki for the contribution!

SHA3 support in WebCrypto and node:crypto

Bun now supports SHA3-224, SHA3-256, SHA3-384, and SHA3-512 hash algorithms across both the Web Crypto API and node:crypto.

This works with crypto.createHash, crypto.createHmac, crypto.getHashes, crypto.subtle.digest, and crypto.subtle.sign/verify with HMAC.

import crypto from "crypto";

// node:crypto
const hash = crypto.createHash("sha3-256");
hash.update("Hello, world!");
console.log(hash.digest("hex"));
// => "f345a219da005ebe9c1a1eaad97bbf38a10c8473e41d0af7fb617caa0c6aa722"

const hmac = crypto.createHmac("sha3-256", "secret-key");
hmac.update("Hello, world!");
console.log(hmac.digest("hex"));

// Web Crypto API
const digest = await crypto.subtle.digest(
  "SHA3-256",
  new TextEncoder().encode("Hello, world!"),
);
console.log(Buffer.from(digest).toString("hex"));

This also includes an update to BoringSSL, which brings ML-KEM and ML-DSA (NIST FIPS 203/204) post-quantum algorithms into the underlying library for future support.

X25519 deriveBits support in SubtleCrypto

SubtleCrypto.deriveBits() now works with the X25519 algorithm, completing support for X25519-based key agreement in Bun's Web Crypto API.

Previously, calling crypto.subtle.deriveBits() with X25519 keys threw a NotSupportedError. This is now fully implemented per the spec, including proper rejection of small-order public keys per RFC 7748 §6.1.

const keyPair = await crypto.subtle.generateKey("X25519", false, [
  "deriveBits",
]);
const remoteKeyPair = await crypto.subtle.generateKey("X25519", false, [
  "deriveBits",
]);

const sharedSecret = await crypto.subtle.deriveBits(
  { name: "X25519", public: remoteKeyPair.publicKey },
  keyPair.privateKey,
  256,
);

console.log(new Uint8Array(sharedSecret)); // 32-byte shared secret

Passing null or 0 as the length returns the full 32-byte output:

const bits = await crypto.subtle.deriveBits(
  { name: "X25519", public: remoteKeyPair.publicKey },
  keyPair.privateKey,
  null, // returns full 32-byte output
);

Thanks to @panva for the contribution!

WebSocket client: support ws+unix:// and wss+unix://

The WebSocket client now supports connecting over Unix domain sockets via the ws+unix:// and wss+unix:// URL schemes, matching the convention used by the popular npm ws package.

// Connect to a Unix domain socket
// Connect to a Unix domain socket
const ws = new WebSocket("ws+unix:///tmp/app.sock");

// With a request path (split on first ':', same as the npm `ws` package)
const wsWithPath = new WebSocket("ws+unix:///tmp/app.sock:/api/stream?x=1");

// TLS over a Unix socket
const wsOverTls = new WebSocket("wss+unix:///tmp/app.sock", {
  tls: { rejectUnauthorized: false },
});

  • The Host header defaults to localhost, matching Node's http.request({ socketPath }) and the ws package.
  • Proxies are automatically skipped for Unix socket URLs.
  • wss+unix:// runs a full TLS handshake over the domain socket.

Standalone HTML now inlines file-loader assets imported from JS

When using bun build --compile --target browser on an HTML entry point, assets imported from JavaScript via the file loader (e.g. import logo from "./logo.svg") are now correctly inlined as data: URIs in the standalone HTML output.

Previously, only assets referenced directly in HTML (like <link rel="icon" href="./logo.svg">) were inlined. JS-imported assets were emitted as relative file paths (e.g. ./logo-kygw735p.svg), but no sidecar file was ever written — resulting in broken images and missing resources.

Now, all file-loader assets are inlined regardless of whether they're referenced from HTML or JS:

// src/entry.ts
import logo from "./logo.svg";
import reactLogo from "./react.svg";

const img = document.createElement("img");
img.src = logo; // now a data:image/svg+xml;base64,... URI

# Build: all imported assets are inlined into the single HTML file
bun build --compile --target browser --outdir ./dist ./src/index.html
ls dist/
index.html  # fully self-contained, no sidecar files

bunx claude and bunx @anthropic-ai/claude-code fix

bunx claude now works as a shorthand for bunx @anthropic-ai/claude-code, matching the existing bunx tsc → typescript alias.

Bugfixes

Node.js compatibility improvements

  • Fixed: Worker lifecycle crashes — calling worker.terminate(), sending messages to a worker, or letting a worker exit naturally could crash
  • Fixed: tls.connect({ host, port }) matches Node.js behavior when servername is not explicitly passed
  • Fixed: rare crash involving the passphrase option in TLS configuration
  • Fixed: process.ppid was cached at startup and never updated. It now calls getppid() on every access, matching Node.js behavior, so orphan-detection patterns (if (process.ppid === 1)) work correctly. (@Jarred-Sumner)
  • Fixed: --cpu-prof output now matches the .cpuprofile format used by Node.js and Chrome DevTools — lineNumber/columnNumber reflect the function definition site and per-line samples are reported via positionTicks, so profiles load correctly in Chrome DevTools, VS Code, and speedscope. (@Jarred-Sumner)
  • Fixed: socket.setTimeout() in node:net incorrectly firing the timeout event even while the socket was actively receiving data. The inactivity timer was only being reset on writes, not reads, which caused spurious timeouts for sockets consuming HTTP response bodies, database query results, or long-lived pipes. This also fixes mongoose timeout errors that occurred with Bun but not Node.js.
  • Fixed: node:http2.createServer (h2c) compatibility with strict HTTP/2 peers (curl, Node.js http2.connect, Envoy proxy) — the server was advertising an invalid ENABLE_PUSH setting and emitting a malformed end-of-stream sequence that nghttp2-based clients reject. This also likely fixes @grpc/grpc-js servers running on Bun behind Envoy. Thanks to @robobun for the contribution!
  • Fixed: rare crash in StringDecoder.prototype.write()
  • Fixed: crash during process exit when native N-API modules (sqlite3, duckdb, kuzu, node-llama-cpp) wrap parent objects before children — finalizers now run in LIFO order matching Node.js (@dylan-conway)
  • Fixed: node:dgram addSourceSpecificMembership and dropSourceSpecificMembership socket options were inverted for both IPv4 and IPv6
  • Fixed: os.freemem() on Linux returning significantly lower values than Node.js by reading MemAvailable from /proc/meminfo instead of using sysinfo.freeram, which excludes reclaimable page cache
  • Fixed: Rare fs.watch() deadlock
  • Fixed: Rare path.win32.resolve crash
  • Fixed: Rare fs.writeSync crash on Windows
  • Fixed: a race condition in root certificate initialization involving workers
  • Fixed: stat/lstat/fstat/fstatat/statfs syscalls on macOS could surface spurious EINTR errors to JavaScript (e.g. when accessing iCloud/FileProvider directories or autofs mounts) instead of automatically retrying (@dylan-conway)
  • Fixed: export { "a b c" } from './b.mjs' re-export clauses with string-literal names (containing spaces or other non-identifier characters) produced invalid output during single-file transpilation, causing a SyntaxError at runtime
  • Fixed: Error.captureStackTrace() crashing when Error.stackTraceLimit is set to a non-numeric value (e.g. "foo") or deleted
  • Fixed: preventDefault() not working on a non-passive event listener that follows a passive listener on the same event (@Jarred-Sumner)
  • Fixed: [Clamp] integer conversion in WebIDL bindings now rounds half-to-even per spec (@Jarred-Sumner)
  • Fixed: Data races in BroadcastChannel when using workers that could cause crashes during channel registration/unregistration and worker termination.

Bun APIs

  • Fixed: Bun.inspect() and console.log() showing [class Function] instead of the real class name for DOM and node:stream/web constructors like ReadableStreamBYOBReader, URL, Request, Response, Blob, and Event. (@Jarred-Sumner)
  • Fixed: server.stop() hanging indefinitely in certain cases
  • Fixed: File descriptor leaks in static file routes in certain cases
  • Fixed: Error handler returning a file-backed Response being silently dropped
  • Fixed: Duplicate Content-Length headers on file responses in certain cases
  • Fixed: Bun.SQL MySQL adapter hanging indefinitely on multi-statement queries against ManticoreSearch's MySQL interface, a regression introduced in Bun v1.2.12.
  • Fixed: crash in Bun.YAML.parse in rare cases
  • Fixed: crash when reading sliced non-regular file blobs
  • Fixed: panic when fstat reports an extremely large file size during Bun.file() reads, and improved error propagation for out-of-memory conditions in the same code path
  • Fixed: hypothetical crash in Bun.RedisClient when the client was garbage collected after a command threw during argument validation
  • Fixed: Bun.pathToFileURL crash on Windows in rare cases
  • Fixed: MKADDRESSBOOK CardDAV HTTP method being silently dropped by Bun.serve and rewritten to GET by fetch(). The method is now recognized across all HTTP handling paths.
  • Fixed: AbortSignal memory leak in WebSocket upgrade request when using the Bun.serve({ fetch, websocket }) catch-all path
  • Fixed: ~1 KB memory leak per --hot reload cycle
  • Fixed: memory leak in Glob.scan() and Glob.scanSync()

Web APIs

  • Fixed: calling controller.abort() on a fetch() queued behind the maximum simultaneous requests limit left the promise pending forever. Aborted queued requests now reject immediately with AbortError without consuming a connection slot. (@Jarred-Sumner)
  • Fixed: memory leak in AbortSignal.timeout() in certain cases
  • Fixed: fetch() hanging forever when using tls: { checkServerIdentity } in certain error cases
  • Fixed: Rare crash when aborting a fetch() request
  • Fixed: fetch() with HTTP_PROXY incorrectly injected :80 or :443 into the proxy request URI for URLs without an explicit port, breaking strict proxies like Charles, mitmproxy, and corporate middleboxes. Now matches curl and Node.js behavior per RFC 7230 §5.3.2.
  • Fixed: Rare crash when calling ws.close() on a wss:// WebSocket connecting through an HTTP CONNECT proxy during the TLS handshake (@dylan-conway)
  • Fixed: Tiny memory leak affecting WebSocket client send buffers and other internal queues that fully drain between writes
  • Fixed: Hypothetical crash when resolving a promise
  • Fixed: Hypothetical crash when converting records with numeric-index string keys, and resizable/growable-shared ArrayBuffer or TypedArray in certain web apis

bun install

  • Fixed: bunx @anthropic-ai/claude-code failing with "could not determine executable to run"

JavaScript bundler

  • Fixed: bun build --compile producing broken binaries in certain cases on macOS ARM64 since Bun v1.3.12
  • Fixed: HTML import etag not updating when referenced JS/CSS chunks changed, causing browsers to 304-cache stale HTML that pointed at old chunk filenames — resulting in blank pages until hard refresh (@dylan-conway)
  • Fixed: bun build --compile producing broken executables on NixOS/Guix hosts — the interpreter path rewrite is now skipped when running on a Nix/Guix-managed host.
  • Fixed: Crash in Bun.build when a bundler plugin's onResolve handler races with a failed sibling import in the same file. Now Bun.build correctly throws with the resolve errors.
  • Fixed: a crash caused by the resolver attempting to auto-install invalid npm package names (e.g. strings containing spaces, newlines, or braces) passed to mock.module(), Bun.resolveSync(), import(), or require.resolve()

CSS Parser

  • Fixed: CSS bundler incorrectly stripping top-level @layer ordering declarations (e.g. @layer theme, base, components, utilities;) when bundling, which broke Tailwind CSS and other cascade layer-dependent stylesheets. This was a regression from v1.3.0.

bun test

  • Fixed: toMatchSnapshot() failing with --rerun-each, retry, or repeats because the snapshot counter was not reset between iterations, causing subsequent runs to look for non-existent snapshot keys (e.g. "test name 2" instead of "test name 1") and erroring with "Snapshot creation is disabled" in CI (@chrislloyd)

Windows

  • Fixed: a race condition in the runtime transpiler that could cause a crash on Windows when multiple threads were transpiling concurrently (@dylan-conway)
  • Fixed: crash on Windows caused by NaN/+inf values reaching the internal timer when the GC scheduler computed degenerate delay values (@dylan-conway)
  • Fixed: Rare crash in readFile on Windows when reading files larger than 2GB
  • Fixed: Rare crash in fs.readdir on Windows when a third-party filesystem or filter driver (e.g. network redirector, virtual FS, or AV minifilter) returned a malformed directory entry.
  • Fixed: resource exhaustion in libpas that could cause crashes in long-running processes on Windows, pthread_once implementation issues, and a missing mprotect gate in memory decommit. (@Jarred-Sumner)

JavaScript engine

  • Fixed: a RegExp correctness issue where exec/test didn't reload the internal RegExp after ToLength(lastIndex) coercion (@sosukesuzuki)
  • Fixed: import { "*" as x } not being treated as a namespace import (@sosukesuzuki)
  • Fixed: TypedArray iterator .next() behavior with detached buffers (test262 compliance) (@sosukesuzuki)
  • Fixed: Date.toLocaleString() crash when called from a Worker (@sosukesuzuki)
  • Fixed: multiple YARR (RegExp engine) backtracking and capture bugs (@sosukesuzuki)
  • Fixed: class instance field eval context not propagating through arrow functions and nested scopes (@sosukesuzuki)
  • Fixed: DFG constant folding crash and isWithinPowerOfTwo producing unsound results for BitAnd with negative masks (@sosukesuzuki)

Internal / Runtime

  • Fixed: pausing at a breakpoint or debugger; statement pegged one CPU core at 100% for the entire pause. The paused thread now sleeps until the debugger sends a message. Closing the debugger's WebSocket while paused also now correctly resumes the program.
  • Fixed: a deadlock between the memory allocator and Bun's thread pool that could permanently freeze long-running compiled executables on macOS arm64 after sustained async I/O. (@Jarred-Sumner)
  • Fixed: 8 memory-accounting bugs in libpas
  • Fixed: several potential crashes where early-exit control flow could leave internal state partially initialized — affecting HTTP decompression, Request/Response body handling, Bun.build plugin errors, and named pipe listeners on Windows.
  • Fixed: a potential crash in the socket layer where dereferencing a context's loop pointer after unlinking could fail if the context was freed during the unlink (@cirospaciari)
  • Fixed: Hypothetical crash involving the event loop timer sweep iterator
  • Improved: HTTP chunked encoding parser hardening