Bun.randomUUIDv7() returns a UUID v7, which is monotonic and suitable for sorting and databases.
import { randomUUIDv7 } from 'bun';

const id = randomUUIDv7();
// => "0192ce11-26d5-7dc3-9305-1426de888c5a"
A UUID v7 is a 128-bit value that encodes the current timestamp, a random value, and a counter. The timestamp is encoded in the first (most significant) 48 bits, and the random value and counter are encoded in the remaining bits.

The timestamp parameter defaults to the current time in milliseconds. When the timestamp changes, the counter is reset to a pseudo-random integer wrapped to 4096 (i.e. in the range 0–4095). This counter is atomic and threadsafe, meaning that calling Bun.randomUUIDv7() from many Workers within the same process at the same timestamp will not produce colliding counter values.

The final 8 bytes of the UUID are a cryptographically secure random value. It uses the same random number generator as crypto.randomUUID() (which comes from BoringSSL, which in turn uses the platform-specific system random number generator, usually provided by the underlying hardware).
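Because the timestamp occupies the most significant bits, IDs generated later sort after IDs generated earlier. A minimal sketch of this, using the optional encoding and timestamp parameters declared below:

import { randomUUIDv7 } from 'bun';

// Pass an explicit timestamp (in milliseconds) as the second argument.
const earlier = randomUUIDv7('hex', Date.now() - 1000);
const later = randomUUIDv7('hex', Date.now());

// Lexicographic order follows creation order because the timestamp
// is encoded at the start of the UUID.
console.log(earlier < later); // => true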
namespace Bun {
  function randomUUIDv7(
    encoding?: 'hex' | 'base64' | 'base64url' = 'hex',
    timestamp?: number = Date.now(),
  ): string;

  /**
   * If you pass "buffer", you get a 16-byte buffer instead of a string.
   */
  function randomUUIDv7(encoding: 'buffer', timestamp?: number = Date.now()): Buffer;

  // If you only pass a timestamp, you get a hex string
  function randomUUIDv7(timestamp?: number = Date.now()): string;
}
You can optionally set encoding to "buffer" to get a 16-byte buffer instead of a string. This can sometimes avoid string conversion overhead.
buffer.ts
const buffer = Bun.randomUUIDv7('buffer');
base64 and base64url encodings are also supported when you want a slightly shorter string.
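For example, a minimal sketch following the signature above (the exact output length is not asserted here):

const short = Bun.randomUUIDv7('base64url');
// => a shorter string than the 36-character hex form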
Bun.peek(prom: Promise)

Reads a promise’s result without await or .then, but only if the promise has already fulfilled or rejected.
import { peek } from 'bun';

const promise = Promise.resolve('hi');

// no await!
const result = peek(promise);

console.log(result); // "hi"
This is important when attempting to reduce the number of extraneous microticks in performance-sensitive code. It’s an advanced API and you probably shouldn’t use it unless you know what you’re doing.
import { peek } from 'bun';
import { expect, test } from 'bun:test';

test('peek', () => {
  const promise = Promise.resolve(true);

  // no await necessary!
  expect(peek(promise)).toBe(true);

  // if we peek again, it returns the same value
  const again = peek(promise);
  expect(again).toBe(true);

  // if we peek a non-promise, it returns the value
  const value = peek(42);
  expect(value).toBe(42);

  // if we peek a pending promise, it returns the promise again
  const pending = new Promise(() => {});
  expect(peek(pending)).toBe(pending);

  // If we peek a rejected promise, it:
  // - returns the error
  // - does not mark the promise as handled
  const rejected = Promise.reject(new Error('Successfully tested promise rejection'));
  expect(peek(rejected).message).toBe('Successfully tested promise rejection');
});
The peek.status function lets you read the status of a promise without resolving it.
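A minimal sketch, assuming peek.status returns the strings 'pending', 'fulfilled', or 'rejected':

import { peek } from 'bun';
import { expect, test } from 'bun:test';

test('peek.status', () => {
  const promise = Promise.resolve(true);
  expect(peek.status(promise)).toBe('fulfilled');

  const pending = new Promise(() => {});
  expect(peek.status(pending)).toBe('pending');

  const rejected = Promise.reject(new Error('oh no'));
  expect(peek.status(rejected)).toBe('rejected');
});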
Bun.escapeHTML(value: string | object | number | boolean): string

Escapes the following characters from an input string:

" becomes &quot;
& becomes &amp;
' becomes &#x27;
< becomes &lt;
> becomes &gt;
This function is optimized for large input. On an M1X, it processes between 480 MB/s and 20 GB/s, depending on how much data is being escaped and whether the text contains non-ASCII characters. Non-string types are converted to a string before escaping.
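For example, a minimal sketch based on the escaping rules above:

Bun.escapeHTML("<script>alert('hello')</script>");
// => "&lt;script&gt;alert(&#x27;hello&#x27;)&lt;/script&gt;"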
Bun.stringWidth measures the number of columns a string occupies when rendered in a terminal. It is useful for:

Quickly checking if a string contains ANSI escape codes
Measuring the width of a string in a terminal
This API is designed to match the popular “string-width” package, so that existing code can be easily ported to Bun and vice versa. In this benchmark, Bun.stringWidth is ~6,756x faster than the string-width npm package for input larger than about 500 characters. Big thanks to sindresorhus for their work on string-width!
To make Bun.stringWidth fast, we’ve implemented it in Zig using optimized SIMD instructions, accounting for Latin1, UTF-16, and UTF-8 encodings. It passes string-width’s tests.
As a reminder, 1 nanosecond (ns) is 1 billionth of a second. Here’s a quick reference for converting between units:
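For reference:

1 microsecond (µs) = 1,000 ns
1 millisecond (ms) = 1,000,000 ns
1 second (s) = 1,000,000,000 ns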
namespace Bun {
  export function stringWidth(
    /**
     * The string to measure
     */
    input: string,
    options?: {
      /**
       * If `true`, count ANSI escape codes as part of the string width. If `false`, ANSI escape codes are ignored when calculating the string width.
       *
       * @default false
       */
      countAnsiEscapeCodes?: boolean;
      /**
       * When it's ambiguous and `true`, count emoji as 1 character wide. If `false`, emoji are counted as 2 characters wide.
       *
       * @default true
       */
      ambiguousIsNarrow?: boolean;
    },
  ): number;
}
Optionally, pass a parameters object as the second argument:
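For example, a minimal sketch following the declaration above (the last result is left unspecified because it depends on how the escape characters themselves are counted):

Bun.stringWidth('hello');
// => 5

// ANSI escape codes are ignored by default
Bun.stringWidth('\u001b[31mhello\u001b[39m');
// => 5

// ...unless countAnsiEscapeCodes is set
Bun.stringWidth('\u001b[31mhello\u001b[39m', { countAnsiEscapeCodes: true });
// => larger than 5, since the escape sequences now count toward the width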
zlib compression options
export type ZlibCompressionOptions = {
  /**
   * The compression level to use. Must be between `-1` and `9`.
   * - A value of `-1` uses the default compression level (Currently `6`)
   * - A value of `0` gives no compression
   * - A value of `1` gives least compression, fastest speed
   * - A value of `9` gives best compression, slowest speed
   */
  level?: -1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9;
  /**
   * How much memory should be allocated for the internal compression state.
   *
   * A value of `1` uses minimum memory but is slow and reduces compression ratio.
   *
   * A value of `9` uses maximum memory for optimal speed. The default is `8`.
   */
  memLevel?: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9;
  /**
   * The base 2 logarithm of the window size (the size of the history buffer).
   *
   * Larger values of this parameter result in better compression at the expense of memory usage.
   *
   * The following value ranges are supported:
   * - `9..15`: The output will have a zlib header and footer (Deflate)
   * - `-9..-15`: The output will **not** have a zlib header or footer (Raw Deflate)
   * - `25..31` (16+`9..15`): The output will have a gzip header and footer (gzip)
   *
   * The gzip header will have no file name, no extra data, no comment, no modification time (set to zero) and no header CRC.
   */
  windowBits?:
    | -9 | -10 | -11 | -12 | -13 | -14 | -15
    | 9 | 10 | 11 | 12 | 13 | 14 | 15
    | 25 | 26 | 27 | 28 | 29 | 30 | 31;
  /**
   * Tunes the compression algorithm.
   *
   * - `Z_DEFAULT_STRATEGY`: For normal data **(Default)**
   * - `Z_FILTERED`: For data produced by a filter or predictor
   * - `Z_HUFFMAN_ONLY`: Force Huffman encoding only (no string match)
   * - `Z_RLE`: Limit match distances to one (run-length encoding)
   * - `Z_FIXED` prevents the use of dynamic Huffman codes
   *
   * `Z_RLE` is designed to be almost as fast as `Z_HUFFMAN_ONLY`, but give better compression for PNG image data.
   *
   * `Z_FILTERED` forces more Huffman coding and less string matching, it is
   * somewhat intermediate between `Z_DEFAULT_STRATEGY` and `Z_HUFFMAN_ONLY`.
   * Filtered data consists mostly of small values with a somewhat random distribution.
   */
  strategy?: number;
};
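These options are accepted by Bun's synchronous zlib helpers (not shown in this section), such as Bun.gzipSync and Bun.gunzipSync. A minimal sketch, with illustrative input data:

const data = new TextEncoder().encode('hello '.repeat(1000));

// level: 9 trades speed for the best compression ratio
const compressed = Bun.gzipSync(data, { level: 9 });

const decompressed = Bun.gunzipSync(compressed);
console.log(decompressed.length === data.length); // => true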
Bun.inspect.custom is the symbol that Bun uses to implement Bun.inspect. You can override it to customize how your objects are printed. It is identical to util.inspect.custom in Node.js.
class Foo {
  [Bun.inspect.custom]() {
    return 'foo';
  }
}

const foo = new Foo();
console.log(foo); // => "foo"
Bun implements a set of convenience functions for asynchronously consuming the body of a ReadableStream and converting it to various binary formats.
const stream = (await fetch('https://bun.com')).body;
stream; // => ReadableStream

await Bun.readableStreamToArrayBuffer(stream);
// => ArrayBuffer

await Bun.readableStreamToBytes(stream);
// => Uint8Array

await Bun.readableStreamToBlob(stream);
// => Blob

await Bun.readableStreamToJSON(stream);
// => object

await Bun.readableStreamToText(stream);
// => string

// returns all chunks as an array
await Bun.readableStreamToArray(stream);
// => unknown[]

// returns all chunks as a FormData object (encoded as x-www-form-urlencoded)
await Bun.readableStreamToFormData(stream);

// returns all chunks as a FormData object (encoded as multipart/form-data)
await Bun.readableStreamToFormData(stream, multipartFormBoundary);
Bun.resolveSync() resolves a file path or module specifier using Bun’s internal module resolution algorithm. The first argument is the path to resolve, and the second argument is the “root” to resolve from. If no match is found, an Error is thrown.
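A minimal sketch (the paths and package name below are illustrative):

Bun.resolveSync('./logger.ts', '/path/to/project');
// => "/path/to/project/logger.ts" (throws if no matching file exists)

// resolve a package specifier from node_modules
Bun.resolveSync('my-pkg', '/path/to/project');
// => an absolute path inside /path/to/project/node_modules/my-pkg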
The estimateShallowMemoryUsageOf function returns a best-effort estimate of the memory usage of an object in bytes, excluding the memory usage of properties or other objects it references. For accurate per-object memory usage, use Bun.generateHeapSnapshot.
import { estimateShallowMemoryUsageOf } from 'bun:jsc';

const obj = { foo: 'bar' };
const usage = estimateShallowMemoryUsageOf(obj);
console.log(usage); // => 16

const buffer = Buffer.alloc(1024 * 1024);
estimateShallowMemoryUsageOf(buffer);
// => 1048624

const req = new Request('https://bun.com');
estimateShallowMemoryUsageOf(req);
// => 167

const array = Array(1024).fill({ a: 1 });
// Arrays are usually not stored contiguously in memory, so this will not return a useful value (which isn't a bug).
estimateShallowMemoryUsageOf(array);
// => 16