The Bun shell is a powerful tool for running shell commands.
module 'bun'
The 'bun' module is where most of Bun's APIs are located. You can import all of the values and types in this module with an import statement, or by referencing the Bun global namespace.
const result = await $`echo "Hello, world!"`.text(); console.log(result); // "Hello, world!"
namespace $
class ShellError
ShellError represents an error that occurred while executing a shell command with the Bun Shell.
try { const result = await $`exit 1`; } catch (error) { if (error instanceof $.ShellError) { console.log(error.exitCode); // 1 } }
- static stackTraceLimit: number
The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
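The effect is easy to see by capturing the same trace under two different limits; a minimal sketch (runnable in Bun or Node):

```typescript
function inner() { return new Error("demo"); }
function middle() { return inner(); }
function outer() { return middle(); }

// Capture a trace keeping only 1 frame...
Error.stackTraceLimit = 1;
const short = outer().stack ?? "";

// ...then with the default of 10 frames.
Error.stackTraceLimit = 10;
const long = outer().stack ?? "";

console.log(short.length < long.length); // true
```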
Read from stdout as an ArrayBuffer
@returns Stdout as an ArrayBuffer
const output = await $`echo hello`; console.log(output.arrayBuffer()); // ArrayBuffer { byteLength: 6 }
Read from stdout as a Uint8Array
@returns Stdout as a Uint8Array
const output = await $`echo hello`; console.log(output.bytes()); // Uint8Array { byteLength: 6 }
Read from stdout as a JSON object
@returns Stdout as a JSON object
const output = await $`echo '{"hello": 123}'`; console.log(output.json()); // { hello: 123 }
- @param encoding
The encoding to use when decoding the output
@returns Stdout as a string with the given encoding
Read as UTF-8 string
const output = await $`echo hello`; console.log(output.text()); // "hello\n"
Read as base64 string
const output = await $`echo hello`; console.log(output.text("base64")); // "aGVsbG8K"
- targetObject: object,constructorOpt?: Function): void;
Create .stack property on a target object
class ShellPromise
The `Bun.$.ShellPromise` class represents a shell command that gets executed once awaited, or when called with `.text()`, `.json()`, etc.
const myShellPromise = $`echo "Hello, world!"`; const result = await myShellPromise.text(); console.log(result); // "Hello, world!"
Read from stdout as an ArrayBuffer
Automatically calls quiet
@returns A promise that resolves with stdout as an ArrayBuffer
const output = await $`echo hello`.arrayBuffer(); console.log(output); // ArrayBuffer { byteLength: 6 }
- onrejected?: null | (reason: any) => TResult | PromiseLike<TResult>
Attaches a callback for only the rejection of the Promise.
@param onrejected The callback to execute when the Promise is rejected.
@returns A Promise for the completion of the callback.
- @param newCwd
The new working directory
- env(newEnv: undefined | Dict<string> | Record<string, undefined | string>): this;
Set environment variables for the shell.
@param newEnv The new environment variables
const { stdout } = await $`echo $FOO`.env({ ...process.env, FOO: "LOL!" }); console.log(stdout.toString()); // "LOL!\n"
- onfinally?: null | () => void
Attaches a callback that is invoked when the Promise is settled (fulfilled or rejected). The resolved value cannot be modified from the callback.
@param onfinally The callback to execute when the Promise is settled (fulfilled or rejected).
@returns A Promise for the completion of the callback.
Read from stdout as a JSON object
Automatically calls quiet
@returns A promise that resolves with stdout as a JSON object
const output = await $`echo '{"hello": 123}'`.json(); console.log(output); // { hello: 123 }
Read from stdout as a string, line by line
Automatically calls quiet to disable echoing to stdout.
Configure the shell to not throw an exception on non-zero exit codes. Throwing can be re-enabled with `.throws(true)`.
By default, the shell will throw an exception on commands which return non-zero exit codes.
By default, the shell will write to the current process's stdout and stderr, as well as buffering that output.
This configures the shell to only buffer the output.
- text(encoding?: BufferEncoding): Promise<string>;
Read from stdout as a string.
Automatically calls quiet to disable echoing to stdout.
@param encoding The encoding to use when decoding the output
@returns A promise that resolves with stdout as a string
Read as UTF-8 string
const output = await $`echo hello`.text(); console.log(output); // "hello\n"
Read as base64 string
const output = await $`echo hello`.text("base64"); console.log(output); // "aGVsbG8K"
- onrejected?: null | (reason: any) => TResult2 | PromiseLike<TResult2>): Promise<TResult1 | TResult2>;
Attaches callbacks for the resolution and/or rejection of the Promise.
@param onfulfilled The callback to execute when the Promise is resolved.
@param onrejected The callback to execute when the Promise is rejected.
@returns A Promise for the completion of whichever callback is executed.
- shouldThrow: boolean): this;
Configure whether or not the shell should throw an exception on non-zero exit codes.
By default, this is configured to `true`.
- values: T): Promise<{ -readonly [P in keyof T]: Awaited<T[P]> }>;
Creates a Promise that is resolved with an array of results when all of the provided Promises resolve, or rejected when any Promise is rejected.
@param values An array of Promises.
@returns A new Promise.
- values: Iterable<T | PromiseLike<T>>): Promise<PromiseSettledResult<Awaited<T>>[]>;
Creates a Promise that is resolved with an array of results when all of the provided Promises resolve or reject.
@param values An array of Promises.
@returns A new Promise.
- values: T): Promise<Awaited<T[number]>>;
The any function returns a promise that is fulfilled by the first given promise to be fulfilled, or rejected with an AggregateError containing an array of rejection reasons if all of the given promises are rejected. It resolves all elements of the passed iterable to promises as it runs this algorithm.
@param values An array or iterable of Promises.
@returns A new Promise.
- values: T): Promise<Awaited<T[number]>>;
Creates a Promise that is resolved or rejected when any of the provided Promises are resolved or rejected.
@param values An array of Promises.
@returns A new Promise.
- reason?: any): Promise<T>;
Creates a new rejected promise for the provided reason.
@param reason The reason the promise was rejected.
@returns A new rejected Promise.
- value: T): Promise<Awaited<T>>;
Creates a new resolved promise for the provided value.
@param value A promise.
@returns A promise whose internal state matches the provided promise.
- fn: (...args: A) => T | PromiseLike<T>,...args: A): Promise<T>;
Try to run a function and return the result. If the function throws, return the result of the `catch` function.
@param fn The function to run
@param args The arguments to pass to the function. This is similar to `setTimeout` and avoids the extra closure.
@returns The result of the function or the result of the `catch` function
- static withResolvers<T>(): { promise: Promise<T>; reject: (reason?: any) => void; resolve: (value?: T | PromiseLike<T>) => void };
Create a deferred promise, with exposed `resolve` and `reject` methods which can be called separately.
This is useful when you want to return a Promise and have code outside the Promise resolve or reject it.
const { promise, resolve, reject } = Promise.withResolvers(); setTimeout(() => { resolve("Hello world!"); }, 1000); await promise; // "Hello world!"
interface ShellOutput
Read from stdout as an ArrayBuffer
@returns Stdout as an ArrayBuffer
const output = await $`echo hello`; console.log(output.arrayBuffer()); // ArrayBuffer { byteLength: 6 }
Read from stdout as a Uint8Array
@returns Stdout as a Uint8Array
const output = await $`echo hello`; console.log(output.bytes()); // Uint8Array { byteLength: 6 }
Read from stdout as a JSON object
@returns Stdout as a JSON object
const output = await $`echo '{"hello": 123}'`; console.log(output.json()); // { hello: 123 }
- @param encoding
The encoding to use when decoding the output
@returns Stdout as a string with the given encoding
Read as UTF-8 string
const output = await $`echo hello`; console.log(output.text()); // "hello\n"
Read as base64 string
const output = await $`echo hello`; console.log(output.text("base64")); // "aGVsbG8K"
- @param pattern
Brace pattern to expand
const result = $.braces('index.{js,jsx,ts,tsx}'); console.log(result) // ['index.js', 'index.jsx', 'index.ts', 'index.tsx']
- newEnv?: Dict<string> | Record<string, undefined | string>
Change the default environment variables for shells created by this instance.
@param newEnv Default environment variables to use for shells created by this instance.
import {$} from 'bun'; $.env({ BUN: "bun" }); await $`echo $BUN`; // "bun"
namespace CSRF
Generate and verify CSRF tokens
- @param secret
The secret to use for the token. If not provided, a random default secret will be generated in memory and used.
@param options The options for the token.
@returns The generated token.
- @param token
The token to verify.
@param options The options for the token.
@returns True if the token is valid, false otherwise.
namespace dns
DNS Related APIs
- function getCacheStats(): { cacheHitsCompleted: number; cacheHitsInflight: number; cacheMisses: number; errors: number; size: number; totalCount: number };
Experimental API
- hostname: string,options?: { backend: 'system' | 'libc' | 'c-ares' | 'getaddrinfo'; family: 0 | 4 | 6 | 'IPv4' | 'IPv6' | 'any'; flags: number; port: number; socketType: 'udp' | 'tcp' }
Lookup the IP address for a hostname
Uses non-blocking APIs by default
@param hostname The hostname to lookup
@param options Options for the lookup
Basic usage
const [{ address }] = await Bun.dns.lookup('example.com');
Filter results to IPv4
import { dns } from 'bun'; const [{ address }] = await dns.lookup('example.com', {family: 4}); console.log(address); // "123.122.22.126"
Filter results to IPv6
import { dns } from 'bun'; const [{ address }] = await dns.lookup('example.com', {family: 6}); console.log(address); // "2001:db8::1"
DNS resolver client
Bun supports three DNS resolvers:
- `c-ares` - Uses the c-ares library to perform DNS resolution. This is the default on Linux.
- `system` - Uses the system's non-blocking DNS resolver API if available, falling back to `getaddrinfo`. This is the default on macOS and the same as `getaddrinfo` on Linux.
- `getaddrinfo` - Uses the POSIX-standard `getaddrinfo` function. Will cause performance issues under concurrent loads.
To customize the DNS resolver, pass a `backend` option to `dns.lookup`:
import { dns } from 'bun'; const [{ address }] = await dns.lookup('example.com', {backend: 'getaddrinfo'}); console.log(address); // "19.42.52.62"
- hostname: string,port?: number): void;
Experimental API
Prefetch a hostname.
This will be used by fetch() and Bun.connect() to avoid DNS lookups.
@param hostname The hostname to prefetch
@param port The port to prefetch. Default is 443. Port helps distinguish between IPv6 vs IPv4-only connections.
import { dns } from 'bun'; dns.prefetch('example.com'); // ... something expensive await fetch('https://example.com');
- arg: any,): string;
Pretty-print an object the same as console.log to a `string`
Supports JSX
@param arg The value to inspect
@param options Options for the inspection
namespace inspect
A namespace that can be used to declare custom inspect functions.
- tabularData: object | unknown[],properties?: string[],options?: { colors: boolean }): string;
Pretty-print an object or array as a table
Like console.table, except it returns a string
- tabularData: object | unknown[], options?: { colors: boolean }): string;
Pretty-print an object or array as a table
Like console.table, except it returns a string
namespace semver
Bun.semver provides a fast way to parse and compare version numbers.
- ): -1 | 0 | 1;
Returns 0 if the versions are equal, 1 if `v1` is greater, or -1 if `v2` is greater. Throws an error if either version is invalid.
- ): boolean;
Test if the version satisfies the range. Stringifies both arguments. Returns `true` or `false`.
namespace SQL
class MySQLError
- static stackTraceLimit: number
The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
- targetObject: object,constructorOpt?: Function): void;
Create .stack property on a target object
class PostgresError
- static stackTraceLimit: number
The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
- targetObject: object,constructorOpt?: Function): void;
Create .stack property on a target object
class SQLError
- static stackTraceLimit: number
The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
- targetObject: object,constructorOpt?: Function): void;
Create .stack property on a target object
class SQLiteError
- static stackTraceLimit: number
The `Error.stackTraceLimit` property specifies the number of stack frames collected by a stack trace (whether generated by `new Error().stack` or `Error.captureStackTrace(obj)`).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
- targetObject: object,constructorOpt?: Function): void;
Create .stack property on a target object
interface PostgresOrMySQLOptions
- bigint?: boolean
By default, values outside the i32 range are returned as strings. If this is true, values outside the i32 range are returned as BigInts.
interface Query<T>
Represents a SQL query that can be executed, with additional control methods. Extends Promise to allow for async/await usage.
- onrejected?: null | (reason: any) => TResult | PromiseLike<TResult>): Promise<T | TResult>;
Attaches a callback for only the rejection of the Promise.
@param onrejected The callback to execute when the Promise is rejected.
@returns A Promise for the completion of the callback.
- onfinally?: null | () => void): Promise<T>;
Attaches a callback that is invoked when the Promise is settled (fulfilled or rejected). The resolved value cannot be modified from the callback.
@param onfinally The callback to execute when the Promise is settled (fulfilled or rejected).
@returns A Promise for the completion of the callback.
- onfulfilled?: null | (value: T) => TResult1 | PromiseLike<TResult1>,onrejected?: null | (reason: any) => TResult2 | PromiseLike<TResult2>): Promise<TResult1 | TResult2>;
Attaches callbacks for the resolution and/or rejection of the Promise.
@param onfulfilled The callback to execute when the Promise is resolved.
@param onrejected The callback to execute when the Promise is rejected.
@returns A Promise for the completion of whichever callback is executed.
interface SQLiteOptions
Options for Database
- readonly?: boolean
Open the database as read-only (no write operations, no create).
Equivalent to constants.SQLITE_OPEN_READONLY
- safeIntegers?: boolean
When set to `true`, integers are returned as `bigint` types.
When set to `false`, integers are returned as `number` types and truncated to 52 bits.
- strict?: boolean
When set to `false` or `undefined`:
- Queries missing bound parameters will NOT throw an error
- Bound named parameters in JavaScript need to exactly match the SQL query.
const db = new Database(":memory:", { strict: false }); db.run("INSERT INTO foo (name) VALUES ($name)", { $name: "foo" });
When set to `true`:
- Queries missing bound parameters will throw an error
- Bound named parameters in JavaScript no longer need to be prefixed with `$`, `:`, or `@`. The SQL query will remain prefixed.
- type AwaitPromisesArray<T extends PromiseLike<any>[]> = { [K in keyof T]: Awaited<T[K]> }
- type ContextCallback<T, SQL> = (sql: SQL) => Bun.MaybePromise<T>
- type ContextCallbackResult<T> = T extends PromiseLike<any>[] ? AwaitPromisesArray<T> : Awaited<T>
- type Options = SQLiteOptions | PostgresOrMySQLOptions
Configuration options for SQL client connection and behavior
const config: Bun.SQL.Options = { host: 'localhost', port: 5432, user: 'dbuser', password: 'secretpass', database: 'myapp', idleTimeout: 30, max: 20, onconnect: (client) => { console.log('Connected to database'); } };
- type SavepointContextCallback<T> = ContextCallback<T, SavepointSQL>
Callback function type for savepoint contexts
- type TransactionContextCallback<T> = ContextCallback<T, TransactionSQL>
Callback function type for transaction contexts
namespace unsafe
- ): string;
Cast bytes to a `String` without copying. This is the fastest way to get a `String` from a `Uint8Array` or `ArrayBuffer`.
Only use this for ASCII strings. If there are non-ASCII characters, your application may crash and/or very confusing bugs will happen such as `"foo" !== "foo"`.
The input buffer must not be garbage collected. That means you will need to hold on to it for the duration of the string's lifetime.
- buffer: Uint16Array): string;
Cast bytes to a `String` without copying. This is the fastest way to get a `String` from a `Uint16Array`.
The input must be a UTF-16 encoded string. This API does no validation whatsoever.
The input buffer must not be garbage collected. That means you will need to hold on to it for the duration of the string's lifetime.
- level?: 0 | 1 | 2): 0 | 1 | 2;
Force the garbage collector to run extremely often, especially inside `bun:test`.
- `0`: default, disable
- `1`: asynchronously call the garbage collector more often
- `2`: synchronously call the garbage collector more often.
This is a global setting. It's useful for debugging seemingly random crashes.
The `BUN_GARBAGE_COLLECTOR_LEVEL` environment variable is also supported.
@returns The previous level
Dump the mimalloc heap to the console
namespace YAML
YAML related APIs
- @param input
The YAML string to parse
@returns A JavaScript value
import { YAML } from "bun"; console.log(YAML.parse("123")) // 123 console.log(YAML.parse("null")) // null console.log(YAML.parse("false")) // false console.log(YAML.parse("abc")) // "abc" console.log(YAML.parse("- abc")) // [ "abc" ] console.log(YAML.parse("abc: def")) // { "abc": "def" }
- input: unknown,replacer?: null,space?: string | number): string;
Convert a JavaScript value into a YAML string. Strings are double quoted if they contain keywords, non-printable or escaped characters, or if a YAML parser would parse them as numbers. Anchors and aliases are inferred from objects, allowing cycles.
@param input The JavaScript value to stringify.
@param replacer Currently not supported.
@param space A number for how many spaces each level of indentation gets, or a string used as indentation. Without this parameter, outputs flow-style (single-line) YAML. With this parameter, outputs block-style (multi-line) YAML. The number is clamped between 0 and 10, and the first 10 characters of the string are used.
@returns A string containing the YAML document.
import { YAML } from "bun"; const input = { abc: "def", num: 123 }; // Without space - flow style (single-line) console.log(YAML.stringify(input)); // {abc: def,num: 123} // With space - block style (multi-line) console.log(YAML.stringify(input, null, 2)); // abc: def // num: 123 const cycle = {}; cycle.obj = cycle; console.log(YAML.stringify(cycle, null, 2)); // &1 // obj: *1
class ArrayBufferSink
Fast incremental writer that becomes an ArrayBuffer on end().
Flush the internal buffer
If ArrayBufferSink.start was passed a `stream` option, this will return an `ArrayBuffer`.
If ArrayBufferSink.start was passed a `stream` option and `asUint8Array`, this will return a `Uint8Array`.
Otherwise, this will return the number of bytes written since the last flush.
This API might change later to separate Uint8ArraySink and ArrayBufferSink
class Cookie
A class for working with a single cookie
const cookie = new Bun.Cookie("name", "value"); console.log(cookie.toString()); // "name=value; Path=/; SameSite=Lax"
Whether the cookie is expired
Serialize the cookie to a string
const cookie = Bun.Cookie.from("session", "abc123", { domain: "example.com", path: "/", secure: true, httpOnly: true }).serialize(); // "session=abc123; Domain=example.com; Path=/; Secure; HttpOnly; SameSite=Lax"
Serialize the cookie to a JSON object
Serialize the cookie to a string
Alias of Cookie.serialize
- name: string,value: string,
Create a new cookie from a name and value and optional options
class CookieMap
A Map-like interface for working with collections of cookies.
Implements the `Iterable` interface, allowing use with `for...of` loops.
Returns the default iterator for the CookieMap. Used by for...of loops to iterate over all entries.
@returns An iterator for the entries in the map
- @param name
The name of the cookie to delete
@param options The options for the cookie to delete
- name: string): void;
Removes a cookie from the map.
@param name The name of the cookie to delete
@param options The options for the cookie to delete
Returns an iterator of [name, value] pairs for every cookie in the map.
@returns An iterator for the entries in the map
- @param name
The name of the cookie to retrieve
@returns The cookie value as a string, or null if the cookie doesn't exist
- @param name
The name of the cookie to check
@returns true if the cookie exists, false otherwise
Returns an iterator of all cookie names in the map.
@returns An iterator for the cookie names
- @param name
The name of the cookie
@param value The value of the cookie
@param options Optional cookie attributes
@param options Cookie options including name and value
Converts the cookie map to a serializable format.
@returns An array of name/value pairs
Gets an array of values for Set-Cookie headers in order to apply all changes to cookies.
@returns An array of values for Set-Cookie headers
Returns an iterator of all cookie values in the map.
@returns An iterator for the cookie values
class CryptoHasher
Hardware-accelerated cryptographic hash functions
Used for `crypto.createHash()`
- readonly static algorithms: SupportedCryptoAlgorithms[]
List of supported hash algorithms
These are hardware accelerated with BoringSSL
Perform a deep copy of the hasher
- ): string;
Finalize the hash. Resets the CryptoHasher so it can be reused.
@param encoding `DigestEncoding` to return the hash in. If none is provided, it will return a `Uint8Array`.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
Update the hash with data
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
- hashInto: TypedArray): TypedArray;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- ): string;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param encoding `DigestEncoding` to return the hash in
class CryptoHashInterface<T>
This class only exists in types
- @param encoding
`DigestEncoding` to return the hash in. If none is provided, it will return a `Uint8Array`.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- ): string;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param encoding `DigestEncoding` to return the hash in
class FileSystemRouter
class Glob
Match files using glob patterns.
The supported pattern syntax is:
- `?` Matches any single character.
- `*` Matches zero or more characters, except for path separators ('/' or '\').
- `**` Matches zero or more characters, including path separators. Must match a complete path segment, i.e. followed by a path separator or at the end of the pattern.
- `[ab]` Matches one of the characters contained in the brackets. Character ranges (e.g. "[a-z]") are also supported. Use "[!ab]" or "[^ab]" to match any character except those contained in the brackets.
- `{a,b}` Match one of the patterns contained in the braces. Any of the wildcards listed above can be used in the sub patterns. Braces may be nested up to 10 levels deep.
- `!` Negates the result when at the start of the pattern. Multiple "!" characters negate the pattern multiple times.
- `\` Used to escape any of the special characters above.
const glob = new Glob("*.{ts,tsx}"); const scannedFiles = await Array.fromAsync(glob.scan({ cwd: './src' }))
const glob = new Glob("*.{ts,tsx}"); expect(glob.match('foo.ts')).toBeTrue();
- scan(): AsyncIterableIterator<string>;
Scan a root directory recursively for files that match this glob pattern. Returns an async iterator.
const glob = new Glob("*.{ts,tsx}"); const scannedFiles = await Array.fromAsync(glob.scan({ cwd: './src' }))
- ): IterableIterator<string>;
Synchronously scan a root directory recursively for files that match this glob pattern. Returns an iterator.
const glob = new Glob("*.{ts,tsx}"); const scannedFiles = Array.from(glob.scan({ cwd: './src' }))
class MD4
This class only exists in types
- @param encoding
`DigestEncoding` to return the hash in. If none is provided, it will return a `Uint8Array`.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- ): string;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param encoding `DigestEncoding` to return the hash in
class MD5
This class only exists in types
- @param encoding
`DigestEncoding` to return the hash in. If none is provided, it will return a `Uint8Array`.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param hashInto `TypedArray` to write the hash into. Faster than creating a new one each time
- ): string;
Run the hash over the given data
@param input `string`, `Uint8Array`, or `ArrayBuffer` to hash. `Uint8Array` or `ArrayBuffer` is faster.
@param encoding `DigestEncoding` to return the hash in
class RedisClient
- onclose: null | (this: RedisClient, error: Error) => void
Callback fired when the client disconnects from the Redis server
- onconnect: null | (this: RedisClient) => void
Callback fired when the client connects to the Redis server
- from: 'LEFT' | 'RIGHT',to: 'LEFT' | 'RIGHT',timeout: number): Promise<null | string>;
Blocking move from one list to another
Atomically moves an element from source to destination list, blocking until an element is available or the timeout expires. Allows specifying which end to pop from (LEFT/RIGHT) and which end to push to (LEFT/RIGHT).
@param source Source list key
@param destination Destination list key
@param from Direction to pop from source: "LEFT" or "RIGHT"
@param to Direction to push to destination: "LEFT" or "RIGHT"
@param timeout Timeout in seconds (can be fractional, 0 = block indefinitely)
@returns Promise that resolves with the moved element or null on timeout
// Move from right of source to left of destination (like BRPOPLPUSH) const element = await redis.blmove("mylist", "otherlist", "RIGHT", "LEFT", 1.0); if (element) { console.log(`Moved element: ${element}`); } // Move from left to left await redis.blmove("list1", "list2", "LEFT", "LEFT", 0.5);
- timeout: number,numkeys: number,...args: (string | number)[]): Promise<null | [string, string[]]>;
Blocking pop multiple elements from lists
Blocks until an element is available from one of the specified lists or the timeout expires. Can pop from the LEFT or RIGHT end and optionally pop multiple elements at once using COUNT.
@param timeout Timeout in seconds (can be fractional, 0 = block indefinitely)
@param numkeys Number of keys that follow
@param args Keys, direction ("LEFT" or "RIGHT"), and optional COUNT modifier
@returns Promise that resolves with [key, [elements]] or null on timeout
// Pop from left end of first available list, wait 1 second const result = await redis.blmpop(1.0, 2, "list1", "list2", "LEFT"); if (result) { const [key, elements] = result; console.log(`Popped from ${key}: ${elements.join(", ")}`); } // Pop 3 elements from right end const result2 = await redis.blmpop(0.5, 1, "mylist", "RIGHT", "COUNT", 3); // Returns: ["mylist", ["elem1", "elem2", "elem3"]] or null if timeout
- ): Promise<null | [string, string]>;
Blocking pop from head of one or more lists
Blocks until an element is available in one of the lists or the timeout expires. Checks keys in order and pops from the first non-empty list.
@param args Keys followed by timeout in seconds (can be fractional, 0 = block indefinitely)
@returns Promise that resolves with [key, element] or null on timeout
// Block for up to 1 second const result = await redis.blpop("mylist", 1.0); if (result) { const [key, element] = result; console.log(`Popped ${element} from ${key}`); } // Block indefinitely (timeout = 0) const result2 = await redis.blpop("list1", "list2", 0);
- ): Promise<null | [string, string]>;
Blocking pop from tail of one or more lists
Blocks until an element is available in one of the lists or the timeout expires. Checks keys in order and pops from the first non-empty list.
@param args Keys followed by timeout in seconds (can be fractional, 0 = block indefinitely)
@returns Promise that resolves with [key, element] or null on timeout
// Block for up to 1 second const result = await redis.brpop("mylist", 1.0); if (result) { const [key, element] = result; console.log(`Popped ${element} from ${key}`); } // Block indefinitely (timeout = 0) const result2 = await redis.brpop("list1", "list2", 0);
- timeout: number): Promise<null | string>;
Blocking right pop from source and left push to destination
Atomically pops an element from the tail of source list and pushes it to the head of destination list, blocking until an element is available or the timeout expires. This is the blocking version of RPOPLPUSH.
@param source Source list key
@param destination Destination list key
@param timeout Timeout in seconds (can be fractional, 0 = block indefinitely)
@returns Promise that resolves with the moved element or null on timeout
// Block for up to 1 second const element = await redis.brpoplpush("tasks", "processing", 1.0); if (element) { console.log(`Processing task: ${element}`); } else { console.log("No tasks available"); } // Block indefinitely (timeout = 0) const task = await redis.brpoplpush("queue", "active", 0);
- timeout: number,numkeys: number,...args: (string | number)[]): Promise<null | [string, [string, number][]]>;
Blocking version of ZMPOP. Blocks until a member is available or timeout expires.
// Block for 5 seconds waiting for a member const result1 = await redis.bzmpop(5, 1, "myzset", "MIN"); // Returns: ["myzset", [["member1", 1]]] or null if timeout // Block indefinitely (timeout 0) const result2 = await redis.bzmpop(0, 2, "zset1", "zset2", "MAX"); // Returns: ["zset1", [["member5", 5]]] // Block with COUNT option const result3 = await redis.bzmpop(1, 1, "myzset", "MIN", "COUNT", 2); // Returns: ["myzset", [["member1", 1], ["member2", 2]]] or null if timeout
- ): Promise<null | [string, string, number]>;
Remove and return the member with the highest score from one or more sorted sets, or block until one is available
@param argsKeys followed by timeout in seconds (e.g., "key1", "key2", 1.0)
@returnsPromise that resolves with [key, member, score] or null if timeout
// Block for up to 1 second waiting for an element const result = await redis.bzpopmax("myzset", 1.0); if (result) { const [key, member, score] = result; console.log(`Popped ${member} with score ${score} from ${key}`); }
- ): Promise<null | [string, string, number]>;
Remove and return the member with the lowest score from one or more sorted sets, or block until one is available
@param argsKeys followed by timeout in seconds (e.g., "key1", "key2", 1.0)
@returnsPromise that resolves with [key, member, score] or null if timeout
// Block for up to 1 second waiting for an element const result = await redis.bzpopmin("myzset", 1.0); if (result) { const [key, member, score] = result; console.log(`Popped ${member} with score ${score} from ${key}`); }
Disconnect from the Redis server
Connect to the Redis server
@returnsA promise that resolves when connected
- copy(): Promise<number>;
Copy the value stored at the source key to the destination key
By default, the destination key is created in the logical database used by the connection. The REPLACE option removes the destination key before copying the value to it.
@param sourceThe source key to copy from
@param destinationThe destination key to copy to
@returnsPromise that resolves with 1 if the key was copied, 0 if not
await redis.set("mykey", "Hello"); await redis.copy("mykey", "myotherkey"); console.log(await redis.get("myotherkey")); // "Hello"
copy(replace: 'REPLACE'): Promise<number>;Copy the value stored at the source key to the destination key, replacing the destination key if it exists
The REPLACE option removes the destination key before copying the value to it.
@param sourceThe source key to copy from
@param destinationThe destination key to copy to
@param replace"REPLACE" - Remove the destination key before copying
@returnsPromise that resolves with 1 if the key was copied, 0 if not
await redis.set("mykey", "Hello"); await redis.set("myotherkey", "World"); await redis.copy("mykey", "myotherkey", "REPLACE"); console.log(await redis.get("myotherkey")); // "Hello"
- timestamp: number): Promise<number>;
Set the expiration for a key as a Unix timestamp (in seconds)
@param keyThe key to set expiration on
@param timestampUnix timestamp in seconds when the key should expire
@returnsPromise that resolves with 1 if timeout was set, 0 if key does not exist
- ): Promise<number>;
Get the expiration time of a key as a UNIX timestamp in seconds
@param keyThe key to check
@returnsPromise that resolves with the timestamp, or -1 if the key has no expiration, or -2 if the key doesn't exist
Get the value of a key as a Uint8Array
@param keyThe key to get
@returnsPromise that resolves with the key's value as a Uint8Array, or null if the key doesn't exist
- ): Promise<null | string>;
Get the value of a key and optionally set its expiration
@param keyThe key to get
@returnsPromise that resolves with the value of the key, or null if the key doesn't exist
ex: 'EX',seconds: number): Promise<null | string>;Get the value of a key and set its expiration in seconds
@param keyThe key to get
@param exSet the specified expire time, in seconds
@param secondsThe number of seconds until expiration
@returnsPromise that resolves with the value of the key, or null if the key doesn't exist
px: 'PX',milliseconds: number): Promise<null | string>;Get the value of a key and set its expiration in milliseconds
@param keyThe key to get
@param pxSet the specified expire time, in milliseconds
@param millisecondsThe number of milliseconds until expiration
@returnsPromise that resolves with the value of the key, or null if the key doesn't exist
exat: 'EXAT',timestampSeconds: number): Promise<null | string>;Get the value of a key and set its expiration at a specific Unix timestamp in seconds
@param keyThe key to get
@param exatSet the specified Unix time at which the key will expire, in seconds
@param timestampSecondsThe Unix timestamp in seconds
@returnsPromise that resolves with the value of the key, or null if the key doesn't exist
pxat: 'PXAT',timestampMilliseconds: number): Promise<null | string>;Get the value of a key and set its expiration at a specific Unix timestamp in milliseconds
@param keyThe key to get
@param pxatSet the specified Unix time at which the key will expire, in milliseconds
@param timestampMillisecondsThe Unix timestamp in milliseconds
@returnsPromise that resolves with the value of the key, or null if the key doesn't exist
- start: number,end: number): Promise<string>;
Get a substring of the string stored at a key
@param keyThe key to get the substring from
@param startThe starting offset (can be negative to count from the end)
@param endThe ending offset (can be negative to count from the end)
@returnsPromise that resolves with the substring, or an empty string if the key doesn't exist
- seconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Set expiration for hash fields (Redis 7.4+) Syntax: HEXPIRE key seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hexpire("mykey", 10, "FIELDS", 1, "field1")
seconds: number,condition: 'NX' | 'XX' | 'GT' | 'LT',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;Set expiration for hash fields (Redis 7.4+) Syntax: HEXPIRE key seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hexpire("mykey", 10, "FIELDS", 1, "field1")
- unixTimeSeconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Set expiration for hash fields using Unix timestamp in seconds (Redis 7.4+) Syntax: HEXPIREAT key unix-time-seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hexpireat("mykey", 1735689600, "FIELDS", 1, "field1")
unixTimeSeconds: number,condition: 'NX' | 'XX' | 'GT' | 'LT',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;Set expiration for hash fields using Unix timestamp in seconds (Redis 7.4+) Syntax: HEXPIREAT key unix-time-seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hexpireat("mykey", 1735689600, "FIELDS", 1, "field1")
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Get expiration time of hash fields as Unix timestamp in seconds (Redis 7.4+) Syntax: HEXPIRETIME key FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), -1 (no expiration), Unix timestamp in seconds
redis.hexpiretime("mykey", "FIELDS", 2, "field1", "field2")
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;
Get and delete one or more hash fields (Redis 8.0.0+) Syntax: HGETDEL key FIELDS numfields field [field ...]
@param keyThe hash key
@param fieldsKeywordMust be the literal string "FIELDS"
@param numfieldsNumber of fields to follow
@param fieldsThe field names to get and delete
@returnsPromise that resolves with array of field values (null for non-existent fields)
redis.hgetdel("mykey", "FIELDS", 2, "field1", "field2")
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;
Get hash field values with expiration options (Redis 8.0.0+) Syntax: HGETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] FIELDS numfields field [field ...]
redis.hgetex("mykey", "FIELDS", 1, "field1")
ex: 'EX',seconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;Get hash field values with expiration options (Redis 8.0.0+) Syntax: HGETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] FIELDS numfields field [field ...]
redis.hgetex("mykey", "FIELDS", 1, "field1")
px: 'PX',milliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;Get hash field values with expiration options (Redis 8.0.0+) Syntax: HGETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] FIELDS numfields field [field ...]
redis.hgetex("mykey", "FIELDS", 1, "field1")
exat: 'EXAT',unixTimeSeconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;Get hash field values with expiration options (Redis 8.0.0+) Syntax: HGETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] FIELDS numfields field [field ...]
redis.hgetex("mykey", "FIELDS", 1, "field1")
pxat: 'PXAT',unixTimeMilliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;Get hash field values with expiration options (Redis 8.0.0+) Syntax: HGETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] FIELDS numfields field [field ...]
redis.hgetex("mykey", "FIELDS", 1, "field1")
persist: 'PERSIST',fieldsKeyword: 'FIELDS',numfields: number,): Promise<null | string[]>;Get hash field values with expiration options (Redis 8.0.0+) Syntax: HGETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] FIELDS numfields field [field ...]
redis.hgetex("mykey", "FIELDS", 1, "field1")
- field: string,increment: string | number): Promise<string>;
Increment the float value of a hash field by the given amount
@param keyThe hash key
@param fieldThe field to increment
@param incrementThe amount to increment by
@returnsPromise that resolves with the new value as a string
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Remove expiration from hash fields (Redis 7.4+) Syntax: HPERSIST key FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), -1 (no expiration), 1 (expiration removed)
redis.hpersist("mykey", "FIELDS", 1, "field1")
- milliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Set expiration for hash fields in milliseconds (Redis 7.4+) Syntax: HPEXPIRE key milliseconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hpexpire("mykey", 10000, "FIELDS", 1, "field1")
milliseconds: number,condition: 'NX' | 'XX' | 'GT' | 'LT',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;Set expiration for hash fields in milliseconds (Redis 7.4+) Syntax: HPEXPIRE key milliseconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hpexpire("mykey", 10000, "FIELDS", 1, "field1")
- unixTimeMilliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Set expiration for hash fields using Unix timestamp in milliseconds (Redis 7.4+) Syntax: HPEXPIREAT key unix-time-milliseconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hpexpireat("mykey", 1735689600000, "FIELDS", 1, "field1")
unixTimeMilliseconds: number,condition: 'NX' | 'XX' | 'GT' | 'LT',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;Set expiration for hash fields using Unix timestamp in milliseconds (Redis 7.4+) Syntax: HPEXPIREAT key unix-time-milliseconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), 0 (condition not met), 1 (expiration set), 2 (field deleted)
redis.hpexpireat("mykey", 1735689600000, "FIELDS", 1, "field1")
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Get expiration time of hash fields as Unix timestamp in milliseconds (Redis 7.4+) Syntax: HPEXPIRETIME key FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), -1 (no expiration), Unix timestamp in milliseconds
redis.hpexpiretime("mykey", "FIELDS", 2, "field1", "field2")
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Get TTL of hash fields in milliseconds (Redis 7.4+) Syntax: HPTTL key FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), -1 (no expiration), TTL in milliseconds
redis.hpttl("mykey", "FIELDS", 2, "field1", "field2")
- @param key
The hash key
@returnsPromise that resolves with a random field name, or null if the hash doesn't exist
count: number): Promise<string[]>;Get one or multiple random fields from a hash
@param keyThe hash key
@param countThe number of fields to return (positive for unique fields, negative for potentially duplicate fields)
@returnsPromise that resolves with an array of random field names
count: number,withValues: 'WITHVALUES'): Promise<[string, string][]>;Get one or multiple random fields with values from a hash
@param keyThe hash key
@param countThe number of fields to return
@param withValuesLiteral "WITHVALUES" to include values
@returnsPromise that resolves with an array of [field, value] pairs
- cursor: string | number): Promise<[string, string[]]>;
Incrementally iterate hash fields and values
@param keyThe hash key
@param cursorThe cursor value (0 to start iteration)
@returnsPromise that resolves with [next_cursor, [field1, value1, field2, value2, ...]]
cursor: string | number,match: 'MATCH',pattern: string): Promise<[string, string[]]>;Incrementally iterate hash fields and values with pattern matching
@param keyThe hash key
@param cursorThe cursor value (0 to start iteration)
@param matchLiteral "MATCH"
@param patternPattern to match field names against
@returnsPromise that resolves with [next_cursor, [field1, value1, field2, value2, ...]]
cursor: string | number,count: 'COUNT',limit: number): Promise<[string, string[]]>;Incrementally iterate hash fields and values with count limit
@param keyThe hash key
@param cursorThe cursor value (0 to start iteration)
@param countLiteral "COUNT"
@param limitMaximum number of fields to return per call
@returnsPromise that resolves with [next_cursor, [field1, value1, field2, value2, ...]]
cursor: string | number,match: 'MATCH',pattern: string,count: 'COUNT',limit: number): Promise<[string, string[]]>;Incrementally iterate hash fields and values with pattern and count
@param keyThe hash key
@param cursorThe cursor value (0 to start iteration)
@param matchLiteral "MATCH"
@param patternPattern to match field names against
@param countLiteral "COUNT"
@param limitMaximum number of fields to return per call
@returnsPromise that resolves with [next_cursor, [field1, value1, field2, value2, ...]]
- @param key
The hash key
@param fieldsObject/Record with field-value pairs
@returnsPromise that resolves with the number of fields that were added
@param keyThe hash key
@param fieldThe field name
@param valueThe value to set
@param restAdditional field-value pairs
@returnsPromise that resolves with the number of fields that were added
- fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;
Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fnx: 'FNX',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fxx: 'FXX',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
ex: 'EX',seconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
px: 'PX',milliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
exat: 'EXAT',unixTimeSeconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
pxat: 'PXAT',unixTimeMilliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
keepttl: 'KEEPTTL',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fnx: 'FNX',ex: 'EX',seconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fnx: 'FNX',px: 'PX',milliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fnx: 'FNX',exat: 'EXAT',unixTimeSeconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fnx: 'FNX',pxat: 'PXAT',unixTimeMilliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fnx: 'FNX',keepttl: 'KEEPTTL',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fxx: 'FXX',ex: 'EX',seconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fxx: 'FXX',px: 'PX',milliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fxx: 'FXX',exat: 'EXAT',unixTimeSeconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fxx: 'FXX',pxat: 'PXAT',unixTimeMilliseconds: number,fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
fxx: 'FXX',keepttl: 'KEEPTTL',fieldsKeyword: 'FIELDS',numfields: number,): Promise<number>;Set hash fields with expiration options (Redis 8.0.0+) Syntax: HSETEX key [FNX | FXX] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] FIELDS numfields field value [field value ...]
redis.hsetex("mykey", "FIELDS", 1, "field1", "value1")
- httl(fieldsKeyword: 'FIELDS',numfields: number,): Promise<number[]>;
Get TTL of hash fields in seconds (Redis 7.4+) Syntax: HTTL key FIELDS numfields field [field ...]
@returnsArray where each element is: -2 (field doesn't exist), -1 (no expiration), TTL in seconds
redis.httl("mykey", "FIELDS", 2, "field1", "field2")
- increment: string | number): Promise<string>;
Increment the float value of a key by the given amount
@param keyThe key to increment
@param incrementThe amount to increment by (can be a float)
@returnsPromise that resolves with the new value as a string after incrementing
- @param pattern
The pattern to match
@returnsPromise that resolves with an array of matching keys
- @param key
The list key
@param indexZero-based index (negative indexes count from the end, -1 is last element)
@returnsPromise that resolves with the element at index, or null if index is out of range
await redis.lpush("mylist", "three", "two", "one"); console.log(await redis.lindex("mylist", 0)); // "one" console.log(await redis.lindex("mylist", -1)); // "three" console.log(await redis.lindex("mylist", 5)); // null
- position: 'BEFORE' | 'AFTER',): Promise<number>;
Insert an element before or after another element in a list
@param keyThe list key
@param position"BEFORE" or "AFTER" to specify where to insert
@param pivotThe pivot element to insert before or after
@param elementThe element to insert
@returnsPromise that resolves with the length of the list after insert, -1 if pivot not found, or 0 if key doesn't exist
await redis.lpush("mylist", "World"); await redis.lpush("mylist", "Hello"); await redis.linsert("mylist", "BEFORE", "World", "There"); // List is now: ["Hello", "There", "World"]
- from: 'LEFT' | 'RIGHT',to: 'LEFT' | 'RIGHT'): Promise<null | string>;
Atomically pop an element from a source list and push it to a destination list
Pops an element from the source list (from LEFT or RIGHT) and pushes it to the destination list (to LEFT or RIGHT).
@param sourceThe source list key
@param destinationThe destination list key
@param fromDirection to pop from source: "LEFT" (head) or "RIGHT" (tail)
@param toDirection to push to destination: "LEFT" (head) or "RIGHT" (tail)
@returnsPromise that resolves with the element moved, or null if the source list is empty
await redis.lpush("source", "a", "b", "c"); const result1 = await redis.lmove("source", "dest", "LEFT", "RIGHT"); // result1: "c" (popped from head of source, pushed to tail of dest) const result2 = await redis.lmove("source", "dest", "RIGHT", "LEFT"); // result2: "a" (popped from tail of source, pushed to head of dest)
- numkeys: number,...args: (string | number)[]): Promise<null | [string, string[]]>;
Pop one or more elements from one or more lists
Pops elements from the first non-empty list in the specified order (LEFT = from head, RIGHT = from tail). Optionally specify COUNT to pop multiple elements at once.
@param numkeysThe number of keys that follow
@param argsKeys followed by LEFT or RIGHT, optionally followed by "COUNT" and count value
@returnsPromise that resolves with [key, [elements]] or null if all lists are empty
await redis.lpush("list1", "a", "b", "c"); const result1 = await redis.lmpop(1, "list1", "LEFT"); // result1: ["list1", ["c"]] const result2 = await redis.lmpop(1, "list1", "RIGHT", "COUNT", 2); // result2: ["list1", ["a", "b"]] const result3 = await redis.lmpop(2, "emptylist", "list1", "LEFT"); // result3: null (if both lists are empty)
- @param key
The list key
@returnsPromise that resolves with the first element, or null if the list is empty
- lpos(...options: (string | number)[]): Promise<null | number | number[]>;
Find the position(s) of an element in a list
Returns the index of matching elements inside a Redis list. By default, returns the index of the first match. Use RANK to find the nth occurrence, COUNT to get multiple positions, and MAXLEN to limit the search.
@param keyThe list key
@param elementThe element to search for
@param optionsOptional arguments: "RANK", rank, "COUNT", num, "MAXLEN", len
@returnsPromise that resolves with the index (number), an array of indices (number[]), or null if element is not found. Returns array when COUNT option is used.
await redis.lpush("mylist", "a", "b", "c", "b", "d"); const pos1 = await redis.lpos("mylist", "b"); // pos1: 1 (first occurrence of "b") const pos2 = await redis.lpos("mylist", "b", "RANK", 2); // pos2: 3 (second occurrence of "b") const positions = await redis.lpos("mylist", "b", "COUNT", 0); // positions: [1, 3] (all occurrences of "b") const pos3 = await redis.lpos("mylist", "x"); // pos3: null (element not found)
- start: number,stop: number): Promise<string[]>;
Get a range of elements from a list
@param keyThe list key
@param startZero-based start index (negative indexes count from the end)
@param stopZero-based stop index (negative indexes count from the end)
@returnsPromise that resolves with array of elements in the specified range
await redis.lpush("mylist", "three", "two", "one"); console.log(await redis.lrange("mylist", 0, -1)); // ["one", "two", "three"] console.log(await redis.lrange("mylist", 0, 1)); // ["one", "two"] console.log(await redis.lrange("mylist", -2, -1)); // ["two", "three"]
- @param key
The list key
@param countNumber of elements to remove
- count > 0: Remove count occurrences from head to tail
- count < 0: Remove count occurrences from tail to head
- count = 0: Remove all occurrences
@param elementThe element to remove
@returnsPromise that resolves with the number of elements removed
await redis.rpush("mylist", "hello", "hello", "world", "hello"); await redis.lrem("mylist", 2, "hello"); // Removes first 2 "hello" // List is now: ["world", "hello"]
- @param key
The list key
@param indexZero-based index (negative indexes count from the end)
@param elementThe value to set
@returnsPromise that resolves with "OK" on success
await redis.lpush("mylist", "three", "two", "one"); await redis.lset("mylist", 0, "zero"); console.log(await redis.lrange("mylist", 0, -1)); // ["zero", "two", "three"] await redis.lset("mylist", -1, "last"); console.log(await redis.lrange("mylist", 0, -1)); // ["zero", "two", "last"]
- @param key
The list key
@param startThe start index (0-based, can be negative)
@param stopThe stop index (0-based, can be negative)
@returnsPromise that resolves with "OK"
await redis.rpush("mylist", "one", "two", "three", "four"); await redis.ltrim("mylist", 1, 2); // List is now: ["two", "three"]
- mset(): Promise<'OK'>;
Set multiple keys to multiple values atomically
Sets the given keys to their respective values. MSET replaces existing values with new values, just as regular SET. Use MSETNX if you don't want to overwrite existing values.
MSET is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.
@param keyValuePairsAlternating keys and values (key1, value1, key2, value2, ...)
@returnsPromise that resolves with "OK" on success
await redis.mset("key1", "value1", "key2", "value2");
- ): Promise<number>;
Set multiple keys to multiple values, only if none of the keys exist
Sets the given keys to their respective values. MSETNX will not perform any operation at all even if just a single key already exists.
Because of this semantic, MSETNX can be used in order to set different keys representing different fields of a unique logic object in a way that ensures that either all the fields or none at all are set.
MSETNX is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.
@param keyValuePairsAlternating keys and values (key1, value1, key2, value2, ...)
@returnsPromise that resolves with 1 if all keys were set, 0 if no key was set
// Returns 1 if keys don't exist await redis.msetnx("key1", "value1", "key2", "value2"); // Returns 0 if any key already exists await redis.msetnx("key1", "newvalue", "key3", "value3");
- millisecondsTimestamp: number): Promise<number>;
Set the expiration for a key as a Unix timestamp in milliseconds
@param keyThe key to set expiration on
@param millisecondsTimestampUnix timestamp in milliseconds when the key should expire
@returnsPromise that resolves with 1 if timeout was set, 0 if key does not exist
- ): Promise<number>;
Get the expiration time of a key as a UNIX timestamp in milliseconds
@param keyThe key to check
@returnsPromise that resolves with the timestamp, or -1 if the key has no expiration, or -2 if the key doesn't exist
Ping the server
@returnsPromise that resolves with "PONG" if the server is reachable, or throws an error if the server is not reachable
- milliseconds: number,): Promise<'OK'>;
Set key to hold the string value with expiration time in milliseconds
@param keyThe key to set
@param millisecondsThe expiration time in milliseconds
@param valueThe value to set
@returnsPromise that resolves with "OK" on success
await redis.psetex("mykey", 10000, "Hello"); // Key will expire after 10000 milliseconds (10 seconds)
- @param channel
The channel to publish to.
@param messageThe message to publish.
@returnsThe number of clients that received the message. Note that in a Redis Cluster, only subscribers connected to the same node as the publishing client are counted.
Return a random key from the keyspace
Returns a random key from the currently selected database.
@returnsPromise that resolves with a random key name, or null if the database is empty
await redis.set("key1", "value1"); await redis.set("key2", "value2"); await redis.set("key3", "value3"); const randomKey = await redis.randomkey(); console.log(randomKey); // One of: "key1", "key2", or "key3"
- ): Promise<'OK'>;
Rename a key to a new key
Renames key to newkey. If newkey already exists, it is overwritten. If key does not exist, an error is returned.
@param keyThe key to rename
@param newkeyThe new key name
@returnsPromise that resolves with "OK" on success
await redis.set("mykey", "Hello"); await redis.rename("mykey", "myotherkey"); const value = await redis.get("myotherkey"); // "Hello" const oldValue = await redis.get("mykey"); // null
- ): Promise<number>;
Rename a key to a new key only if the new key does not exist
Renames key to newkey only if newkey does not yet exist. If key does not exist, an error is returned.
@param keyThe key to rename
@param newkeyThe new key name
@returnsPromise that resolves with 1 if the key was renamed, 0 if newkey already exists
await redis.set("mykey", "Hello"); await redis.renamenx("mykey", "myotherkey"); // Returns 1 await redis.set("mykey2", "World"); await redis.renamenx("mykey2", "myotherkey"); // Returns 0 (myotherkey exists)
- @param key
The list key
@returnsPromise that resolves with the last element, or null if the list is empty
- ): Promise<null | string>;
Atomically pop the last element from a source list and push it to the head of a destination list
This is equivalent to LMOVE with "RIGHT" as the source direction and "LEFT" as the destination direction. It is an atomic operation that removes the last element (tail) of the source list and pushes it onto the head of the destination list.
@param sourceThe source list key
@param destinationThe destination list key
@returnsPromise that resolves with the element moved, or null if the source list is empty
await redis.lpush("source", "a", "b", "c"); // source: ["c", "b", "a"] const result = await redis.rpoplpush("source", "dest"); // result: "a" (removed from tail of source, added to head of dest) // source: ["c", "b"] // dest: ["a"]
- scan(cursor: string | number): Promise<[string, string[]]>;
Incrementally iterate the keyspace
The SCAN command is used to incrementally iterate over a collection of elements. SCAN iterates the set of keys in the currently selected Redis database.
SCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call.
An iteration starts when the cursor is set to "0", and terminates when the cursor returned by the server is "0".
@param cursorThe cursor value (use "0" to start a new iteration)
@returnsPromise that resolves with a tuple [cursor, keys[]] where cursor is the next cursor to use (or "0" if iteration is complete) and keys is an array of matching keys
// Basic scan - iterate all keys let cursor = "0"; const allKeys: string[] = []; do { const [nextCursor, keys] = await redis.scan(cursor); allKeys.push(...keys); cursor = nextCursor; } while (cursor !== "0");
scan(cursor: string | number,match: 'MATCH',pattern: string): Promise<[string, string[]]>;Incrementally iterate the keyspace with a pattern match
@param cursorThe cursor value (use "0" to start a new iteration)
@param matchThe "MATCH" keyword
@param patternThe pattern to match (supports glob-style patterns like "user:*")
@returnsPromise that resolves with a tuple [cursor, keys[]]
scan(cursor: string | number,count: 'COUNT',hint: number): Promise<[string, string[]]>;Incrementally iterate the keyspace with a count hint
@param cursorThe cursor value (use "0" to start a new iteration)
@param countThe "COUNT" keyword
@param hintThe number of elements to return per call (hint only, not exact)
@returnsPromise that resolves with a tuple [cursor, keys[]]
scan(cursor: string | number,match: 'MATCH',pattern: string,count: 'COUNT',hint: number): Promise<[string, string[]]>;Incrementally iterate the keyspace with pattern match and count hint
@param cursorThe cursor value (use "0" to start a new iteration)
@param matchThe "MATCH" keyword
@param patternThe pattern to match
@param countThe "COUNT" keyword
@param hintThe number of elements to return per call
@returnsPromise that resolves with a tuple [cursor, keys[]]
scan(cursor: string | number,...options: string | number[]): Promise<[string, string[]]>;Incrementally iterate the keyspace with options
@param cursorThe cursor value
@param optionsAdditional SCAN options (MATCH pattern, COUNT hint, etc.)
@returnsPromise that resolves with a tuple [cursor, keys[]]
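The cursor contract described above can be exercised against a mock. The `scan` function below is a local stand-in (a fixed key list paged by cursor), not Bun's implementation, but the driving loop at the bottom is the same one you would use with the real client:

```javascript
// Local sketch of the SCAN contract: a cursor-based iterator honoring a
// glob-style MATCH pattern and a COUNT hint.
const allKeys = ["user:1", "user:2", "order:1", "user:3", "order:2"];

function scan(cursor, ...options) {
  let pattern = null, count = 2; // COUNT is only a hint, not an exact page size
  for (let i = 0; i < options.length; i += 2) {
    if (options[i] === "MATCH") pattern = options[i + 1];
    if (options[i] === "COUNT") count = options[i + 1];
  }
  const start = Number(cursor);
  const slice = allKeys.slice(start, start + count);
  const next = start + count >= allKeys.length ? "0" : String(start + count);
  const re = pattern && new RegExp("^" + pattern.replace(/\*/g, ".*") + "$");
  return [next, re ? slice.filter((k) => re.test(k)) : slice];
}

// Iterate until the server hands back cursor "0".
let cursor = "0";
const users = [];
do {
  const [next, keys] = scan(cursor, "MATCH", "user:*", "COUNT", 2);
  users.push(...keys);
  cursor = next;
} while (cursor !== "0");
```

Note that MATCH filters each page after it is fetched, so individual calls may legitimately return empty arrays before the iteration completes.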
- ): Promise<number>;
Store the difference of multiple sets in a key
@param destinationThe destination key to store the result
@param keyThe first set key
@param keysAdditional set keys to subtract from the first set
@returnsPromise that resolves with the number of elements in the resulting set
- @param command
The command to send
@param argsThe arguments to the command
@returnsA promise that resolves with the command result
- @param key
The key to set
@param valueThe value to set
@returnsPromise that resolves with "OK" on success
@param keyThe key to set
@param valueThe value to set
@param exSet the specified expire time, in seconds
@returnsPromise that resolves with "OK" on success
@param keyThe key to set
@param valueThe value to set
@param pxSet the specified expire time, in milliseconds
@returnsPromise that resolves with "OK" on success
set(exat: 'EXAT',timestampSeconds: number): Promise<'OK'>;Set key to hold the string value with expiration at a specific Unix timestamp
@param keyThe key to set
@param valueThe value to set
@param exatSet the specified Unix time at which the key will expire, in seconds
@returnsPromise that resolves with "OK" on success
set(pxat: 'PXAT',timestampMilliseconds: number): Promise<'OK'>;Set key to hold the string value with expiration at a specific Unix timestamp
@param keyThe key to set
@param valueThe value to set
@param pxatSet the specified Unix time at which the key will expire, in milliseconds
@returnsPromise that resolves with "OK" on success
@param keyThe key to set
@param valueThe value to set
@param nxOnly set the key if it does not already exist
@returnsPromise that resolves with "OK" on success, or null if the key already exists
@param keyThe key to set
@param valueThe value to set
@param xxOnly set the key if it already exists
@returnsPromise that resolves with "OK" on success, or null if the key does not exist
@param keyThe key to set
@param valueThe value to set
@param getReturn the old string stored at key, or null if key did not exist
@returnsPromise that resolves with the old value, or null if key did not exist
@param keyThe key to set
@param valueThe value to set
@param keepttlRetain the time to live associated with the key
@returnsPromise that resolves with "OK" on success
set(...options: string[]): Promise<null | string>;Set key to hold the string value with various options
@param keyThe key to set
@param valueThe value to set
@param optionsArray of options (EX, PX, EXAT, PXAT, NX, XX, KEEPTTL, GET)
@returnsPromise that resolves with "OK" on success, null if NX/XX condition not met, or the old value if GET is specified
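The interplay of the NX, XX, and GET options can be sketched with a simplified in-memory model. This is an illustration of the return values documented above, not Bun's client, and real Redis has additional nuances (e.g., how GET combines with a failed NX):

```javascript
// Local sketch of SET with NX / XX / GET semantics.
const kv = new Map();

function set(key, value, ...options) {
  const old = kv.has(key) ? kv.get(key) : null;
  if (options.includes("NX") && old !== null) return null; // condition not met
  if (options.includes("XX") && old === null) return null; // condition not met
  kv.set(key, value);
  return options.includes("GET") ? old : "OK"; // GET returns the previous value
}
```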
- offset: number,value: 0 | 1): Promise<number>;
Sets or clears the bit at offset in the string value stored at key
@param keyThe key to modify
@param offsetThe bit offset (zero-based)
@param valueThe bit value to set (0 or 1)
@returnsPromise that resolves with the original bit value stored at offset
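As a sketch of the bit-addressing scheme (offset 0 is the most significant bit of the first byte, and the string is zero-extended as needed), here is a local model operating on a plain byte array:

```javascript
// Local sketch of SETBIT: set or clear one bit and return its previous value.
function setbit(buf, offset, value) {
  const byte = Math.floor(offset / 8);
  const mask = 0x80 >> (offset % 8); // offset 0 addresses the MSB of byte 0
  while (buf.length <= byte) buf.push(0); // zero-extend like Redis does
  const old = (buf[byte] & mask) !== 0 ? 1 : 0;
  if (value === 1) buf[byte] |= mask;
  else buf[byte] &= ~mask;
  return old;
}

const buf = [];
setbit(buf, 7, 1); // sets the least significant bit of the first byte
```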
- seconds: number,): Promise<'OK'>;
Set key to hold the string value with expiration time in seconds
@param keyThe key to set
@param secondsThe expiration time in seconds
@param valueThe value to set
@returnsPromise that resolves with "OK" on success
await redis.setex("mykey", 10, "Hello"); // Key will expire after 10 seconds
- offset: number,): Promise<number>;
Overwrite part of a string at key starting at the specified offset
@param keyThe key to modify
@param offsetThe offset at which to start overwriting (zero-based)
@param valueThe string value to write at the offset
@returnsPromise that resolves with the length of the string after modification
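The overwrite-and-pad behavior can be sketched locally: writing past the current end zero-pads the gap, and the new total length is returned. `store` and `setrange` here are stand-ins for illustration:

```javascript
// Local sketch of SETRANGE: overwrite bytes at an offset, zero-padding
// when the offset is beyond the current end, returning the new length.
function setrange(store, key, offset, value) {
  const cur = store.get(key) ?? "";
  const padded = cur.padEnd(offset, "\0"); // fill any gap with zero bytes
  const next =
    padded.slice(0, offset) + value + padded.slice(offset + value.length);
  store.set(key, next);
  return next.length;
}

const s = new Map();
s.set("k", "Hello World");
const len = setrange(s, "k", 6, "Redis");
```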
- numkeys: number,): Promise<number>;
Get the cardinality of the intersection of multiple sets
@param numkeysThe number of keys to intersect
@param keyThe first set key
@param argsAdditional set keys and optional LIMIT argument
@returnsPromise that resolves with the number of elements in the intersection
- ): Promise<number>;
Store the intersection of multiple sets in a key
@param destinationThe destination key to store the result
@param keyThe first set key
@param keysAdditional set keys to intersect
@returnsPromise that resolves with the number of elements in the resulting set
- ): Promise<number[]>;
Check if multiple members are members of a set
@param keyThe set key
@param memberThe first member to check
@param membersAdditional members to check
@returnsPromise that resolves with an array of 1s and 0s indicating membership
- member: string): Promise<boolean>;
Move a member from one set to another
@param sourceThe source set key
@param destinationThe destination set key
@param memberThe member to move
@returnsPromise that resolves with true if the element was moved, false if it wasn't a member of source
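The all-or-nothing contract (the member either leaves the source and joins the destination, or nothing changes and false is returned) can be sketched with JS Sets:

```javascript
// Local sketch of SMOVE's contract using JS Sets.
function smove(source, destination, member) {
  if (!source.delete(member)) return false; // not a member of source
  destination.add(member);
  return true;
}

const src = new Set(["a", "b"]);
const dst = new Set(["c"]);
```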
- @param key
The set key
@returnsPromise that resolves with the removed member, or null if the set is empty
- @param key
The set key
@returnsPromise that resolves with a random member, or null if the set is empty
count: number): Promise<null | string[]>;Get count random members from a set
@param keyThe set key
@returnsPromise that resolves with an array of up to count random members, or null if the set doesn't exist
- cursor: string | number,...args: string | number[]): Promise<[string, string[]]>;
Incrementally iterate over a set
@param keyThe set key
@param cursorThe cursor value
@param argsAdditional SSCAN options (MATCH pattern, COUNT hint)
@returnsPromise that resolves with a tuple [cursor, members[]]
- channel: string,): Promise<number>;
Subscribe to a Redis channel.
Subscribing disables automatic pipelining, so all commands will be received immediately.
Subscribing moves the channel to a dedicated subscription state which prevents most other commands from being executed until unsubscribed. Only
.ping()
,.subscribe()
, and.unsubscribe()
may be invoked while the client is in the subscribed state.@param channelThe channel to subscribe to.
@param listenerThe listener to call when a message is received on the channel. The listener will receive the message as the first argument and the channel as the second argument.
await client.subscribe("my-channel", (message, channel) => { console.log(`Received message on ${channel}: ${message}`); });
channels: string[],): Promise<number>;Subscribe to multiple Redis channels.
Subscribing disables automatic pipelining, so all commands will be received immediately.
Subscribing moves the channels to a dedicated subscription state in which only a limited set of commands can be executed.
@param channelsAn array of channels to subscribe to.
@param listenerThe listener to call when a message is received on any of the subscribed channels. The listener will receive the message as the first argument and the channel as the second argument.
- ): Promise<number>;
Store the union of multiple sets in a key
@param destinationThe destination key to store the result
@param keyThe first set key
@param keysAdditional set keys to union
@returnsPromise that resolves with the number of elements in the resulting set
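The three store variants (SDIFFSTORE, SINTERSTORE, SUNIONSTORE) share a shape: compute the set operation, overwrite the destination, return the result's cardinality. A local sketch with JS Sets, where `db` and the three functions are illustrative stand-ins:

```javascript
// Local sketch of SDIFFSTORE / SINTERSTORE / SUNIONSTORE semantics.
const db = new Map(); // key -> Set

function store(dest, result) {
  db.set(dest, result); // destination is overwritten
  return result.size;   // cardinality of the stored result
}

const sdiffstore = (dest, first, ...rest) =>
  store(dest, new Set([...db.get(first)].filter(
    (m) => !rest.some((k) => db.get(k).has(m)))));

const sinterstore = (dest, first, ...rest) =>
  store(dest, new Set([...db.get(first)].filter(
    (m) => rest.every((k) => db.get(k).has(m)))));

const sunionstore = (dest, ...keys) =>
  store(dest, new Set(keys.flatMap((k) => [...db.get(k)])));

db.set("s1", new Set(["a", "b", "c"]));
db.set("s2", new Set(["b", "c", "d"]));
```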
- ): Promise<number>;
Alters the last access time of one or more keys
A key is ignored if it does not exist. The command returns the number of keys that were touched.
This command is useful in conjunction with maxmemory-policy allkeys-lru / volatile-lru to change the last access time of keys for eviction purposes.
@param keysOne or more keys to touch
@returnsPromise that resolves with the number of keys that were touched
await redis.set("key1", "Hello"); await redis.set("key2", "World"); const touched = await redis.touch("key1", "key2", "key3"); console.log(touched); // 2 (key3 doesn't exist)
- type(): Promise<'string' | 'stream' | 'none' | 'set' | 'list' | 'zset' | 'hash'>;
Determine the type of value stored at key
The TYPE command returns the string representation of the type of the value stored at key. The different types that can be returned are: string, list, set, zset, hash and stream.
@param keyThe key to check
@returnsPromise that resolves with the type of value stored at key, or "none" if the key doesn't exist
await redis.set("mykey", "Hello"); console.log(await redis.type("mykey")); // "string" await redis.lpush("mylist", "value"); console.log(await redis.type("mylist")); // "list" await redis.sadd("myset", "value"); console.log(await redis.type("myset")); // "set" await redis.hset("myhash", "field", "value"); console.log(await redis.type("myhash")); // "hash" console.log(await redis.type("nonexistent")); // "none"
- ): Promise<number>;
Asynchronously delete one or more keys
This command is very similar to DEL: it removes the specified keys. Just like DEL a key is ignored if it does not exist. However, the command performs the actual memory reclaiming in a different thread, so it is not blocking, while DEL is. This is particularly useful when deleting large values or large numbers of keys.
@param keysThe keys to delete
@returnsPromise that resolves with the number of keys that were unlinked
await redis.set("key1", "Hello"); await redis.set("key2", "World"); const count = await redis.unlink("key1", "key2", "key3"); console.log(count); // 2
- @param channel
The channel to unsubscribe from.
If there are no more channels subscribed to, the client automatically re-enables pipelining if it was previously enabled.
Unsubscribing moves the channel back to a normal state out of the subscription state if all channels have been unsubscribed from. For further details on the subscription state, see
.subscribe()
.channel: string,): Promise<void>;Remove a listener from a given Redis channel.
If there are no more channels subscribed to, the client automatically re-enables pipelining if it was previously enabled.
Unsubscribing moves the channel back to a normal state out of the subscription state if all channels have been unsubscribed from. For further details on the subscription state, see
.subscribe()
.@param channelThe channel to unsubscribe from.
@param listenerThe listener to remove. This is tested against referential equality so you must pass the exact same listener instance as when subscribing.
Unsubscribe from all registered Redis channels.
The client will automatically re-enable pipelining if it was previously enabled.
Unsubscribing moves the channel back to a normal state out of the subscription state if all channels have been unsubscribed from. For further details on the subscription state, see
.subscribe()
.@param channelsAn array of channels to unsubscribe from.
If there are no more channels subscribed to, the client automatically re-enables pipelining if it was previously enabled.
Unsubscribing moves the channel back to a normal state out of the subscription state if all channels have been unsubscribed from. For further details on the subscription state, see
.subscribe()
. - zadd(...args: string | number[]): Promise<number>;
Add one or more members to a sorted set, or update scores if they already exist
ZADD adds all the specified members with the specified scores to the sorted set stored at key. It is possible to specify multiple score / member pairs. If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering.
If key does not exist, a new sorted set with the specified members as sole members is created. If the key exists but does not hold a sorted set, an error is returned.
The score values should be the string representation of a double precision floating point number. +inf and -inf values are valid values as well.
Options:
- NX: Only add new elements. Don't update already existing elements.
- XX: Only update elements that already exist. Never add elements.
- GT: Only update existing elements if the new score is greater than the current score. This flag doesn't prevent adding new elements.
- LT: Only update existing elements if the new score is less than the current score. This flag doesn't prevent adding new elements.
- CH: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of changed).
- INCR: When this option is specified ZADD acts like ZINCRBY. Only one score-member pair can be specified in this mode.
Note: The GT, LT and NX options are mutually exclusive.
@param keyThe sorted set key
@param argsScore-member pairs and optional flags (NX, XX, GT, LT, CH, INCR)
@returnsPromise that resolves with the number of elements added (or changed if CH is used, or new score if INCR is used)
// Add members with scores await redis.zadd("myzset", "1", "one", "2", "two", "3", "three"); // Add with NX option (only if member doesn't exist) await redis.zadd("myzset", "NX", "4", "four"); // Add with XX option (only if member exists) await redis.zadd("myzset", "XX", "2.5", "two"); // Add with CH option (return count of changed elements) await redis.zadd("myzset", "CH", "5", "five", "2.1", "two"); // Use INCR option (increment score) await redis.zadd("myzset", "INCR", "1.5", "one");
- min: string | number,max: string | number): Promise<number>;
Count the members in a sorted set with scores within the given range
@param keyThe sorted set key
@param minMinimum score (inclusive, use "-inf" for negative infinity)
@param maxMaximum score (inclusive, use "+inf" for positive infinity)
@returnsPromise that resolves with the count of elements in the specified score range
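The score-bound syntax (`-inf`, `+inf`, and a `(` prefix for exclusive bounds) can be sketched locally. `parseBound` and `zcount` below are stand-ins illustrating the counting rule, not Bun's client:

```javascript
// Local sketch of ZCOUNT's score-bound syntax and counting rule.
function parseBound(raw) {
  const s = String(raw);
  if (s === "-inf") return { v: -Infinity, excl: false };
  if (s === "+inf") return { v: Infinity, excl: false };
  if (s.startsWith("(")) return { v: Number(s.slice(1)), excl: true };
  return { v: Number(s), excl: false }; // plain numbers are inclusive
}

function zcount(entries, min, max) { // entries: [member, score][]
  const lo = parseBound(min), hi = parseBound(max);
  return entries.filter(([, score]) =>
    (lo.excl ? score > lo.v : score >= lo.v) &&
    (hi.excl ? score < hi.v : score <= hi.v)).length;
}

const zset = [["one", 1], ["two", 2], ["three", 3]];
```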
- numkeys: number,): Promise<[string, number][]>;
Compute the difference between sorted sets with scores
@param numkeysThe number of sorted set keys
@returnsPromise that resolves with an array of [member, score] pairs
await redis.send("ZADD", ["zset1", "1", "one", "2", "two", "3", "three"]); await redis.send("ZADD", ["zset2", "1", "one", "2", "two"]); const diff = await redis.zdiff(2, "zset1", "zset2", "WITHSCORES"); console.log(diff); // [["three", 3]]
numkeys: number,): Promise<string[]>;Compute the difference between the first sorted set and all successive sorted sets
Returns the members of the sorted set resulting from the difference between the first sorted set and all the successive sorted sets. The first key is the only one used to compute the members of the difference.
@param numkeysThe number of sorted set keys
@param keysThe sorted set keys to compare
@returnsPromise that resolves with an array of members
await redis.send("ZADD", ["zset1", "1", "one", "2", "two", "3", "three"]); await redis.send("ZADD", ["zset2", "1", "one", "2", "two"]); const diff = await redis.zdiff(2, "zset1", "zset2"); console.log(diff); // ["three"]
- numkeys: number,): Promise<number>;
Compute the difference between sorted sets and store the result
Computes the difference between the first and all successive sorted sets given by the specified keys and stores the result in destination. Keys that do not exist are considered to be empty sets.
@param destinationThe destination key to store the result
@param numkeysThe number of input sorted set keys
@param keysThe sorted set keys to compare
@returnsPromise that resolves with the number of elements in the resulting sorted set
await redis.send("ZADD", ["zset1", "1", "one", "2", "two", "3", "three"]); await redis.send("ZADD", ["zset2", "1", "one"]); const count = await redis.zdiffstore("out", 2, "zset1", "zset2"); console.log(count); // 2 (two, three)
- numkeys: number,...args: [...args: string | number[], withscores: 'WITHSCORES']): Promise<[string, number][]>;
Compute the intersection of multiple sorted sets
Returns the members of the set resulting from the intersection of all the given sorted sets. Keys that do not exist are considered to be empty sets.
By default, the resulting score of each member is the sum of its scores in the sorted sets where it exists.
Options:
- WEIGHTS: Multiply the score of each member in the corresponding sorted set by the given weight before aggregation
- AGGREGATE SUM|MIN|MAX: Specify how the scores are aggregated (default: SUM)
- WITHSCORES: Return the scores along with the members
@param numkeysThe number of input keys (sorted sets)
@returnsPromise that resolves with an array of members (or [member, score] pairs if WITHSCORES)
// Set up sorted sets await redis.zadd("zset1", "1", "a", "2", "b", "3", "c"); await redis.zadd("zset2", "1", "b", "2", "c", "3", "d"); // Basic intersection - returns members that exist in all sets const result1 = await redis.zinter(2, "zset1", "zset2"); // Returns: ["b", "c"] // With scores (sum by default) const result2 = await redis.zinter(2, "zset1", "zset2", "WITHSCORES"); // Returns: ["b", "3", "c", "5"] (b: 2+1=3, c: 3+2=5) // With weights const result3 = await redis.zinter(2, "zset1", "zset2", "WEIGHTS", "2", "3", "WITHSCORES"); // Returns: ["b", "7", "c", "12"] (b: 2*2+1*3=7, c: 3*2+2*3=12) // With MIN aggregation const result4 = await redis.zinter(2, "zset1", "zset2", "AGGREGATE", "MIN", "WITHSCORES"); // Returns: ["b", "1", "c", "2"] (minimum scores)
numkeys: number,...args: string | number[]): Promise<string[]>;Compute the intersection of multiple sorted sets
Returns the members of the set resulting from the intersection of all the given sorted sets. Keys that do not exist are considered to be empty sets.
By default, the resulting score of each member is the sum of its scores in the sorted sets where it exists.
Options:
- WEIGHTS: Multiply the score of each member in the corresponding sorted set by the given weight before aggregation
- AGGREGATE SUM|MIN|MAX: Specify how the scores are aggregated (default: SUM)
- WITHSCORES: Return the scores along with the members
@param numkeysThe number of input keys (sorted sets)
@returnsPromise that resolves with an array of members (or [member, score] pairs if WITHSCORES)
// Set up sorted sets await redis.zadd("zset1", "1", "a", "2", "b", "3", "c"); await redis.zadd("zset2", "1", "b", "2", "c", "3", "d"); // Basic intersection - returns members that exist in all sets const result1 = await redis.zinter(2, "zset1", "zset2"); // Returns: ["b", "c"] // With scores (sum by default) const result2 = await redis.zinter(2, "zset1", "zset2", "WITHSCORES"); // Returns: ["b", "3", "c", "5"] (b: 2+1=3, c: 3+2=5) // With weights const result3 = await redis.zinter(2, "zset1", "zset2", "WEIGHTS", "2", "3", "WITHSCORES"); // Returns: ["b", "7", "c", "12"] (b: 2*2+1*3=7, c: 3*2+2*3=12) // With MIN aggregation const result4 = await redis.zinter(2, "zset1", "zset2", "AGGREGATE", "MIN", "WITHSCORES"); // Returns: ["b", "1", "c", "2"] (minimum scores)
- numkeys: number,): Promise<number>;
Count the number of members in the intersection of multiple sorted sets
Computes the cardinality of the intersection of the sorted sets at the specified keys. The intersection includes only elements that exist in all of the given sorted sets.
When a LIMIT is provided, the command stops counting once the limit is reached, which is useful for performance when you only need to know if the cardinality exceeds a certain threshold.
@param numkeysThe number of sorted set keys
@param keysThe sorted set keys to intersect
@returnsPromise that resolves with the number of elements in the intersection
await redis.send("ZADD", ["zset1", "1", "one", "2", "two", "3", "three"]); await redis.send("ZADD", ["zset2", "1", "one", "2", "two", "4", "four"]); const count = await redis.zintercard(2, "zset1", "zset2"); console.log(count); // 2 (one, two)
numkeys: number,): Promise<number>;Count the number of members in the intersection with a limit
@param numkeysThe number of sorted set keys
@returnsPromise that resolves with the number of elements (up to limit)
await redis.send("ZADD", ["zset1", "1", "a", "2", "b", "3", "c"]); await redis.send("ZADD", ["zset2", "1", "a", "2", "b", "3", "c"]); const count = await redis.zintercard(2, "zset1", "zset2", "LIMIT", 2); console.log(count); // 2 (stopped at limit)
- numkeys: number,...args: string | number[]): Promise<number>;
Compute the intersection of multiple sorted sets and store in destination
This command is similar to ZINTER, but instead of returning the result, it stores it in the destination key. If the destination key already exists, it is overwritten.
Options:
- WEIGHTS: Multiply the score of each member in the corresponding sorted set by the given weight before aggregation
- AGGREGATE SUM|MIN|MAX: Specify how the scores are aggregated (default: SUM)
@param destinationThe destination key to store the result
@param numkeysThe number of input keys (sorted sets)
@returnsPromise that resolves with the number of elements in the resulting sorted set
// Set up sorted sets await redis.zadd("zset1", "1", "a", "2", "b", "3", "c"); await redis.zadd("zset2", "1", "b", "2", "c", "3", "d"); // Basic intersection store const count1 = await redis.zinterstore("out", 2, "zset1", "zset2"); // Returns: 2 (stored "b" and "c" in "out") // With weights const count2 = await redis.zinterstore("out2", 2, "zset1", "zset2", "WEIGHTS", "2", "3"); // Returns: 2 // With MAX aggregation const count3 = await redis.zinterstore("out3", 2, "zset1", "zset2", "AGGREGATE", "MAX"); // Returns: 2 (stores maximum scores)
- min: string,max: string): Promise<number>;
Count the members in a sorted set within a lexicographical range
@param keyThe sorted set key
@param minMinimum value (use "[" for inclusive, "(" for exclusive, e.g., "[aaa")
@param maxMaximum value (use "[" for inclusive, "(" for exclusive, e.g., "[zzz")
@returnsPromise that resolves with the count of elements in the specified range
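The lexicographical bound syntax (`-` / `+` for the infinities, `[` inclusive, `(` exclusive) can be sketched locally; `lexBoundCheck` and `zlexcount` are illustrative stand-ins:

```javascript
// Local sketch of ZLEXCOUNT's bound syntax over same-score members.
function lexBoundCheck(member, bound, side) {
  if (bound === "-") return side === "min"; // "-" sorts below everything
  if (bound === "+") return side === "max"; // "+" sorts above everything
  const value = bound.slice(1);
  if (bound[0] === "[") // inclusive bound
    return side === "min" ? member >= value : member <= value;
  return side === "min" ? member > value : member < value; // "(" exclusive
}

function zlexcount(members, min, max) {
  return members.filter(
    (m) => lexBoundCheck(m, min, "min") && lexBoundCheck(m, max, "max")).length;
}

const members = ["apple", "banana", "cherry"]; // all with equal scores
```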
- numkeys: number,...args: string | number[]): Promise<null | [string, [string, number][]]>;
Remove and return members with scores from one or more sorted sets. Pops from the first non-empty sorted set.
// Pop lowest score from one set const result1 = await redis.zmpop(1, "myzset", "MIN"); // Returns: ["myzset", [["member1", 1]]] // Pop highest score from multiple sets const result2 = await redis.zmpop(2, "zset1", "zset2", "MAX"); // Returns: ["zset1", [["member5", 5]]] (pops from first non-empty) // Pop multiple members const result3 = await redis.zmpop(1, "myzset", "MIN", "COUNT", 3); // Returns: ["myzset", [["member1", 1], ["member2", 2], ["member3", 3]]] // Empty set returns null const result4 = await redis.zmpop(1, "emptyset", "MIN"); // Returns: null
- ): Promise<null | number[]>;
Returns the scores associated with the specified members in the sorted set
@param keyThe sorted set key
@param memberThe first member to get the score for
@param membersAdditional members to get scores for
@returnsPromise that resolves with an array containing the score of each member, or null for members that do not exist
- ): Promise<[] | [string, number]>;
Remove and return the member with the highest score in a sorted set
@param keyThe sorted set key
@returnsPromise that resolves with either [member, score] or empty array if the set is empty
- ): Promise<[] | [string, number]>;
Remove and return the member with the lowest score in a sorted set
@param keyThe sorted set key
@returnsPromise that resolves with a [member, score] pair, or an empty array if the set is empty
- ): Promise<null | string>;
Get one or multiple random members from a sorted set
@param keyThe sorted set key
@returnsPromise that resolves with a random member, or null if the set is empty
count: number): Promise<null | string[]>;Get one or multiple random members from a sorted set
@param keyThe sorted set key
@returnsPromise that resolves with an array of up to count random members, or null if the set is empty
count: number,withscores: 'WITHSCORES'): Promise<null | [string, number][]>;Get one or multiple random members from a sorted set, with scores
@param keyThe sorted set key
@returnsPromise that resolves with an array of [member, score] pairs, or null if the set is empty
- start: string | number,stop: string | number,withscores: 'WITHSCORES'): Promise<[string, number][]>;
Return a range of members in a sorted set with their scores
@param keyThe sorted set key
@param startThe starting index
@param stopThe stopping index
@param withscoresReturn members with their scores
@returnsPromise that resolves with an array of [member, score] pairs
const results = await redis.zrange("myzset", 0, -1, "WITHSCORES"); // Returns [["member1", 1.5], ["member2", 2.5], ...]
start: string | number,stop: string | number,byscore: 'BYSCORE'): Promise<string[]>;Return a range of members in a sorted set by score
@param keyThe sorted set key
@param startThe minimum score (use "-inf" for negative infinity, "(" prefix for exclusive)
@param stopThe maximum score (use "+inf" for positive infinity, "(" prefix for exclusive)
@param byscoreIndicates score-based range
@returnsPromise that resolves with an array of members with scores in the range
// Get members with score between 1 and 3 const members = await redis.zrange("myzset", "1", "3", "BYSCORE"); // Get members with score > 1 and <= 3 (exclusive start) const members2 = await redis.zrange("myzset", "(1", "3", "BYSCORE");
start: string,stop: string,bylex: 'BYLEX'): Promise<string[]>;Return a range of members in a sorted set lexicographically
@param keyThe sorted set key
@param startThe minimum lexicographical value (use "-" for start, "[" for inclusive, "(" for exclusive)
@param stopThe maximum lexicographical value (use "+" for end, "[" for inclusive, "(" for exclusive)
@param bylexIndicates lexicographical range
@returnsPromise that resolves with an array of members in the lexicographical range
// Get members lexicographically from "a" to "c" (inclusive) const members = await redis.zrange("myzset", "[a", "[c", "BYLEX");
start: string | number,stop: string | number,...options: string[]): Promise<string[]>;Return a range of members in a sorted set with various options
@param keyThe sorted set key
@param startThe starting value (index, score, or lex depending on options)
@param stopThe stopping value
@param optionsAdditional options (BYSCORE, BYLEX, REV, LIMIT offset count, WITHSCORES)
@returnsPromise that resolves with an array of members (or with scores if WITHSCORES)
// Get members by score with limit const members = await redis.zrange("myzset", "1", "10", "BYSCORE", "LIMIT", "0", "5"); // Get members in reverse order with scores const reversed = await redis.zrange("myzset", "0", "-1", "REV", "WITHSCORES");
start: string | number,stop: string | number): Promise<string[]>;Return a range of members in a sorted set
Returns the specified range of elements in the sorted set stored at key. The elements are considered to be ordered from the lowest to the highest score by default.
@param keyThe sorted set key
@param startThe starting index (0-based, can be negative to count from end)
@param stopThe stopping index (0-based, can be negative to count from end)
@returnsPromise that resolves with an array of members in the specified range
// Get all members const members = await redis.zrange("myzset", 0, -1); // Get first 3 members const top3 = await redis.zrange("myzset", 0, 2);
- min: string,max: string): Promise<string[]>;
Return members in a sorted set within a lexicographical range
When all the elements in a sorted set have the same score, this command returns the elements between min and max in lexicographical order.
Lex ranges:
- `[member` for an inclusive lower bound
- `(member` for an exclusive lower bound
- `-` for negative infinity
- `+` for positive infinity
@param keyThe sorted set key (all members must have the same score)
@param minMinimum lexicographical value (use "-" for negative infinity, "[" or "(" for inclusive/exclusive)
@param maxMaximum lexicographical value (use "+" for positive infinity, "[" or "(" for inclusive/exclusive)
@returnsPromise that resolves with array of members
await redis.send("ZADD", ["myzset", "0", "apple", "0", "banana", "0", "cherry"]);
const members = await redis.zrangebylex("myzset", "[banana", "[cherry");
// Returns: ["banana", "cherry"]
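The bracket prefixes above are easy to get wrong by hand. A small hypothetical helper (not part of Bun's API) can build a lex bound from a plain member string:

```typescript
// Hypothetical helper (not part of Bun's API): prefix a member with "[" for an
// inclusive bound or "(" for an exclusive bound, as ZRANGEBYLEX expects.
function lexBound(member: string, exclusive = false): string {
  return (exclusive ? "(" : "[") + member;
}

// lexBound("banana")       -> "[banana"
// lexBound("banana", true) -> "(banana"
```

The special infinity markers "-" and "+" are passed through as-is and never prefixed.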
min: string,max: string,limit: 'LIMIT',offset: number,count: number): Promise<string[]>;
Return members in a sorted set within a lexicographical range, with pagination
@param keyThe sorted set key
@param minMinimum lexicographical value
@param maxMaximum lexicographical value
@param limitThe "LIMIT" keyword
@param offsetThe number of elements to skip
@param countThe maximum number of elements to return
@returnsPromise that resolves with array of members
await redis.send("ZADD", ["myzset", "0", "a", "0", "b", "0", "c", "0", "d"]);
const result = await redis.zrangebylex("myzset", "-", "+", "LIMIT", 1, 2);
// Returns: ["b", "c"]
min: string,max: string,...options: (string | number)[]): Promise<string[]>;
Return members in a sorted set within a lexicographical range, with options
@param keyThe sorted set key
@param minMinimum lexicographical value
@param maxMaximum lexicographical value
@param optionsAdditional options (LIMIT offset count)
@returnsPromise that resolves with array of members
- min: string | number,max: string | number): Promise<string[]>;
Return members in a sorted set with scores within a given range
Returns all the elements in the sorted set at key with a score between min and max (inclusive by default). The elements are considered to be ordered from low to high scores.
Score ranges support:
- "-inf" and "+inf" for negative and positive infinity
- "(" prefix for exclusive bounds (e.g., "(5" means greater than 5, not including 5)
@param keyThe sorted set key
@param minMinimum score (can be "-inf", a number, or prefixed with "(" for exclusive)
@param maxMaximum score (can be "+inf", a number, or prefixed with "(" for exclusive)
@returnsPromise that resolves with array of members
await redis.send("ZADD", ["myzset", "1", "one", "2", "two", "3", "three"]);
const members = await redis.zrangebyscore("myzset", 1, 2);
// Returns: ["one", "two"]
min: string | number,max: string | number,withscores: 'WITHSCORES'): Promise<[string, number][]>;
Return members in a sorted set with scores within a given range, with scores
@param keyThe sorted set key
@param minMinimum score
@param maxMaximum score
@param withscoresThe "WITHSCORES" keyword to return scores along with members
@returnsPromise that resolves with array of [member, score, member, score, ...]
await redis.send("ZADD", ["myzset", "1", "one", "2", "two", "3", "three"]);
const result = await redis.zrangebyscore("myzset", 1, 2, "WITHSCORES");
// Returns: ["one", "1", "two", "2"]
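As the example shows, a WITHSCORES reply arrives as a flat alternating array of member and score strings. A small hypothetical helper (not part of Bun's API) can regroup it into typed pairs:

```typescript
// Hypothetical helper: regroup a flat [member, score, member, score, ...]
// WITHSCORES reply into [member, numericScore] pairs.
function pairWithScores(flat: string[]): [string, number][] {
  const pairs: [string, number][] = [];
  for (let i = 0; i + 1 < flat.length; i += 2) {
    pairs.push([flat[i], Number(flat[i + 1])]);
  }
  return pairs;
}

// pairWithScores(["one", "1", "two", "2"]) -> [["one", 1], ["two", 2]]
```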
min: string | number,max: string | number,limit: 'LIMIT',offset: number,count: number): Promise<string[]>;
Return members in a sorted set with scores within a given range, with pagination
@param keyThe sorted set key
@param minMinimum score
@param maxMaximum score
@param limitThe "LIMIT" keyword
@param offsetThe number of elements to skip
@param countThe maximum number of elements to return
@returnsPromise that resolves with array of members
await redis.send("ZADD", ["myzset", "1", "one", "2", "two", "3", "three", "4", "four"]);
const result = await redis.zrangebyscore("myzset", "-inf", "+inf", "LIMIT", 1, 2);
// Returns: ["two", "three"]
min: string | number,max: string | number,withscores: 'WITHSCORES',...options: (string | number)[]): Promise<[string, number][]>;
Return members in a sorted set with scores within a given range, with the score values
@param keyThe sorted set key
@param minMinimum score
@param maxMaximum score
@param optionsAdditional options (WITHSCORES, LIMIT offset count)
@returnsPromise that resolves with array of members (and scores if WITHSCORES is used)
min: string | number,max: string | number,withscores: 'WITHSCORES',limit: 'LIMIT',offset: number,count: number,...options: (string | number)[]): Promise<[string, number][]>;
Return members in a sorted set with scores within a given range, with the score values
@param keyThe sorted set key
@param minMinimum score
@param maxMaximum score
@param optionsAdditional options (WITHSCORES, LIMIT offset count)
@returnsPromise that resolves with array of members (and scores if WITHSCORES is used)
min: string | number,max: string | number,...options: (string | number)[]): Promise<string[]>;
Return members in a sorted set with scores within a given range, with various options
@param keyThe sorted set key
@param minMinimum score
@param maxMaximum score
@param optionsAdditional options (WITHSCORES, LIMIT offset count)
@returnsPromise that resolves with array of members (and scores if WITHSCORES is used)
- start: string | number,stop: string | number,...options: string[]): Promise<number>;
Store a range of members from a sorted set into a destination key
This command is like ZRANGE but stores the result in a destination key instead of returning it. Supports all the same options as ZRANGE including BYSCORE, BYLEX, REV, and LIMIT.
@param destinationThe destination key to store results
@param sourceThe source sorted set key
@param startThe starting index or score
@param stopThe ending index or score
@param optionsOptional flags: ["BYSCORE"], ["BYLEX"], ["REV"], ["LIMIT", offset, count]
@returnsPromise that resolves with the number of elements in the resulting sorted set
// Add members to source set
await redis.send("ZADD", ["source", "1", "one", "2", "two", "3", "three"]);

// Store range by rank
const count1 = await redis.zrangestore("dest1", "source", 0, 1);
console.log(count1); // 2

// Store range by score
const count2 = await redis.zrangestore("dest2", "source", "1", "2", "BYSCORE");
console.log(count2); // 2

// Store in reverse order with limit
const count3 = await redis.zrangestore("dest3", "source", "0", "-1", "REV", "LIMIT", "0", "2");
console.log(count3); // 2
- member: string): Promise<null | number>;
Determine the index of a member in a sorted set
@param keyThe sorted set key
@param memberThe member to find
@returnsPromise that resolves with the rank (index) of the member, or null if the member doesn't exist
member: string,withscore: 'WITHSCORE'): Promise<null | [number, number]>;
Determine the index of a member in a sorted set with score
@param keyThe sorted set key
@param memberThe member to find
@param withscore"WITHSCORE" to include the score
@returnsPromise that resolves with [rank, score] or null if the member doesn't exist
- @param key
The sorted set key
@param memberThe first member to remove
@param membersAdditional members to remove
@returnsPromise that resolves with the number of members removed (not including non-existing members)
- min: string,max: string): Promise<number>;
Remove all members in a sorted set within the given lexicographical range
@param keyThe sorted set key
@param minMinimum value (use "[" for inclusive, "(" for exclusive, e.g., "[aaa")
@param maxMaximum value (use "[" for inclusive, "(" for exclusive, e.g., "[zzz")
@returnsPromise that resolves with the number of elements removed
- start: number,stop: number): Promise<number>;
Remove all members in a sorted set within the given rank range
@param keyThe sorted set key
@param startStart rank (0-based, can be negative to indicate offset from end)
@param stopStop rank (0-based, can be negative to indicate offset from end)
@returnsPromise that resolves with the number of elements removed
- min: string | number,max: string | number): Promise<number>;
Remove all members in a sorted set within the given score range
@param keyThe sorted set key
@param minMinimum score (inclusive, use "-inf" for negative infinity, "(" prefix for exclusive)
@param maxMaximum score (inclusive, use "+inf" for positive infinity, "(" prefix for exclusive)
@returnsPromise that resolves with the number of elements removed
- start: number,stop: number): Promise<string[]>;
Return a range of members in a sorted set, by index, with scores ordered from high to low
This is equivalent to ZRANGE with the REV option. Returns members in reverse order.
@param keyThe sorted set key
@param startThe starting index (0-based, can be negative to count from end)
@param stopThe stopping index (0-based, can be negative to count from end)
@returnsPromise that resolves with an array of members in reverse order
// Get all members in reverse order (highest to lowest score)
const members = await redis.zrevrange("myzset", 0, -1);

// Get top 3 members with highest scores
const top3 = await redis.zrevrange("myzset", 0, 2);
start: number,stop: number,withscores: 'WITHSCORES'): Promise<[string, number][]>;
Return a range of members in a sorted set with their scores, ordered from high to low
@param keyThe sorted set key
@param startThe starting index
@param stopThe stopping index
@param withscoresReturn members with their scores
@returnsPromise that resolves with an array of [member, score, member, score, ...] in reverse order
const results = await redis.zrevrange("myzset", 0, -1, "WITHSCORES");
// Returns ["member3", "3.5", "member2", "2.5", "member1", "1.5", ...]
start: number,stop: number,...options: string[]): Promise<string[]>;
Return a range of members in a sorted set with options, ordered from high to low
@param keyThe sorted set key
@param startThe starting index
@param stopThe stopping index
@param optionsAdditional options (WITHSCORES)
@returnsPromise that resolves with an array of members (or with scores if WITHSCORES)
- max: string,min: string,...options: string[]): Promise<string[]>;
Return members in a sorted set within a lexicographical range, ordered from high to low
All members in a sorted set must have the same score for this command to work correctly. The max and min arguments have the same meaning as in ZRANGEBYLEX, but in reverse order.
Use "[" for inclusive bounds and "(" for exclusive bounds. Use "-" for negative infinity and "+" for positive infinity.
@param keyThe sorted set key
@param maxThe maximum lexicographical value (inclusive with "[", exclusive with "(")
@param minThe minimum lexicographical value (inclusive with "[", exclusive with "(")
@param optionsOptional LIMIT clause: ["LIMIT", offset, count]
@returnsPromise that resolves with an array of members in reverse lexicographical order
// Add members with same score
await redis.send("ZADD", ["myzset", "0", "a", "0", "b", "0", "c", "0", "d"]);

// Get range from highest to lowest
const members = await redis.zrevrangebylex("myzset", "[d", "[b");
console.log(members); // ["d", "c", "b"]

// With LIMIT
const limited = await redis.zrevrangebylex("myzset", "+", "-", "LIMIT", "0", "2");
console.log(limited); // ["d", "c"] (first 2 members)
- max: string | number,min: string | number): Promise<string[]>;
Return members in a sorted set with scores within a given range, ordered from high to low
Returns all the elements in the sorted set at key with a score between max and min (note: max comes before min). The elements are considered to be ordered from high to low scores.
Score ranges support:
- "-inf" and "+inf" for negative and positive infinity
- "(" prefix for exclusive bounds (e.g., "(5" means less than 5, not including 5)
@param keyThe sorted set key
@param maxMaximum score (can be "+inf", a number, or prefixed with "(" for exclusive)
@param minMinimum score (can be "-inf", a number, or prefixed with "(" for exclusive)
@returnsPromise that resolves with array of members
await redis.send("ZADD", ["myzset", "1", "one", "2", "two", "3", "three"]);
const members = await redis.zrevrangebyscore("myzset", 2, 1);
// Returns: ["two", "one"]
max: string | number,min: string | number,withscores: 'WITHSCORES'): Promise<[string, number][]>;
Return members in a sorted set with scores within a given range, ordered from high to low, with scores
@param keyThe sorted set key
@param maxMaximum score
@param minMinimum score
@param withscoresThe "WITHSCORES" keyword to return scores along with members
@returnsPromise that resolves with array of [member, score, member, score, ...]
await redis.send("ZADD", ["myzset", "1", "one", "2", "two", "3", "three"]);
const result = await redis.zrevrangebyscore("myzset", 2, 1, "WITHSCORES");
// Returns: ["two", "2", "one", "1"]
max: string | number,min: string | number,limit: 'LIMIT',offset: number,count: number): Promise<string[]>;
Return members in a sorted set with scores within a given range, ordered from high to low, with pagination
@param keyThe sorted set key
@param maxMaximum score
@param minMinimum score
@param limitThe "LIMIT" keyword
@param offsetThe number of elements to skip
@param countThe maximum number of elements to return
@returnsPromise that resolves with array of members
max: string | number,min: string | number,...options: (string | number)[]): Promise<string[]>;
Return members in a sorted set with scores within a given range, ordered from high to low, with options
@param keyThe sorted set key
@param maxMaximum score
@param minMinimum score
@param optionsAdditional options (WITHSCORES, LIMIT offset count)
@returnsPromise that resolves with array of members (and scores if WITHSCORES is used)
- member: string): Promise<null | number>;
Determine the index of a member in a sorted set, with scores ordered from high to low
@param keyThe sorted set key
@param memberThe member to find
@returnsPromise that resolves with the rank (index) of the member, or null if the member doesn't exist
member: string,withscore: 'WITHSCORE'): Promise<null | [number, number]>;
Determine the index of a member in a sorted set with score, with scores ordered from high to low
@param keyThe sorted set key
@param memberThe member to find
@param withscore"WITHSCORE" to include the score
@returnsPromise that resolves with [rank, score] or null if the member doesn't exist
- cursor: string | number,...options: string[]): Promise<[string, string[]]>;
Incrementally iterate sorted set elements and their scores
The ZSCAN command is used in order to incrementally iterate over sorted set elements and their scores. ZSCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call.
An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0.
ZSCAN and the other SCAN family commands are able to provide to the user a set of guarantees associated to full iterations:
- A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point ZSCAN returned it.
- A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, ZSCAN ensures that this element will never be returned.
Options:
- MATCH pattern: Only return elements matching the pattern (glob-style)
- COUNT count: Amount of work done at every call (hint, not exact)
@param keyThe sorted set key
@param cursorThe cursor value (use 0 to start a new iteration)
@param optionsAdditional ZSCAN options (MATCH pattern, COUNT hint, etc.)
@returnsPromise that resolves with a tuple [cursor, [member1, score1, member2, score2, ...]]
// Basic scan - iterate all elements
let cursor = "0";
const allElements: string[] = [];
do {
  const [nextCursor, elements] = await redis.zscan("myzset", cursor);
  allElements.push(...elements);
  cursor = nextCursor;
} while (cursor !== "0");
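The loop above can be wrapped into a reusable sketch. Since the guarantees quoted earlier still permit an element to be returned more than once within a full iteration, collecting into a Map dedupes members; the `zscan` shape below is assumed from the signature documented above:

```typescript
// Sketch: drain a full ZSCAN iteration into a member -> score Map.
// The client shape is assumed from the zscan signature documented above.
async function zscanAll(
  client: { zscan(key: string, cursor: string): Promise<[string, string[]]> },
  key: string,
): Promise<Map<string, number>> {
  const scores = new Map<string, number>();
  let cursor = "0";
  do {
    const [nextCursor, flat] = await client.zscan(key, cursor);
    // Replies alternate member, score, member, score, ...
    for (let i = 0; i + 1 < flat.length; i += 2) {
      scores.set(flat[i], Number(flat[i + 1]));
    }
    cursor = nextCursor;
  } while (cursor !== "0");
  return scores;
}
```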
- member: string): Promise<null | number>;
Get the score associated with the given member in a sorted set
@param keyThe sorted set key
@param memberThe member to get the score for
@returnsPromise that resolves with the score of the member as a number, or null if the member or key doesn't exist
- numkeys: number,...args: [...args: (string | number)[], withscores: 'WITHSCORES']): Promise<[string, number][]>;
Compute the union of multiple sorted sets
Returns the union of the sorted sets given by the specified keys. For every element that appears in at least one of the input sorted sets, the output will contain that element.
Options:
- WEIGHTS: Multiply the score of each member in the corresponding sorted set by the given weight before aggregation
- AGGREGATE SUM|MIN|MAX: Specify how the scores are aggregated (default: SUM)
- WITHSCORES: Include scores in the result
@param numkeysThe number of input keys (sorted sets)
@returnsPromise that resolves with an array of members (or members with scores if WITHSCORES is used)
// Set up sorted sets
await redis.zadd("zset1", "1", "a", "2", "b", "3", "c");
await redis.zadd("zset2", "4", "b", "5", "c", "6", "d");

// Basic union (ordered by aggregated score)
const members1 = await redis.zunion(2, "zset1", "zset2");
// Returns: ["a", "b", "d", "c"]

// With weights
const members2 = await redis.zunion(2, "zset1", "zset2", "WEIGHTS", "2", "3");
// Returns all four members, ordered by weighted, summed scores

// With MIN aggregation
const members3 = await redis.zunion(2, "zset1", "zset2", "AGGREGATE", "MIN");
// Returns: ["a", "b", "c", "d"] with minimum scores

// With scores
const withScores = await redis.zunion(2, "zset1", "zset2", "WITHSCORES");
// Returns: ["a", "1", "b", "6", "d", "6", "c", "8"] (alternating member and score)
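To make the WEIGHTS and AGGREGATE rules concrete, here is a local sketch of the scoring logic Redis applies server-side (illustrative only, not part of Bun's API): each member's score is multiplied by its set's weight, then combined across sets.

```typescript
// Illustrative sketch of ZUNION scoring: weighted scores combined per member
// with SUM (default), MIN, or MAX.
type Aggregate = "SUM" | "MIN" | "MAX";

function unionScores(
  sets: Map<string, number>[],
  weights: number[] = [],
  aggregate: Aggregate = "SUM",
): Map<string, number> {
  const out = new Map<string, number>();
  sets.forEach((set, i) => {
    const weight = weights[i] ?? 1; // missing weights default to 1
    for (const [member, score] of set) {
      const weighted = score * weight;
      const prev = out.get(member);
      if (prev === undefined) {
        out.set(member, weighted);
      } else if (aggregate === "SUM") {
        out.set(member, prev + weighted);
      } else if (aggregate === "MIN") {
        out.set(member, Math.min(prev, weighted));
      } else {
        out.set(member, Math.max(prev, weighted));
      }
    }
  });
  return out;
}
```

With the example sets above (`zset1` = a:1, b:2, c:3 and `zset2` = b:4, c:5, d:6), SUM gives b a score of 6 and c a score of 8, which is why the WITHSCORES reply orders them after a and d.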
numkeys: number,...args: (string | number)[]): Promise<string[]>;
Compute the union of multiple sorted sets
Returns the union of the sorted sets given by the specified keys. For every element that appears in at least one of the input sorted sets, the output will contain that element.
Options:
- WEIGHTS: Multiply the score of each member in the corresponding sorted set by the given weight before aggregation
- AGGREGATE SUM|MIN|MAX: Specify how the scores are aggregated (default: SUM)
- WITHSCORES: Include scores in the result
@param numkeysThe number of input keys (sorted sets)
@returnsPromise that resolves with an array of members (or members with scores if WITHSCORES is used)
// Set up sorted sets
await redis.zadd("zset1", "1", "a", "2", "b", "3", "c");
await redis.zadd("zset2", "4", "b", "5", "c", "6", "d");

// Basic union (ordered by aggregated score)
const members1 = await redis.zunion(2, "zset1", "zset2");
// Returns: ["a", "b", "d", "c"]

// With weights
const members2 = await redis.zunion(2, "zset1", "zset2", "WEIGHTS", "2", "3");
// Returns all four members, ordered by weighted, summed scores

// With MIN aggregation
const members3 = await redis.zunion(2, "zset1", "zset2", "AGGREGATE", "MIN");
// Returns: ["a", "b", "c", "d"] with minimum scores

// With scores
const withScores = await redis.zunion(2, "zset1", "zset2", "WITHSCORES");
// Returns: ["a", "1", "b", "6", "d", "6", "c", "8"] (alternating member and score)
- numkeys: number,...args: (string | number)[]): Promise<number>;
Compute the union of multiple sorted sets and store in destination
This command is similar to ZUNION, but instead of returning the result, it stores it in the destination key. If the destination key already exists, it is overwritten.
Options:
- WEIGHTS: Multiply the score of each member in the corresponding sorted set by the given weight before aggregation
- AGGREGATE SUM|MIN|MAX: Specify how the scores are aggregated (default: SUM)
@param destinationThe destination key to store the result
@param numkeysThe number of input keys (sorted sets)
@returnsPromise that resolves with the number of elements in the resulting sorted set
// Set up sorted sets
await redis.zadd("zset1", "1", "a", "2", "b", "3", "c");
await redis.zadd("zset2", "4", "b", "5", "c", "6", "d");

// Basic union store
const count1 = await redis.zunionstore("out", 2, "zset1", "zset2");
// Returns: 4 (stored "a", "b", "c", "d" in "out")

// With weights
const count2 = await redis.zunionstore("out2", 2, "zset1", "zset2", "WEIGHTS", "2", "3");
// Returns: 4

// With MAX aggregation
const count3 = await redis.zunionstore("out3", 2, "zset1", "zset2", "AGGREGATE", "MAX");
// Returns: 4 (stores maximum scores)
class S3Client
A configured S3 bucket instance for managing files. The instance is callable to create S3File instances and provides methods for common operations.
// Basic bucket setup
const bucket = new S3Client({
  bucket: "my-bucket",
  accessKeyId: "key",
  secretAccessKey: "secret"
});

// Get file instance
const file = bucket.file("image.jpg");

// Common operations
await bucket.write("data.json", JSON.stringify({hello: "world"}));
const url = bucket.presign("file.pdf");
await bucket.unlink("old.txt");
- path: string,): Promise<void>;
Delete a file from the bucket. Alias for S3Client.unlink.
@param pathThe path to the file in the bucket
@param optionsAdditional S3 options to override defaults
@returnsA promise that resolves when deletion is complete
// Simple delete
await bucket.delete("old-file.txt");

// With error handling
try {
  await bucket.delete("file.dat");
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- path: string,): Promise<boolean>;
Check if a file exists in the bucket. Uses HEAD request to check existence.
@param pathThe path to the file in the bucket
@param optionsAdditional S3 options to override defaults
@returnsA promise that resolves to true if the file exists, false otherwise
// Check existence
if (await bucket.exists("config.json")) {
  const file = bucket.file("config.json");
  const config = await file.json();
}

// With error handling
try {
  if (!await bucket.exists("required.txt")) {
    throw new Error("Required file missing");
  }
} catch (err) {
  console.error("Check failed:", err);
}
- @param path
The path to the file in the bucket
@param optionsAdditional S3 options to override defaults
@returnsAn S3File instance
const file = bucket.file("image.jpg");
await file.write(imageData);

const configFile = bucket.file("config.json", {
  type: "application/json",
  acl: "private"
});
- list(options?: Pick<S3Options, 'accessKeyId' | 'secretAccessKey' | 'sessionToken' | 'region' | 'bucket' | 'endpoint'>
Returns some or all (up to 1,000) of the objects in a bucket with each request.
You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
@param inputOptions for listing objects in the bucket
@param optionsAdditional S3 options to override defaults
@returnsA promise that resolves to the list response
// List (up to) 1000 objects in the bucket
const allObjects = await bucket.list();

// List (up to) 500 objects under `uploads/` prefix, with owner field for each object
const uploads = await bucket.list({
  prefix: 'uploads/',
  maxKeys: 500,
  fetchOwner: true,
});

// Check if more results are available
if (uploads.isTruncated) {
  // List next batch of objects under `uploads/` prefix
  const moreUploads = await bucket.list({
    prefix: 'uploads/',
    maxKeys: 500,
    startAfter: uploads.contents!.at(-1)!.key,
    fetchOwner: true,
  });
}
- path: string,): string;
Generate a presigned URL for temporary access to a file. Useful for generating upload/download URLs without exposing credentials.
@param pathThe path to the file in the bucket
@param optionsOptions for generating the presigned URL
@returnsA presigned URL string
// Download URL
const downloadUrl = bucket.presign("file.pdf", {
  expiresIn: 3600 // 1 hour
});

// Upload URL
const uploadUrl = bucket.presign("uploads/image.jpg", {
  method: "PUT",
  expiresIn: 3600,
  type: "image/jpeg",
  acl: "public-read"
});

// Long-lived public URL
const publicUrl = bucket.presign("public/doc.pdf", {
  expiresIn: 7 * 24 * 60 * 60, // 7 days
  acl: "public-read"
});
- size(path: string,): Promise<number>;
Get the size of a file in bytes. Uses HEAD request to efficiently get size.
@param pathThe path to the file in the bucket
@param optionsAdditional S3 options to override defaults
@returnsA promise that resolves to the file size in bytes
// Get size
const bytes = await bucket.size("video.mp4");
console.log(`Size: ${bytes} bytes`);

// Check if file is large
if (await bucket.size("data.zip") > 100 * 1024 * 1024) {
  console.log("File is larger than 100MB");
}
- @param path
The path to the file in the bucket
@param optionsAdditional S3 options to override defaults
@returnsA promise that resolves to the file stats
const stat = await bucket.stat("my-file.txt");
- @param path
The path to the file in the bucket
@param optionsAdditional S3 options to override defaults
@returnsA promise that resolves when deletion is complete
// Simple delete
await bucket.unlink("old-file.txt");

// With error handling
try {
  await bucket.unlink("file.dat");
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- path: string,data: string | ArrayBuffer | SharedArrayBuffer | Blob | BunFile | Request | Response | File | ArrayBufferView<ArrayBufferLike> | S3File,): Promise<number>;
Writes data directly to a path in the bucket. Supports strings, buffers, streams, and web API types.
@param pathThe path to the file in the bucket
@param dataThe data to write to the file
@param optionsAdditional S3 options to override defaults
@returnsThe number of bytes written
// Write string
await bucket.write("hello.txt", "Hello World");

// Write JSON with type
await bucket.write(
  "data.json",
  JSON.stringify({hello: "world"}),
  {type: "application/json"}
);

// Write from fetch
const res = await fetch("https://example.com/data");
await bucket.write("data.bin", res);

// Write with ACL
await bucket.write("public.html", html, {
  acl: "public-read",
  type: "text/html"
});
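The `type` option sets the object's Content-Type explicitly. If you want to derive one from the file extension yourself, a hypothetical lookup (not part of Bun's API) might look like:

```typescript
// Hypothetical helper: pick a Content-Type for bucket.write from the file
// extension, falling back to a generic binary type.
const MIME_TYPES: Record<string, string> = {
  json: "application/json",
  html: "text/html",
  txt: "text/plain",
  jpg: "image/jpeg",
  png: "image/png",
};

function guessType(path: string): string {
  const ext = path.split(".").pop()?.toLowerCase() ?? "";
  return MIME_TYPES[ext] ?? "application/octet-stream";
}

// guessType("data.json") -> "application/json"
// guessType("blob.dat")  -> "application/octet-stream"
```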
- path: string,): Promise<void>;
Delete a file from the bucket. Alias for S3Client.unlink.
@param pathThe path to the file in the bucket
@param optionsS3 credentials and configuration options
@returnsA promise that resolves when deletion is complete
// Simple delete
await S3Client.delete("old-file.txt", credentials);

// With error handling
try {
  await S3Client.delete("file.dat", credentials);
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- path: string,): Promise<boolean>;
Check if a file exists in the bucket. Uses HEAD request to check existence.
@param pathThe path to the file in the bucket
@param optionsS3 credentials and configuration options
@returnsA promise that resolves to true if the file exists, false otherwise
// Check existence
if (await S3Client.exists("config.json", credentials)) {
  const file = S3Client.file("config.json", credentials);
  const config = await file.json();
}

// With error handling
try {
  if (!await S3Client.exists("required.txt", credentials)) {
    throw new Error("Required file missing");
  }
} catch (err) {
  console.error("Check failed:", err);
}
- path: string,
Creates an S3File instance for the given path.
@param pathThe path to the file in the bucket
@param optionsS3 credentials and configuration options
@returnsAn S3File instance
const file = S3Client.file("image.jpg", credentials);
await file.write(imageData);

const configFile = S3Client.file("config.json", {
  ...credentials,
  type: "application/json",
  acl: "private"
});
- options?: Pick<S3Options, 'accessKeyId' | 'secretAccessKey' | 'sessionToken' | 'region' | 'bucket' | 'endpoint'>
Returns some or all (up to 1,000) of the objects in a bucket with each request.
You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
@param inputOptions for listing objects in the bucket
@param optionsS3 credentials and configuration options
@returnsA promise that resolves to the list response
// List (up to) 1000 objects in the bucket
const allObjects = await S3Client.list(null, credentials);

// List (up to) 500 objects under `uploads/` prefix, with owner field for each object
const uploads = await S3Client.list({
  prefix: 'uploads/',
  maxKeys: 500,
  fetchOwner: true,
}, credentials);

// Check if more results are available
if (uploads.isTruncated) {
  // List next batch of objects under `uploads/` prefix
  const moreUploads = await S3Client.list({
    prefix: 'uploads/',
    maxKeys: 500,
    startAfter: uploads.contents!.at(-1)!.key,
    fetchOwner: true,
  }, credentials);
}
- path: string,): string;
Generate a presigned URL for temporary access to a file. Useful for generating upload/download URLs without exposing credentials.
@param pathThe path to the file in the bucket
@param optionsS3 credentials and presigned URL configuration
@returnsA presigned URL string
// Download URL
const downloadUrl = S3Client.presign("file.pdf", {
  ...credentials,
  expiresIn: 3600 // 1 hour
});

// Upload URL
const uploadUrl = S3Client.presign("uploads/image.jpg", {
  ...credentials,
  method: "PUT",
  expiresIn: 3600,
  type: "image/jpeg",
  acl: "public-read"
});

// Long-lived public URL
const publicUrl = S3Client.presign("public/doc.pdf", {
  ...credentials,
  expiresIn: 7 * 24 * 60 * 60, // 7 days
  acl: "public-read"
});
- path: string,): Promise<number>;
Get the size of a file in bytes. Uses HEAD request to efficiently get size.
@param pathThe path to the file in the bucket
@param optionsS3 credentials and configuration options
@returnsA promise that resolves to the file size in bytes
// Get size
const bytes = await S3Client.size("video.mp4", credentials);
console.log(`Size: ${bytes} bytes`);

// Check if file is large
if (await S3Client.size("data.zip", credentials) > 100 * 1024 * 1024) {
  console.log("File is larger than 100MB");
}
- path: string,
Get the stat of a file in an S3-compatible storage service.
@param pathThe path to the file in the bucket
@param optionsS3 credentials and configuration options
@returnsA promise that resolves to the file stats
const stat = await S3Client.stat("my-file.txt", credentials);
- @param path
The path to the file in the bucket
@param optionsS3 credentials and configuration options
@returnsA promise that resolves when deletion is complete
// Simple delete
await S3Client.unlink("old-file.txt", credentials);

// With error handling
try {
  await S3Client.unlink("file.dat", credentials);
  console.log("File deleted");
} catch (err) {
  console.error("Delete failed:", err);
}
- path: string,data: string | ArrayBuffer | SharedArrayBuffer | Blob | BunFile | Request | Response | File | ArrayBufferView<ArrayBufferLike> | S3File,): Promise<number>;
Writes data directly to a path in the bucket. Supports strings, buffers, streams, and web API types.
@param pathThe path to the file in the bucket
@param dataThe data to write to the file
@param optionsS3 credentials and configuration options
@returnsThe number of bytes written
// Write string
await S3Client.write("hello.txt", "Hello World", credentials);

// Write JSON with type
await S3Client.write(
  "data.json",
  JSON.stringify({hello: "world"}),
  { ...credentials, type: "application/json" }
);

// Write from fetch
const res = await fetch("https://example.com/data");
await S3Client.write("data.bin", res, credentials);

// Write with ACL
await S3Client.write("public.html", html, {
  ...credentials,
  acl: "public-read",
  type: "text/html"
});
class SHA1
This is not the default because it's not cryptographically secure and it's slower than SHA512. Consider using the ugly-named SHA512_256 instead.
- @param encoding DigestEncoding to return the hash in. If none is provided, it will return a Uint8Array.
@param hashInto TypedArray to write the hash into. Faster than creating a new one each time
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data
@param input string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param hashInto TypedArray to write the hash into. Faster than creating a new one each time
): string;
Run the hash over the given data
@param input string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param encoding DigestEncoding to return the hash in
class SHA224
This class only exists in types
- @param encoding DigestEncoding to return the hash in. If none is provided, it will return a Uint8Array.
@param hashInto TypedArray to write the hash into. Faster than creating a new one each time
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data
@param input string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param hashInto TypedArray to write the hash into. Faster than creating a new one each time
): string;
Run the hash over the given data
@param input string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param encoding DigestEncoding to return the hash in
class SHA256
This class only exists in types
- @param encoding The DigestEncoding to return the hash in. If none is provided, it will return a Uint8Array.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- ): string;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param encoding The DigestEncoding to return the hash in.
class SHA384
This class only exists in types
- @param encoding The DigestEncoding to return the hash in. If none is provided, it will return a Uint8Array.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- ): string;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param encoding The DigestEncoding to return the hash in.
class SHA512
This class only exists in types
- @param encoding The DigestEncoding to return the hash in. If none is provided, it will return a Uint8Array.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- ): string;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param encoding The DigestEncoding to return the hash in.
class SHA512_256
See also sha
- @param encoding The DigestEncoding to return the hash in. If none is provided, it will return a Uint8Array.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- hashInto?: TypedArray<ArrayBufferLike>): TypedArray;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param hashInto A TypedArray to write the hash into. Faster than creating a new one each time.
- ): string;
Run the hash over the given data.
@param input The string, Uint8Array, or ArrayBuffer to hash. Uint8Array or ArrayBuffer is faster.
@param encoding The DigestEncoding to return the hash in.
class SQL
Main SQL client interface providing connection and transaction management
- options: Merge<SQLiteOptions, PostgresOrMySQLOptions> | Merge<PostgresOrMySQLOptions, SQLiteOptions>
Current client options
- values: any[],
Creates a new SQL array parameter
@param values The values to create the array parameter from
@param typeNameOrTypeID The type name or type ID to create the array parameter from; if omitted it will default to JSON
@returns A new SQL array parameter
const array = sql.array([1, 2, 3], "INT"); await sql`CREATE TABLE users_posts (user_id INT, posts_id INT[])`; await sql`INSERT INTO users_posts (user_id, posts_id) VALUES (${user.id}, ${array})`;
Begins a new transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin(async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
- options: string,
Begins a new transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin("read write", async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
- name: string,
Begins a distributed transaction, also known as a Two-Phase Commit. In a distributed transaction, Phase 1 involves the coordinator preparing nodes by ensuring data is written and ready to commit, while Phase 2 finalizes with nodes committing or rolling back based on the coordinator's decision, ensuring durability and releasing locks.
In PostgreSQL and MySQL, distributed transactions persist beyond the original session, allowing privileged users or coordinators to commit or roll them back, which supports distributed transactions, recovery, and administrative tasks.
beginDistributed will automatically roll back if any exceptions are not caught, and you can commit and rollback later if everything goes well.
PostgreSQL natively supports distributed transactions using PREPARE TRANSACTION, while MySQL uses XA Transactions, and MSSQL also supports distributed/XA transactions. However, in MSSQL, distributed transactions are tied to the original session, the DTC coordinator, and the specific connection. These transactions are automatically committed or rolled back following the same rules as regular transactions, with no option for manual intervention from other sessions; in MSSQL, distributed transactions are used to coordinate transactions using Linked Servers.
await sql.beginDistributed("numbers", async sql => { await sql`create table if not exists numbers (a int)`; await sql`insert into numbers values(1)`; }); // later you can call await sql.commitDistributed("numbers"); // or await sql.rollbackDistributed("numbers");
- options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If the timeout is 0, it will close immediately; if it is not provided, it will wait for all queries to finish before closing.
@param options The options for the close
await sql.close({ timeout: 1 });
- name: string): Promise<void>;
Commits a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param name The name of the distributed transaction
await sql.commitDistributed("my_distributed_transaction");
- name: string,
Alternative method to begin a distributed transaction
- end(options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If the timeout is 0, it will close immediately; if it is not provided, it will wait for all queries to finish before closing. This is an alias of SQL.close.
@param options The options for the close
await sql.end({ timeout: 1 });
Flushes any pending operations
sql.flush();
The reserve method pulls out a connection from the pool, and returns a client that wraps the single connection.
This can be used for running queries on an isolated connection. Calling reserve on an already-reserved Sql will return a new reserved connection, not the same connection (this behavior matches the postgres package).
const reserved = await sql.reserve(); await reserved`select * from users`; await reserved.release(); // in a production scenario it would be something more like const reserved = await sql.reserve(); try { // ... queries } finally { await reserved.release(); } // Bun supports Symbol.dispose and Symbol.asyncDispose // always release after context (safer) using reserved = await sql.reserve() await reserved`select * from users`
- name: string): Promise<void>;
Rolls back a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param name The name of the distributed transaction
await sql.rollbackDistributed("my_distributed_transaction");
Alternative method to begin a transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction(async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
- options: string,
Alternative method to begin a transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction("read write", async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] });
- string: string,values?: any[]
If you know what you're doing, you can use unsafe to pass any string you'd like. Please note that this can lead to SQL injection if you're not careful. You can also nest sql.unsafe within a safe sql expression. This is useful if only part of your query has unsafe elements.
const result = await sql.unsafe(`select ${danger} from users where id = ${dragons}`)
class Transpiler
Quickly transpile TypeScript, JSX, or JS to modern JavaScript.
const transpiler = new Bun.Transpiler(); transpiler.transformSync(` const App = () => <div>Hello World</div>; export default App; `); // This outputs: const output = ` const App = () => jsx("div", { children: "Hello World" }, undefined, false, undefined, this); export default App; `
- scan(
Get a list of import paths and paths from a TypeScript, JSX, TSX, or JavaScript file.
@param code The code to scan
const {imports, exports} = transpiler.scan(` import {foo} from "baz"; export const hello = "hi!"; `); console.log(imports); // ["baz"] console.log(exports); // ["hello"]
Get a list of import paths from a TypeScript, JSX, TSX, or JavaScript file.
@param code The code to scan
const imports = transpiler.scanImports(` import {foo} from "baz"; import type {FooType} from "bar"; import type {DogeType} from "wolf"; `); console.log(imports); // ["baz"]
This is a fast path which performs less work than scan.
- ): Promise<string>;
Transpile code from TypeScript or JSX into valid JavaScript. This function does not resolve imports.
@param code The code to transpile
- ctx: object): string;
Transpile code from TypeScript or JSX into valid JavaScript. This function does not resolve imports.
@param code The code to transpile
- ctx: object): string;
Transpile code from TypeScript or JSX into valid JavaScript. This function does not resolve imports.
@param code The code to transpile
@param ctx An object to pass to macros
- ): string;
Transpile code from TypeScript or JSX into valid JavaScript. This function does not resolve imports.
@param code The code to transpile
interface BunFile
A Blob powered by the fastest system calls available for operating on files.
This Blob is lazy. That means it won't do any work until you read from it.
size will not be valid until the contents of the file are read at least once.
type is auto-set based on the file extension when possible.
const file = Bun.file("./hello.json"); console.log(file.type); // "application/json" console.log(await file.text()); // '{"hello":"world"}'
Returns a promise that resolves to the contents of the blob as an ArrayBuffer
Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer()).
Deletes the file (same as unlink)
Does the file exist?
This returns true for regular files and FIFOs. It returns false for directories. Note that a race condition can occur where the file is deleted or renamed after this is called but before you open it.
This does a system call to check if the file exists, which can be slow.
If using this in an HTTP server, it's faster to instead use return new Response(Bun.file(path)) and then an error handler to handle exceptions.
Instead of checking for a file's existence and then performing the operation, it is faster to just perform the operation and handle the error.
For empty Blob, this always returns true.
Read the data from the blob as a FormData object.
This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.
The type property of the blob is used to determine the format of the body.
This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.
Read the data from the blob as a JSON object.
This first decodes the data from UTF-8, then parses it as JSON.
- begin?: number,end?: number,contentType?: string
Offset any operation on the file starting at begin and ending at end. end is relative to 0.
Similar to TypedArray.subarray. Does not copy the file, open the file, or modify the file.
If begin > 0, Bun.write() will be slower on macOS.
@param begin start offset in bytes
@param end absolute offset in bytes (relative to 0)
@param contentType MIME type for the new BunFile
- begin?: number,contentType?: string
Offset any operation on the file starting at begin.
Similar to TypedArray.subarray. Does not copy the file, open the file, or modify the file.
If begin > 0, Bun.write() will be slower on macOS.
@param begin start offset in bytes
@param contentType MIME type for the new BunFile
Returns a readable stream of the blob's contents
Returns a promise that resolves to the contents of the blob as a string
Deletes the file.
- data: string | ArrayBuffer | SharedArrayBuffer | BunFile | Request | Response | ArrayBufferView<ArrayBufferLike>,options?: { highWaterMark: number }): Promise<number>;
Write data to the file. This is equivalent to using Bun.write with a BunFile.
@param data The data to write.
@param options The options to use for the write.
interface S3File
Represents a file in an S3-compatible storage service. Extends the Blob interface for compatibility with web APIs.
- readonly bucket?: string
The bucket name containing the file.
const file = s3.file("s3://my-bucket/file.txt"); console.log(file.bucket); // "my-bucket"
- readonly name?: string
The name or path of the file in the bucket.
const file = s3.file("folder/image.jpg"); console.log(file.name); // "folder/image.jpg"
- readonly readable: ReadableStream<Uint8Array<ArrayBuffer>>
Gets a readable stream of the file's content. Useful for processing large files without loading them entirely into memory.
// Basic streaming read const stream = file.stream(); for await (const chunk of stream) { console.log('Received chunk:', chunk); }
- unlink: () => Promise<void>
Alias for delete() method. Provided for compatibility with Node.js fs API naming.
await file.unlink();
Returns a promise that resolves to the contents of the blob as an ArrayBuffer
Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer()).
Deletes the file from S3.
@returns Promise that resolves when deletion is complete
// Basic deletion await file.delete();
Checks if the file exists in S3. Uses HTTP HEAD request to efficiently check existence without downloading.
@returns Promise resolving to true if file exists, false otherwise
// Basic existence check if (await file.exists()) { console.log("File exists in S3"); }
Read the data from the blob as a FormData object.
This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.
The type property of the blob is used to determine the format of the body.
This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.
Read the data from the blob as a JSON object.
This first decodes the data from UTF-8, then parses it as JSON.
- ): string;
Generates a presigned URL for the file. Allows temporary access to the file without exposing credentials.
@param options Configuration for the presigned URL
@returns Presigned URL string
// Basic download URL const url = file.presign({ expiresIn: 3600 // 1 hour });
- begin?: number,end?: number,contentType?: string
Creates a new S3File representing a slice of the original file. Uses HTTP Range headers for efficient partial downloads.
@param begin Starting byte offset
@param end Ending byte offset (exclusive)
@param contentType Optional MIME type for the slice
@returns A new S3File representing the specified range
// Reading file header const header = file.slice(0, 1024); const headerText = await header.text();
Returns a promise that resolves to the contents of the blob as a string
- data: string | ArrayBuffer | SharedArrayBuffer | Blob | BunFile | Request | Response | ArrayBufferView<ArrayBufferLike> | S3File,): Promise<number>;
Uploads data to S3. Supports various input types and automatically handles large files.
@param data The data to upload
@param options Upload configuration options
@returns Promise resolving to number of bytes written
// Writing string data await file.write("Hello World", { type: "text/plain" });
Creates a writable stream for uploading data. Suitable for large files as it uses multipart upload.
@param options Configuration for the upload
@returns A NetworkSink for writing data
// Basic streaming write const writer = file.writer({ type: "application/json" }); writer.write('{"hello": '); writer.write('"world"}'); await writer.end();
interface Server<WebSocketData>
HTTP & HTTPS Server
To start the server, see serve
For performance, Bun pre-allocates most of the data for 2048 concurrent requests. That means starting a new server allocates about 500 KB of memory. Try to avoid starting and stopping the server often (unless it's a new instance of bun).
Powered by a fork of uWebSockets. Thank you @alexhultman.
- readonly development: boolean
Is the server running in development mode?
In development mode, Bun.serve() returns rendered error messages with stack traces instead of a generic 500 error. This makes debugging easier, but development mode shouldn't be used in production or you will risk leaking sensitive information.
- readonly hostname: undefined | string
The hostname the server is listening on. Does not include the port.
This will be undefined when the server is listening on a unix socket.
"localhost"
- readonly id: string
An identifier of the server instance
When bun is started with the --hot flag, this ID is used to hot reload the server without interrupting pending requests or websockets.
When bun is not started with the --hot flag, this ID is currently unused.
- readonly port: undefined | number
The port the server is listening on.
This will be undefined when the server is listening on a unix socket.
3000
- topic: string,compress?: boolean): number;
Send a message to all connected ServerWebSocket subscribed to a topic
@param topic The topic to publish to
@param data The data to send
@param compress Should the data be compressed? Ignored if the client does not support compression.
@returns 0 if the message was dropped, -1 if backpressure was applied, or the number of bytes sent.
server.publish("chat", "Hello World");
Undo a call to Server.unref
If the Server has already been stopped, this does nothing.
If Server.ref is called multiple times, this does nothing. Think of it as a boolean toggle.
Update the fetch and error handlers without restarting the server.
This is useful if you want to change the behavior of your server without restarting it or for hot reloading.
// create the server const server = Bun.serve({ fetch(request) { return new Response("Hello World v1") } }); // Update the server to return a different response server.reload({ fetch(request) { return new Response("Hello World v2") } });
Passing other options such as port or hostname won't do anything.
Returns the client IP address and port of the given Request. If the request was closed or is a unix socket, returns null.
export default { async fetch(request, server) { return new Response(server.requestIP(request)); } }
- stop(closeActiveConnections?: boolean): Promise<void>;
Stop listening to prevent new connections from being accepted.
By default, it does not cancel in-flight requests or websockets. That means it may take some time before all network activity stops.
@param closeActiveConnections Immediately terminate in-flight requests, websockets, and stop accepting new connections.
- topic: string): number;
A count of connections subscribed to a given topic
This operation will loop through each topic internally to get the count.
@param topic the websocket topic to check how many subscribers are connected to
@returns the number of subscribers
Don't keep the process alive if this server is the only thing left. Active connections may continue to keep the process alive.
By default, the server is ref'd.
To prevent new connections from being accepted, use Server.stop
- ...options: [WebSocketData] extends [undefined] ? [options?: { data: undefined; headers: HeadersInit }] : [options: { data: WebSocketData; headers: HeadersInit }]): boolean;
Upgrade a Request to a ServerWebSocket
@param request The Request to upgrade
@param options Pass headers or attach data to the ServerWebSocket
@returns true if the upgrade was successful and false if it failed
import { serve } from "bun"; const server: Bun.Server<{ user: string }> = serve({ websocket: { open: (ws) => { console.log("Client connected"); }, message: (ws, message) => { console.log("Client sent message", message); }, close: (ws) => { console.log("Client disconnected"); }, }, fetch(req, server) { const url = new URL(req.url); if (url.pathname === "/chat") { const upgraded = server.upgrade(req, { data: {user: "John Doe"} }); if (!upgraded) { return new Response("Upgrade failed", { status: 400 }); } } return new Response("Hello World"); }, });
What you pass to data is available on the ServerWebSocket.data property.
The raw arguments passed to the process, including flags passed to Bun. If you want to easily read flags passed to your script, consider using process.argv instead.
A list of files embedded into the standalone executable. Lexicographically sorted by name.
If the process is not a standalone executable, this returns an empty array.
Are ANSI colors enabled for stdin and stdout?
Used for console.log
The environment variables of the process
Defaults to process.env as it was when the current Bun process launched.
Changes to process.env at runtime won't automatically be reflected in the default value. For that, you can pass process.env explicitly.
- const hash: (data: string | ArrayBufferView | ArrayBuffer | SharedArrayBuffer, seed?: number | bigint) => number | bigint & Hash
Hash a string or array buffer using Wyhash
This is not a cryptographic hash function.
Is the current global scope the main thread?
- const password: { hash(password: StringOrBuffer, algorithm?: 'argon2d' | 'argon2i' | 'argon2id' | Argon2Algorithm | BCryptAlgorithm | 'bcrypt'): Promise<string>; hashSync(password: StringOrBuffer, algorithm?: 'argon2d' | 'argon2i' | 'argon2id' | Argon2Algorithm | BCryptAlgorithm | 'bcrypt'): string; verify(password: StringOrBuffer, hash: StringOrBuffer, algorithm?: 'argon2d' | 'argon2i' | 'argon2id' | 'bcrypt'): Promise<boolean>; verifySync(password: StringOrBuffer, hash: StringOrBuffer, algorithm?: 'argon2d' | 'argon2i' | 'argon2id' | 'bcrypt'): boolean }
Hash and verify passwords using argon2 or bcrypt. The default is argon2. Password hashing functions are necessarily slow, and this object will automatically run in a worker thread.
Example with argon2
import {password} from "bun"; const hash = await password.hash("hello world"); const verify = await password.verify("hello world", hash); console.log(verify); // true
Example with bcrypt
import {password} from "bun"; const hash = await password.hash("hello world", "bcrypt"); // algorithm is optional, will be inferred from the hash if not specified const verify = await password.verify("hello world", hash, "bcrypt"); console.log(verify); // true
Default Redis client
Connection information populated from one of, in order of preference:
process.env.VALKEY_URL
process.env.REDIS_URL
"valkey://localhost:6379"
The git sha at the time the currently-running version of Bun was compiled
"a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2"
- const secrets: { delete(options: { name: string; service: string }): Promise<boolean>; get(options: { name: string; service: string }): Promise<null | string>; set(options: { allowUnrestrictedAccess: boolean; name: string; service: string; value: string }): Promise<void> }
Securely store and retrieve sensitive credentials using the operating system's native credential storage.
Uses platform-specific secure storage:
- macOS: Keychain Services
- Linux: libsecret (GNOME Keyring, KWallet, etc.)
- Windows: Windows Credential Manager
import { secrets } from "bun"; // Store a credential await secrets.set({ service: "my-cli-tool", name: "github-token", value: "ghp_xxxxxxxxxxxxxxxxxxxx" }); // Retrieve a credential const token = await secrets.get({ service: "my-cli-tool", name: "github-token" }); if (token) { console.log("Token found:", token); } else { console.log("Token not found"); } // Delete a credential const deleted = await secrets.delete({ service: "my-cli-tool", name: "github-token" }); console.log("Deleted:", deleted); // true if deleted, false if not found
The current version of Bun with the shortened commit sha of the build
"v1.2.0 (a1b2c3d4)"
- strings: TemplateStringsArray,
The Bun shell is a powerful tool for running shell commands.
const result = await $`echo "Hello, world!"`.text(); console.log(result); // "Hello, world!"
class ShellError
ShellError represents an error that occurred while executing a shell command with the Bun Shell.
try { const result = await $`exit 1`; } catch (error) { if (error instanceof $.ShellError) { console.log(error.exitCode); // 1 } }
- static stackTraceLimit: number
The Error.stackTraceLimit property specifies the number of stack frames collected by a stack trace (whether generated by new Error().stack or Error.captureStackTrace(obj)).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
Read from stdout as an ArrayBuffer
@returns Stdout as an ArrayBuffer
const output = await $`echo hello`; console.log(output.arrayBuffer()); // ArrayBuffer { byteLength: 6 }
Read from stdout as a Uint8Array
@returns Stdout as a Uint8Array
const output = await $`echo hello`; console.log(output.bytes()); // Uint8Array { byteLength: 6 }
Read from stdout as a JSON object
@returns Stdout as a JSON object
const output = await $`echo '{"hello": 123}'`; console.log(output.json()); // { hello: 123 }
- @param encoding
The encoding to use when decoding the output
@returns Stdout as a string with the given encoding
Read as UTF-8 string
const output = await $`echo hello`; console.log(output.text()); // "hello\n"
Read as base64 string
const output = await $`echo ${atob("hello")}`; console.log(output.text("base64")); // "hello\n"
- targetObject: object,constructorOpt?: Function): void;
Create .stack property on a target object
class ShellPromise
The Bun.$.ShellPromise class represents a shell command that gets executed once awaited, or called with .text(), .json(), etc.
const myShellPromise = $`echo "Hello, world!"`; const result = await myShellPromise.text(); console.log(result); // "Hello, world!"
Read from stdout as an ArrayBuffer
Automatically calls quiet
@returns A promise that resolves with stdout as an ArrayBuffer
const output = await $`echo hello`.arrayBuffer(); console.log(output); // ArrayBuffer { byteLength: 6 }
- onrejected?: null | (reason: any) => TResult | PromiseLike<TResult>
Attaches a callback for only the rejection of the Promise.
@param onrejected The callback to execute when the Promise is rejected.
@returns A Promise for the completion of the callback.
- @param newCwd
The new working directory
- env(newEnv: undefined | Dict<string> | Record<string, undefined | string>): this;
Set environment variables for the shell.
@param newEnv The new environment variables
await $`echo $FOO`.env({ ...process.env, FOO: "LOL!" }) expect(stdout.toString()).toBe("LOL!");
- onfinally?: null | () => void
Attaches a callback that is invoked when the Promise is settled (fulfilled or rejected). The resolved value cannot be modified from the callback.
@param onfinally The callback to execute when the Promise is settled (fulfilled or rejected).
@returns A Promise for the completion of the callback.
Read from stdout as a JSON object
Automatically calls quiet
@returns A promise that resolves with stdout as a JSON object
const output = await $`echo '{"hello": 123}'`.json(); console.log(output); // { hello: 123 }
Read from stdout as a string, line by line
Automatically calls quiet to disable echoing to stdout.
Configure the shell to not throw an exception on non-zero exit codes. Throwing can be re-enabled with .throws(true).
By default, the shell will throw an exception on commands which return non-zero exit codes.
By default, the shell will write to the current process's stdout and stderr, as well as buffering that output.
This configures the shell to only buffer the output.
- text(encoding?: BufferEncoding): Promise<string>;
Read from stdout as a string.
Automatically calls quiet to disable echoing to stdout.
@param encoding The encoding to use when decoding the output
@returns A promise that resolves with stdout as a string
Read as UTF-8 string
const output = await $`echo hello`.text(); console.log(output); // "hello\n"
Read as base64 string
const output = await $`echo ${atob("hello")}`.text("base64"); console.log(output); // "hello\n"
- onrejected?: null | (reason: any) => TResult2 | PromiseLike<TResult2>): Promise<TResult1 | TResult2>;
Attaches callbacks for the resolution and/or rejection of the Promise.
@param onfulfilled The callback to execute when the Promise is resolved.
@param onrejected The callback to execute when the Promise is rejected.
@returns A Promise for the completion of whichever callback is executed.
- shouldThrow: boolean): this;
Configure whether or not the shell should throw an exception on non-zero exit codes.
By default, this is configured to true.
- values: T): Promise<{ [P in keyof T]: Awaited<T[P]> }>;
Creates a Promise that is resolved with an array of results when all of the provided Promises resolve, or rejected when any Promise is rejected.
@param values An array of Promises.
@returns A new Promise.
- values: Iterable<T | PromiseLike<T>>): Promise<PromiseSettledResult<Awaited<T>>[]>;
Creates a Promise that is resolved with an array of results when all of the provided Promises resolve or reject.
@param valuesAn array of Promises.
@returnsA new Promise.
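Unlike `Promise.all`, `allSettled` never rejects; each entry reports its outcome:

```javascript
// Promise.allSettled never rejects; each entry reports status plus
// value (fulfilled) or reason (rejected).
const settled = await Promise.allSettled([
  Promise.resolve(42),
  Promise.reject(new Error("boom")),
]);

console.log(settled[0]); // { status: "fulfilled", value: 42 }
console.log(settled[1].status); // "rejected"
console.log(settled[1].reason.message); // "boom"
```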
- values: T): Promise<Awaited<T[number]>>;
The any function returns a promise that is fulfilled by the first given promise to be fulfilled, or rejected with an AggregateError containing an array of rejection reasons if all of the given promises are rejected. It resolves all elements of the passed iterable to promises as it runs this algorithm.
@param valuesAn array or iterable of Promises.
@returnsA new Promise.
- values: T): Promise<Awaited<T[number]>>;
Creates a Promise that is resolved or rejected when any of the provided Promises are resolved or rejected.
@param valuesAn array of Promises.
@returnsA new Promise.
- reason?: any): Promise<T>;
Creates a new rejected promise for the provided reason.
@param reasonThe reason the promise was rejected.
@returnsA new rejected Promise.
- value: T): Promise<Awaited<T>>;
Creates a new resolved promise for the provided value.
@param valueA promise.
@returnsA promise whose internal state matches the provided promise.
- fn: (...args: A) => T | PromiseLike<T>,...args: A): Promise<T>;
Try to run a function and return the result. If the function throws, return the result of the
catch
function.@param fnThe function to run
@param argsThe arguments to pass to the function. This is similar to
setTimeout
and avoids the extra closure.@returnsThe result of the function or the result of the
catch
function
- static withResolvers<T>(): { promise: Promise<T>; reject: (reason?: any) => void; resolve: (value?: T | PromiseLike<T>) => void };
Create a deferred promise, with exposed
resolve
andreject
methods which can be called separately.This is useful when you want to return a Promise and have code outside the Promise resolve or reject it.
const { promise, resolve, reject } = Promise.withResolvers(); setTimeout(() => { resolve("Hello world!"); }, 1000); await promise; // "Hello world!"
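The `Promise.try`-style helper described earlier can be approximated in plain JavaScript. The name `attempt` below is illustrative, not part of any API; the key property is that a synchronous throw becomes a rejection.

```javascript
// A minimal plain-JS approximation of a Promise.try-style helper: run a
// function (sync or async) and always get a promise back. `attempt` is an
// illustrative name, not an API.
function attempt(fn, ...args) {
  // Passing args here avoids an extra closure, as the docs above note.
  return new Promise((resolve) => resolve(fn(...args)));
}

const ok = await attempt((x) => x * 2, 21); // sync function, resolved promise
const viaAsync = await attempt(async () => "async result");

let caught;
try {
  await attempt(() => {
    throw new Error("sync throw becomes rejection");
  });
} catch (err) {
  caught = err.message;
}
console.log(ok, viaAsync, caught);
```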
interface ShellOutput
Read from stdout as an ArrayBuffer
@returnsStdout as an ArrayBuffer
const output = await $`echo hello`; console.log(output.arrayBuffer()); // ArrayBuffer { byteLength: 6 }
Read from stdout as an Uint8Array
@returnsStdout as an Uint8Array
const output = await $`echo hello`; console.log(output.bytes()); // Uint8Array { byteLength: 6 }
Read from stdout as a JSON object
@returnsStdout as a JSON object
const output = await $`echo '{"hello": 123}'`; console.log(output.json()); // { hello: 123 }
- @param encoding
The encoding to use when decoding the output
@returnsStdout as a string with the given encoding
Read as UTF-8 string
const output = await $`echo hello`; console.log(output.text()); // "hello\n"
Read as base64 string
const output = await $`echo hello`; console.log(output.text("base64")); // "aGVsbG8K" (base64 of "hello\n")
@param patternBrace pattern to expand
const result = braces('index.{js,jsx,ts,tsx}'); console.log(result) // ['index.js', 'index.jsx', 'index.ts', 'index.tsx']
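To make the behavior concrete, here is a minimal sketch of what single-level brace expansion does. This is not Bun's implementation and ignores nesting, ranges, and escapes:

```javascript
// A minimal sketch of brace expansion, illustrating the behavior of
// braces() above. Not Bun's implementation: no ranges or escape handling.
function expandBraces(pattern) {
  // Find the first innermost {...} group.
  const match = pattern.match(/^(.*?)\{([^{}]*)\}(.*)$/);
  if (!match) return [pattern];
  const [, prefix, body, suffix] = match;
  // Expand each comma-separated alternative, then recurse on the rest.
  return body.split(",").flatMap((alt) => expandBraces(prefix + alt + suffix));
}

console.log(expandBraces("index.{js,jsx,ts,tsx}"));
// ["index.js", "index.jsx", "index.ts", "index.tsx"]
```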
newEnv?: Dict<string> | Record<string, undefined | string>Change the default environment variables for shells created by this instance.
@param newEnvDefault environment variables to use for shells created by this instance.
import {$} from 'bun'; $.env({ BUN: "bun" }); await $`echo $BUN`; // "bun"
- size: number
Allocate a new
Uint8Array
without zeroing the bytes.This can be 3.5x faster than
new Uint8Array(size)
, but if you send uninitialized memory to your users (even unintentionally), it can potentially leak anything recently in memory.
Bundles JavaScript, TypeScript, CSS, HTML and other supported files into optimized outputs.
@param configBuild configuration options
@returnsPromise that resolves to build output containing generated artifacts and build status
Basic usage - Bundle a single entrypoint and check results
const result = await Bun.build({ entrypoints: ['./src/index.tsx'], outdir: './dist' }); if (!result.success) { console.error('Build failed:', result.logs); process.exit(1); }
- outputFormat?: 'number' | 'hex' | 'ansi' | 'ansi-16' | 'ansi-16m' | 'ansi-256' | 'css' | 'HEX' | 'hsl' | 'lab' | 'rgb' | 'rgba'): null | string;
Converts formats of colors
@param inputA value that could possibly be a color
@param outputFormatAn optional output format
outputFormat: '[rgb]'): null | [number, number, number];Convert any color input to rgb
@param inputAny color input
@param outputFormatSpecify
[rgb]
to output as an array withr
,g
, andb
propertiesoutputFormat: '[rgba]'): null | [number, number, number, number];Convert any color input to rgba
@param inputAny color input
@param outputFormatSpecify
[rgba]
to output as an array withr
,g
,b
, anda
propertiesoutputFormat: '{rgb}'): null | { b: number; g: number; r: number };Convert any color input to an rgb object
@param inputAny color input
@param outputFormatSpecify
{rgb}
to output as an object withr
,g
, andb
propertiesoutputFormat: '{rgba}'): null | { a: number; b: number; g: number; r: number };Convert any color input to rgba
@param inputAny color input
@param outputFormatSpecify {rgba} to output as an object with
r
,g
,b
, anda
propertiesoutputFormat: 'number'): null | number;Convert any color input to a number
@param inputAny color input
@param outputFormatSpecify
number
to output as a number
- maxLength?: number
Concatenate an array of typed arrays into a single
ArrayBuffer
. This is a fast path.You can do this manually if you'd like, but this function will generally be a little faster.
If you want a
Uint8Array
instead, considerBuffer.concat
.@param buffersAn array of typed arrays to concatenate.
@returnsAn
ArrayBuffer
with the data from all the buffers.Here is similar code to do it manually, except about 30% slower:
var chunks = [...]; var size = 0; for (const chunk of chunks) { size += chunk.byteLength; } var buffer = new ArrayBuffer(size); var view = new Uint8Array(buffer); var offset = 0; for (const chunk of chunks) { view.set(chunk, offset); offset += chunk.byteLength; } return buffer;
This function is faster because it uses uninitialized memory when copying. Since the entire length of the buffer is known, it is safe to use uninitialized memory.
maxLength: number,asUint8Array: falseConcatenate an array of typed arrays into a single
ArrayBuffer
. This is a fast path.You can do this manually if you'd like, but this function will generally be a little faster.
If you want a
Uint8Array
instead, considerBuffer.concat
.@param buffersAn array of typed arrays to concatenate.
@returnsAn
ArrayBuffer
with the data from all the buffers.Here is similar code to do it manually, except about 30% slower:
var chunks = [...]; var size = 0; for (const chunk of chunks) { size += chunk.byteLength; } var buffer = new ArrayBuffer(size); var view = new Uint8Array(buffer); var offset = 0; for (const chunk of chunks) { view.set(chunk, offset); offset += chunk.byteLength; } return buffer;
This function is faster because it uses uninitialized memory when copying. Since the entire length of the buffer is known, it is safe to use uninitialized memory.
maxLength: number,asUint8Array: trueConcatenate an array of typed arrays into a single
ArrayBuffer
. This is a fast path.You can do this manually if you'd like, but this function will generally be a little faster.
If you want a
Uint8Array
instead, considerBuffer.concat
.@param buffersAn array of typed arrays to concatenate.
@returnsAn
ArrayBuffer
with the data from all the buffers.Here is similar code to do it manually, except about 30% slower:
var chunks = [...]; var size = 0; for (const chunk of chunks) { size += chunk.byteLength; } var buffer = new ArrayBuffer(size); var view = new Uint8Array(buffer); var offset = 0; for (const chunk of chunks) { view.set(chunk, offset); offset += chunk.byteLength; } return buffer;
This function is faster because it uses uninitialized memory when copying. Since the entire length of the buffer is known, it is safe to use uninitialized memory.
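The manual loop shown above, packaged as a self-contained, runnable function for comparison:

```javascript
// The manual concatenation loop from above, wrapped in a runnable function.
// Bun.concatArrayBuffers is faster because it can skip zero-initialization.
function concatToArrayBuffer(chunks) {
  let size = 0;
  for (const chunk of chunks) size += chunk.byteLength;
  const buffer = new ArrayBuffer(size);
  const view = new Uint8Array(buffer);
  let offset = 0;
  for (const chunk of chunks) {
    view.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return buffer;
}

const combined = concatToArrayBuffer([
  new Uint8Array([1, 2]),
  new Uint8Array([3, 4, 5]),
]);
console.log(new Uint8Array(combined)); // Uint8Array [1, 2, 3, 4, 5]
```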
Create a TCP client that connects to a server via a TCP socket
Create a TCP client that connects to a server via a unix socket
- a: any,b: any,strict?: boolean): boolean;
Fast deep-equality check of two objects.
This also powers expect().toEqual in
bun:test
@param strict
- subset: unknown,a: unknown): boolean;
Returns true if all properties in the subset exist in the other and have equal values.
This also powers expect().toMatchObject in
bun:test
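A minimal sketch of the subset check this describes: every key in the subset must exist in the other object with a deeply-equal value. This is illustrative only; Bun's implementation handles many more edge cases (arrays, cycles, special objects):

```javascript
// Illustrative sketch of a toMatchObject-style subset check, not Bun's
// implementation: every key in `subset` must exist in `obj` with a
// deeply-matching value.
function matchesSubset(subset, obj) {
  if (typeof subset !== "object" || subset === null) {
    return Object.is(subset, obj); // leaf values compare directly
  }
  if (typeof obj !== "object" || obj === null) return false;
  return Object.keys(subset).every((key) =>
    matchesSubset(subset[key], obj[key]),
  );
}

console.log(matchesSubset({ a: 1 }, { a: 1, b: 2 })); // true
console.log(matchesSubset({ a: { x: 1 } }, { a: { x: 1, y: 2 }, b: 3 })); // true
console.log(matchesSubset({ a: 2 }, { a: 1 })); // false
```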
Compresses a chunk of data with
zlib
DEFLATE algorithm.@param dataThe buffer of data to compress
@param optionsCompression options to use
@returnsThe output buffer with the compressed data
- input: string | number | boolean | object): string;
Escape the following characters in a string:
Blob
powered by the fastest system calls available for operating on files.This Blob is lazy. That means it won't do any work until you read from it.
size
will not be valid until the contents of the file are read at least once.type
is auto-set based on the file extension when possible
@param pathThe path to the file (lazily loaded) if the path starts with
s3://
it will behave like S3Fileconst file = Bun.file("./hello.json"); console.log(file.type); // "application/json" console.log(await file.json()); // { hello: "world" }
Blob
that leverages the fastest system calls available to operate on files.This Blob is lazy. It won't do any work until you read from it. Errors propagate as promise rejections.
Blob.size
will not be valid until the contents of the file are read at least once.Blob.type
will have a default set based on the file extension@param pathThe path to the file as a byte buffer (the buffer is copied) if the path starts with
s3://
it will behave like S3Fileconst file = Bun.file(new TextEncoder().encode("./hello.json")); console.log(file.type); // "application/json"
fileDescriptor: number,Blob
powered by the fastest system calls available for operating on files.This Blob is lazy. That means it won't do any work until you read from it.
size
will not be valid until the contents of the file are read at least once.
@param fileDescriptorThe file descriptor of the file
const file = Bun.file(fd);
- @param url
The URL to convert.
@returnsA filesystem path.
const path = Bun.fileURLToPath(new URL("file:///foo/bar.txt")); console.log(path); // "/foo/bar.txt"
- format?: 'jsc'
Show precise statistics about memory usage of your application
Generate a heap snapshot in JavaScriptCore's format that can be viewed with
bun --inspect
or Safari's Web Inspectorformat: 'v8'): string;Show precise statistics about memory usage of your application
Generate a V8 Heap Snapshot that can be used with Chrome DevTools & Visual Studio Code
This is a JSON string that can be saved to a file.
const snapshot = Bun.generateHeapSnapshot("v8"); await Bun.write("heap.heapsnapshot", snapshot);
Decompresses a chunk of data with
zlib
GUNZIP algorithm.@param dataThe buffer of data to decompress
@returnsThe output buffer with the decompressed data
Compresses a chunk of data with
zlib
GZIP algorithm.@param dataThe buffer of data to compress
@param optionsCompression options to use
@returnsThe output buffer with the compressed data
- offset?: number): number;
Find the index of a newline character in potentially ill-formed UTF-8 text.
This is sort of like readline() except without the IO.
Decompresses a chunk of data with
zlib
INFLATE algorithm.@param dataThe buffer of data to decompress
@returnsThe output buffer with the decompressed data
- arg: any,): string;
Pretty-print an object the same as console.log to a
string
Supports JSX
@param argThe value to inspect
@param optionsOptions for the inspection
That can be used to declare custom inspect functions.
tabularData: object | unknown[],properties?: string[],options?: { colors: boolean }): string;Pretty-print an object or array as a table
Like console.table, except it returns a string
tabularData: object | unknown[],options?: { colors: boolean }): string;Pretty-print an object or array as a table
Like console.table, except it returns a string
Create a TCP server that listens on a port
Create a TCP server that listens on a unix socket
Open a file as a live-updating
Uint8Array
without copying memory- Writing to the array writes to the file.
- Reading from the array reads from the file.
This uses the
mmap()
syscall under the hood.This API inherently has some rough edges:
- It does not support empty files. It will throw a
SystemError
withEINVAL
- Usage on shared/networked filesystems is discouraged. It will be very slow.
- If you delete or truncate the file, that will crash bun (a segmentation fault).
To close the file, set the array to
null
and it will be garbage collected eventually.
Returns the number of nanoseconds since the process was started.
This function uses a high-resolution monotonic system timer to provide precise time measurements. In JavaScript, numbers are represented as double-precision floating-point values (IEEE 754), which can safely represent integers up to 2^53 - 1 (Number.MAX_SAFE_INTEGER).
Due to this limitation, while the internal counter may continue beyond this point, the precision of the returned value will degrade after 14.8 weeks of uptime (when the nanosecond count exceeds Number.MAX_SAFE_INTEGER). Beyond this point, the function will continue to count but with reduced precision, which might affect time calculations and comparisons in long-running applications.
@returnsThe number of nanoseconds since the process was started, with precise values up to Number.MAX_SAFE_INTEGER.
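The precision limit quoted above can be checked with a little arithmetic; the result agrees with the roughly-14.8-week figure:

```javascript
// Sanity-check the precision limit: nanosecond counts stay exactly
// representable in a double only up to Number.MAX_SAFE_INTEGER (2^53 - 1).
const NS_PER_WEEK = 1e9 * 60 * 60 * 24 * 7; // nanoseconds in one week
const weeksUntilPrecisionLoss = Number.MAX_SAFE_INTEGER / NS_PER_WEEK;
console.log(weeksUntilPrecisionLoss); // ~14.89 weeks of uptime
```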
- path: string,): void;
Open a file in your local editor. Auto-detects via
$VISUAL
||$EDITOR
@param pathpath to open
- @param path
The path to convert.
@returnsA URL with the file:// scheme.
const url = Bun.pathToFileURL("/foo/bar.txt"); console.log(url.href); // "file:///foo/bar.txt"
Internally, this function uses WebKit's URL API to convert the path to a file:// URL.
- encoding?: 'base64' | 'base64url' | 'hex'): string;
Generate a UUIDv5, which is a name-based UUID based on the SHA-1 hash of a namespace UUID and a name.
@param nameThe name to use for the UUID
@param namespaceThe namespace to use for the UUID
@param encodingThe encoding to use for the UUID
import { randomUUIDv5 } from "bun"; const uuid = randomUUIDv5("www.example.com", "dns"); console.log(uuid); // "2ed6657d-e927-568b-95e1-2665a8aea6a2"
import { randomUUIDv5 } from "bun"; const uuid = randomUUIDv5("www.example.com", "url"); console.log(uuid); // a stable UUIDv5 derived from the URL namespace
encoding: 'buffer'Generate a UUIDv5 as a Buffer
@param nameThe name to use for the UUID
@param namespaceThe namespace to use for the UUID
@param encodingThe encoding to use for the UUID
import { randomUUIDv5 } from "bun"; const uuid = randomUUIDv5("www.example.com", "url", "buffer"); console.log(uuid); // <Buffer ...> (16-byte UUID)
- encoding?: 'base64' | 'base64url' | 'hex',timestamp?: number | Date): string;
Generate a UUIDv7, which is a sequential ID based on the current timestamp with a random component.
When the same timestamp is used multiple times, a monotonically increasing counter is appended to allow sorting. The final 8 bytes are cryptographically random. When the timestamp changes, the counter resets to a pseudo-random integer.
@param encoding"hex" | "base64" | "base64url"
@param timestampUnix timestamp in milliseconds, defaults to
Date.now()
import { randomUUIDv7 } from "bun"; const array = [ randomUUIDv7(), randomUUIDv7(), randomUUIDv7() ]; // [ "0192ce07-8c4f-7d66-afec-2482b5c9b03c", "0192ce07-8c4f-7d67-805f-0f71581b5622", "0192ce07-8c4f-7d68-8170-6816e4451a58" ]
encoding: 'buffer',timestamp?: number | DateGenerate a UUIDv7 as a Buffer
@param encoding"buffer"
@param timestampUnix timestamp in milliseconds, defaults to
Date.now()
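To illustrate why UUIDv7 values sort by creation time, here is a toy UUIDv7-style generator. It is not Bun's implementation and omits the monotonic counter described above; it only shows the layout (48-bit millisecond timestamp, version/variant bits, random tail):

```javascript
// Toy UUIDv7-style generator, for illustration only (not Bun's
// implementation; no monotonic counter, Math.random is not crypto-secure).
function toHex(n, digits) {
  return n.toString(16).padStart(digits, "0");
}

function uuidv7ish(timestamp = Date.now()) {
  const ts = toHex(timestamp, 12); // 48-bit timestamp, big-endian hex
  const rand = () => Math.floor(Math.random() * 0x10000);
  return [
    ts.slice(0, 8),
    ts.slice(8, 12),
    "7" + toHex(rand() & 0x0fff, 3), // version 7 nibble + 12 random bits
    toHex(0x8000 | (rand() & 0x3fff), 4), // RFC variant bits (10xx...)
    toHex(rand(), 4) + toHex(rand(), 4) + toHex(rand(), 4),
  ].join("-");
}

const earlier = uuidv7ish(1000);
const later = uuidv7ish(2000);
console.log(earlier < later); // true: string order follows the timestamp
```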
- ): T[] | Promise<T[]>;
Consume all data from a ReadableStream until it closes or errors.
@param streamThe stream to consume
@returnsA promise that resolves with the chunks as an array
Consume all data from a ReadableStream until it closes or errors.
Concatenate the chunks into a single ArrayBuffer.
Each chunk must be a TypedArray or an ArrayBuffer. If you need to support chunks of different types, consider readableStreamToBlob
@param streamThe stream to consume.
@returnsA promise that resolves with the concatenated chunks or the concatenated chunks as an
ArrayBuffer
.- stream: ReadableStream<string | Uint8Array<ArrayBufferLike> | Uint8ClampedArray<ArrayBufferLike> | Uint16Array<ArrayBufferLike> | Uint32Array<ArrayBufferLike> | Int8Array<ArrayBufferLike> | Int16Array<ArrayBufferLike> | Int32Array<ArrayBufferLike> | BigUint64Array<ArrayBufferLike> | BigInt64Array<ArrayBufferLike> | Float16Array<ArrayBufferLike> | Float32Array<ArrayBufferLike> | Float64Array<ArrayBufferLike> | DataView<ArrayBufferLike>>,multipartBoundaryExcludingDashes?: string | Uint8Array<ArrayBufferLike> | Uint8ClampedArray<ArrayBufferLike> | Uint16Array<ArrayBufferLike> | Uint32Array<ArrayBufferLike> | Int8Array<ArrayBufferLike> | Int16Array<ArrayBufferLike> | Int32Array<ArrayBufferLike> | BigUint64Array<ArrayBufferLike> | BigInt64Array<ArrayBufferLike> | Float16Array<ArrayBufferLike> | Float32Array<ArrayBufferLike> | Float64Array<ArrayBufferLike> | DataView<ArrayBufferLike>
Consume all data from a ReadableStream until it closes or errors.
Reads the multi-part or URL-encoded form data into a FormData object
@param streamThe stream to consume.
@param multipartBoundaryExcludingDashesOptional boundary to use for multipart form data. If none is provided, assumes it is a URLEncoded form.
@returnsA promise that resolves with the data encoded into a FormData object.
Multipart form data example
// without dashes const boundary = "WebKitFormBoundary" + Math.random().toString(16).slice(2); const myStream = getStreamFromSomewhere() // ... const formData = await Bun.readableStreamToFormData(stream, boundary); formData.get("foo"); // "bar"
URL-encoded form data example
const stream = new Response("hello=123").body; const formData = await Bun.readableStreamToFormData(stream); formData.get("hello"); // "123"
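The URL-encoded case can also be exercised with only the standard Fetch API (`Response.formData`), which Bun implements and which also works in other modern runtimes:

```javascript
// Parse URL-encoded form data using only the standard Fetch API.
const response = new Response("hello=123&name=bun", {
  headers: { "content-type": "application/x-www-form-urlencoded" },
});
const formData = await response.formData();
console.log(formData.get("hello")); // "123"
console.log(formData.get("name")); // "bun"
```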
- moduleId: string,parent: string): Promise<string>;
Resolve a
moduleId
as though it were imported fromparent
On failure, throws a
ResolveMessage
For now, use the sync version. There is zero performance benefit to using this async version. It exists for future-proofing.
- moduleId: string,parent: string): string;
Synchronously resolve a
moduleId
as though it were imported fromparent
On failure, throws a
ResolveMessage
Bun.serve provides a high-performance HTTP server with built-in routing support. It enables both function-based and object-based route handlers with type-safe parameters and method-specific handling.
@param optionsServer configuration options
Basic Usage
Bun.serve({ port: 3000, fetch(req) { return new Response("Hello World"); } });
- @param input
string
,Uint8Array
, orArrayBuffer
to hash.Uint8Array
orArrayBuffer
will be faster@param hashIntooptional
Uint8Array
to write the hash to. 32 bytes minimum.This hashing function balances speed with cryptographic strength. This does not encrypt or decrypt data.
The implementation uses BoringSSL (used in Chromium & Go)
The equivalent
openssl
command is:# You will need OpenSSL 3 or later openssl sha512-256 /path/to/file
@param inputstring
,Uint8Array
, orArrayBuffer
to hash.Uint8Array
orArrayBuffer
will be faster@param encodingDigestEncoding
to return the hash inThis hashing function balances speed with cryptographic strength. This does not encrypt or decrypt data.
The implementation uses BoringSSL (used in Chromium & Go)
The equivalent
openssl
command is:# You will need OpenSSL 3 or later openssl sha512-256 /path/to/file
- ms: number | Date): Promise<void>;
Resolve a
Promise
after milliseconds. This is like setTimeout except it returns aPromise
.@param msmilliseconds to delay resolving the promise. This is a minimum number. It may take longer. If a Date is passed, it will sleep until the Date is reached.
Sleep for 1 second
import { sleep } from "bun"; await sleep(1000);
Sleep for 10 milliseconds
await Bun.sleep(10);
Sleep until
Date
const target = new Date(); target.setSeconds(target.getSeconds() + 1); await Bun.sleep(target);
Internally,
Bun.sleep
is the equivalent ofawait new Promise((resolve) => setTimeout(resolve, ms));
As always, you can use
Bun.sleep
or the importedsleep
function interchangeably.
- ms: number): void;
Sleep the thread for a given number of milliseconds
This is a blocking function.
Internally, it calls nanosleep(2)
- function spawn<In extends Writable = 'ignore', Out extends Readable = 'pipe', Err extends Readable = 'inherit'>(cmds: string[],
Spawn a new process
const {stdout} = Bun.spawn(["echo", "hello"]); const text = await readableStreamToText(stdout); console.log(text); // "hello\n"
Internally, this uses posix_spawn(2)
@param cmdsThe command to run
The first argument will be resolved to an absolute executable path. It must be a file, not a directory.
If you explicitly set
PATH
inenv
, thatPATH
will be used to resolve the executable instead of the defaultPATH
.To check if the command exists before running it, use
Bun.which(bin)
. - function spawnSync<In extends Writable = 'ignore', Out extends Readable = 'pipe', Err extends Readable = 'pipe'>(cmds: string[],
Synchronously spawn a new process
const {stdout} = Bun.spawnSync(["echo", "hello"]); console.log(stdout.toString()); // "hello\n"
Internally, this uses posix_spawn(2)
@param cmdsThe command to run
The first argument will be resolved to an absolute executable path. It must be a file, not a directory.
If you explicitly set
PATH
inenv
, thatPATH
will be used to resolve the executable instead of the defaultPATH
.To check if the command exists before running it, use
Bun.which(bin)
. - input: string,): number;
Get the column count of a string as it would be displayed in a terminal. Supports ANSI escape codes, emoji, and wide characters.
This is useful for:
- Aligning text in a terminal
- Quickly checking if a string contains ANSI escape codes
- Measuring the width of a string in a terminal
This API is designed to match the popular "string-width" package, so that existing code can be easily ported to Bun and vice versa.
@param inputThe string to measure
@returnsThe width of the string in columns
import { stringWidth } from "bun"; console.log(stringWidth("abc")); // 3 console.log(stringWidth("👩👩👧👦")); // 1 console.log(stringWidth("\u001b[31mhello\u001b[39m")); // 5 console.log(stringWidth("\u001b[31mhello\u001b[39m", { countAnsiEscapeCodes: false })); // 5 console.log(stringWidth("\u001b[31mhello\u001b[39m", { countAnsiEscapeCodes: true })); // 13
- @param input
The string to remove ANSI escape codes from.
@returnsThe string with ANSI escape codes removed.
import { stripANSI } from "bun"; console.log(stripANSI("\u001b[31mhello\u001b[39m")); // "hello"
Create a UDP socket
@param optionsThe options to use when creating the server
Create a UDP socket
@param optionsThe options to use when creating the server
- command: string,): null | string;
Find the path to an executable, similar to typing which in your terminal. Reads the
PATH
environment variable unless overridden withoptions.PATH
.@param commandThe name of the executable or script to find
@param optionsOptions for the search
- options?: { createPath: boolean; mode: number }): Promise<number>;
Use the fastest syscalls available to copy from
input
intodestination
.If
destination
exists, it must be a regular file or symlink to a file. Ifdestination
's directory does not exist, it will be created by default.@param destinationThe file or file path to write to
@param inputThe data to copy into
destination
.@param optionsOptions for the write
@returnsA promise that resolves with the number of bytes written.
options?: { createPath: boolean }): Promise<number>;Persist a Response body to disk.
@param destinationThe file to write to. If the file doesn't exist, it will be created and if the file does exist, it will be overwritten. If
input
's size is less thandestination
's size,destination
will be truncated.@param inputResponse
object@param optionsOptions for the write
@returnsA promise that resolves with the number of bytes written.
options?: { createPath: boolean }): Promise<number>;Persist a Response body to disk.
@param destinationPathThe file path to write to. If the file doesn't exist, it will be created and if the file does exist, it will be overwritten. If
input
's size is less thandestination
's size,destination
will be truncated.@param inputResponse
object@returnsA promise that resolves with the number of bytes written.
options?: { createPath: boolean }): Promise<number>;Use the fastest syscalls available to copy from
input
intodestination
.If
destination
exists, it must be a regular file or symlink to a file.On Linux, this uses
copy_file_range
.On macOS, when the destination doesn't already exist, this uses
clonefile()
and falls back tofcopyfile()
@param destinationThe file to write to. If the file doesn't exist, it will be created and if the file does exist, it will be overwritten. If
input
's size is less thandestination
's size,destination
will be truncated.@param inputThe file to copy from.
@returnsA promise that resolves with the number of bytes written.
options?: { createPath: boolean }): Promise<number>;Use the fastest syscalls available to copy from
input
intodestination
.If
destination
exists, it must be a regular file or symlink to a file.On Linux, this uses
copy_file_range
.On macOS, when the destination doesn't already exist, this uses
clonefile()
and falls back tofcopyfile()
@param destinationPathThe file path to write to. If the file doesn't exist, it will be created and if the file does exist, it will be overwritten. If
input
's size is less thandestination
's size,destination
will be truncated.@param inputThe file to copy from.
@returnsA promise that resolves with the number of bytes written.
- options?: { level: number }
Compresses a chunk of data with the Zstandard (zstd) compression algorithm.
@param dataThe buffer of data to compress
@param optionsCompression options to use
@returnsA promise that resolves to the output buffer with the compressed data
- options?: { level: number }
Compresses a chunk of data with the Zstandard (zstd) compression algorithm.
@param dataThe buffer of data to compress
@param optionsCompression options to use
@returnsThe output buffer with the compressed data
Decompresses a chunk of data with the Zstandard (zstd) decompression algorithm.
@param dataThe buffer of data to decompress
@returnsA promise that resolves to the output buffer with the decompressed data
Decompresses a chunk of data with the Zstandard (zstd) decompression algorithm.
@param dataThe buffer of data to decompress
@returnsThe output buffer with the decompressed data
class EventSource
EventTarget is a DOM interface implemented by objects that can receive events and may have listeners for them.
- onmessage: null | (this: EventSource, ev: MessageEvent) => any
- readonly readyState: number
Returns the state of this EventSource object's connection. It can have the values described below.
- readonly withCredentials: boolean
Returns true if the credentials mode for connection requests to the URL providing the event stream is set to "include", and false otherwise.
Not supported in Bun
- type: K,): void;
Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched.
The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture.
When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET.
When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners.
When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed.
If an AbortSignal is passed for options's signal, then the event listener will be removed when signal is aborted.
The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture.
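Two of these rules are easy to demonstrate on a plain `EventTarget`: duplicate registrations with the same type, callback, and capture are ignored, and `once` removes the listener after its first invocation:

```javascript
// addEventListener semantics on a plain EventTarget: duplicate
// (type, callback, capture) registrations are ignored, and `once`
// removes the listener after the first dispatch.
const target = new EventTarget();
let count = 0;
const handler = () => count++;

target.addEventListener("ping", handler); // registered
target.addEventListener("ping", handler); // duplicate: ignored
target.dispatchEvent(new Event("ping"));
console.log(count); // 1, not 2

target.addEventListener("pong", handler, { once: true });
target.dispatchEvent(new Event("pong"));
target.dispatchEvent(new Event("pong")); // once-listener already removed
console.log(count); // 2
```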
type: string,listener: (this: EventSource, event: MessageEvent) => any,): void;Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched.
The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture.
When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET.
When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners.
When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed.
If an AbortSignal is passed for options's signal, then the event listener will be removed when signal is aborted.
The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture.
type: string,): void;Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched.
The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture.
When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET.
When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners.
When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed.
If an AbortSignal is passed for options's signal, then the event listener will be removed when signal is aborted.
The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture.
Aborts any instances of the fetch algorithm started for this EventSource object, and sets the readyState attribute to CLOSED.
- ): boolean;
Dispatches a synthetic event event to target and returns true if either event's cancelable attribute value is false or its preventDefault() method was not invoked, and false otherwise.
Keep the event loop alive while connection is open or reconnecting
Not available in browsers
- type: K,): void;
Removes the event listener in target's event listener list with the same type, callback, and options.
type: string,listener: (this: EventSource, event: MessageEvent) => any,): void;Removes the event listener in target's event listener list with the same type, callback, and options.
type: string,): void;Removes the event listener in target's event listener list with the same type, callback, and options.
Do not keep the event loop alive while connection is open or reconnecting
Not available in browsers
class WebSocket
A WebSocket client implementation
const ws = new WebSocket("ws://localhost:8080", { headers: { "x-custom-header": "hello", }, }); ws.addEventListener("open", () => { console.log("Connected to server"); }); ws.addEventListener("message", (event) => { console.log("Received message:", event.data); }); ws.send("Hello, server!"); ws.terminate();
- readonly bufferedAmount: number
The number of bytes of data that have been queued using send() but not yet transmitted to the network
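For example, `bufferedAmount` can drive a simple backpressure check before calling `send()`. The threshold and helper below are illustrative, not part of Bun's API:

```typescript
// Hypothetical backpressure helper: only send when the socket's queue of
// unsent bytes is below a chosen high-water mark.
const HIGH_WATER_MARK = 64 * 1024; // 64 KiB, an arbitrary threshold

function canSend(ws: { bufferedAmount: number }): boolean {
  return ws.bufferedAmount < HIGH_WATER_MARK;
}

// Against a real WebSocket this would look like:
//   if (canSend(ws)) ws.send(chunk); else queueForLater(chunk);
console.log(canSend({ bufferedAmount: 0 }));          // true
console.log(canSend({ bufferedAmount: 128 * 1024 })); // false
```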
- onmessage: null | (this: WebSocket, ev: MessageEvent) => any
Event handler for message event
- type: K,): void;
Registers an event handler of a specific event type on the WebSocket.
@param typeA case-sensitive string representing the event type to listen for
@param listenerThe function to be called when the event occurs
@param optionsAn options object that specifies characteristics about the event listener
type: string,): void;Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched.
The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture.
When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET.
When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners.
When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed.
If an AbortSignal is passed for options's signal, then the event listener will be removed when signal is aborted.
The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture.
- @param code
A numeric value indicating the status code
@param reasonA human-readable string explaining why the connection is closing
- event: Event): boolean;
Dispatches a synthetic event event to target and returns true if either event's cancelable attribute value is false or its preventDefault() method was not invoked, and false otherwise.
- @param data
Optional data to include in the ping frame
- @param data
Optional data to include in the pong frame
- type: K,): void;
Removes an event listener previously registered with addEventListener()
@param typeA case-sensitive string representing the event type to remove
@param listenerThe function to remove from the event target
@param optionsAn options object that specifies characteristics about the event listener
type: string,): void;Removes the event listener in target's event listener list with the same type, callback, and options.
- @param data
The data to send to the server
Immediately terminates the connection
class Worker
EventTarget is a DOM interface implemented by objects that can receive events and may have listeners for them.
- onmessage: null | (this: Worker, ev: MessageEvent) => any
- onmessageerror: null | (this: Worker, ev: MessageEvent) => any
- threadId: number
An integer identifier for the referenced thread. Inside the worker thread, it is available as
require('node:worker_threads').threadId
. This value is unique for each
Worker
instance inside a single process.
- type: K,): void;
Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched.
The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture.
When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET.
When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners.
When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed.
If an AbortSignal is passed for options's signal, then the event listener will be removed when signal is aborted.
The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture.
type: string,): void;Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched.
The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture.
When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET.
When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners.
When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed.
If an AbortSignal is passed for options's signal, then the event listener will be removed when signal is aborted.
The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture.
- event: Event): boolean;
Dispatches a synthetic event event to target and returns true if either event's cancelable attribute value is false or its preventDefault() method was not invoked, and false otherwise.
- message: any,): void;
Clones message and transmits it to worker's global environment. transfer can be passed as a list of objects that are to be transferred rather than cloned.
Opposite of
unref()
. Calling
ref()
on a previously
unref()
ed worker prevents the program from exiting if it's the only active handle left (the default behavior). If the worker is already
ref()
ed, calling
ref()
again has no effect.
- type: K,): void;
Removes the event listener in target's event listener list with the same type, callback, and options.
type: string,): void;Removes the event listener in target's event listener list with the same type, callback, and options.
Aborts worker's associated global environment.
Calling
unref()
on a worker allows the thread to exit if this is the only active handle in the event system. If the worker is already
unref()
ed, calling
unref()
again has no effect.
Type definitions
namespace __experimental
interface SSGPageProps<Params extends SSGParamsLike = SSGParamsLike>
Props interface for SSG page components.
This interface defines the shape of props that will be passed to your static page components during the build process. The
params
object contains the route parameters extracted from the URL pattern.// Blog post component props interface BlogPageProps extends SSGPageProps<{ slug: string }> { // params: { slug: string } is automatically included } // Product page component props interface ProductPageProps extends SSGPageProps<{ category: string; id: string; }> { // params: { category: string; id: string } is automatically included } // Usage in component function BlogPost({ params }: BlogPageProps) { const { slug } = params; // TypeScript knows slug is a string return <h1>Blog post: {slug}</h1>; }
interface SSGParamsLike
Base interface for static site generation route parameters.
Supports both single string values and arrays of strings for dynamic route segments. This is typically used for route parameters like
[slug]
,[...rest]
, or[id]
.// Simple slug parameter type BlogParams = { slug: string }; // Multiple parameters type ProductParams = { category: string; id: string; }; // Catch-all routes with string arrays type DocsParams = { path: string[]; };
interface SSGPath<Params extends SSGParamsLike = SSGParamsLike>
Configuration object for a single static route to be generated.
Each path object contains the parameters needed to render a specific instance of a dynamic route at build time.
// Single blog post path const blogPath: SSGPath<{ slug: string }> = { params: { slug: "my-first-post" } }; // Product page with multiple params const productPath: SSGPath<{ category: string; id: string }> = { params: { category: "electronics", id: "laptop-123" } }; // Documentation with catch-all route const docsPath: SSGPath<{ path: string[] }> = { params: { path: ["getting-started", "installation"] } };
- type GetStaticPaths<Params extends SSGParamsLike = SSGParamsLike> = () => MaybePromise<{ paths: SSGPaths<Params> }>
getStaticPaths is Bun's implementation of SSG (Static Site Generation) path determination.
This function is called at your app's build time to determine which dynamic routes should be pre-rendered as static pages. It returns an array of path parameters that will be used to generate static pages for dynamic routes (e.g., [slug].tsx, [category]/[id].tsx).
The function can be either synchronous or asynchronous, allowing you to fetch data from APIs, databases, or file systems to determine which paths should be statically generated.
// In pages/blog/[slug].tsx export const getStaticPaths: GetStaticPaths<{ slug: string }> = async () => { // Fetch all blog posts from your CMS or API at build time const posts = await fetchBlogPosts(); return { paths: posts.map((post) => ({ params: { slug: post.slug } })) }; }; // In pages/products/[category]/[id].tsx export const getStaticPaths: GetStaticPaths<{ category: string; id: string; }> = async () => { // Fetch products from database const products = await db.products.findMany({ select: { id: true, category: { slug: true } } }); return { paths: products.map(product => ({ params: { category: product.category.slug, id: product.id } })) }; }; // In pages/docs/[...path].tsx (catch-all route) export const getStaticPaths: GetStaticPaths<{ path: string[] }> = async () => { // Read documentation structure from file system const docPaths = await getDocumentationPaths('./content/docs'); return { paths: docPaths.map(docPath => ({ params: { path: docPath.split('/') } })) }; }; // Synchronous example with static data export const getStaticPaths: GetStaticPaths<{ id: string }> = () => { const staticIds = ['1', '2', '3', '4', '5']; return { paths: staticIds.map(id => ({ params: { id } })) }; };
- type SSGPage<Params extends SSGParamsLike = SSGParamsLike> = ComponentType
React component type for SSG pages that can be statically generated.
This type represents a React component that receives SSG page props and can be rendered at build time. The component can be either a regular React component or an async React Server Component for advanced use cases like data fetching during static generation.
// Regular synchronous SSG page component const BlogPost: SSGPage<{ slug: string }> = ({ params }) => { return ( <article> <h1>Blog Post: {params.slug}</h1> <p>This content was generated at build time!</p> </article> ); }; // Async React Server Component for data fetching const AsyncBlogPost: SSGPage<{ slug: string }> = async ({ params }) => { // Fetch data during static generation const post = await fetchBlogPost(params.slug); const author = await fetchAuthor(post.authorId); return ( <article> <h1>{post.title}</h1> <p>By {author.name}</p> <div dangerouslySetInnerHTML={{ __html: post.content }} /> </article> ); }; // Product page with multiple params and async data fetching const ProductPage: SSGPage<{ category: string; id: string }> = async ({ params }) => { const [product, reviews] = await Promise.all([ fetchProduct(params.category, params.id), fetchProductReviews(params.id) ]); return ( <div> <h1>{product.name}</h1> <p>Category: {params.category}</p> <p>Price: ${product.price}</p> <div> <h2>Reviews ({reviews.length})</h2> {reviews.map(review => ( <div key={review.id}>{review.comment}</div> ))} </div> </div> ); };
- type SSGPaths<Params extends SSGParamsLike = SSGParamsLike> = SSGPath<Params>[]
Array of static paths to be generated at build time.
This type represents the collection of all route configurations that should be pre-rendered for a dynamic route.
// Array of blog post paths const blogPaths: SSGPaths<{ slug: string }> = [ { params: { slug: "introduction-to-bun" } }, { params: { slug: "performance-benchmarks" } }, { params: { slug: "getting-started-guide" } } ]; // Mixed parameter types const productPaths: SSGPaths<{ category: string; id: string }> = [ { params: { category: "books", id: "javascript-guide" } }, { params: { category: "electronics", id: "smartphone-x" } } ];
namespace __internal
interface BunHeadersOverride
- name: 'set-cookie' | 'Set-Cookie'): string[];
Get all headers matching the name
Only supports
"Set-Cookie"
. All other headers are empty arrays.@param nameThe header name to get
@returnsAn array of header values
const headers = new Headers(); headers.append("Set-Cookie", "foo=bar"); headers.append("Set-Cookie", "baz=qux"); headers.getAll("Set-Cookie"); // ["foo=bar", "baz=qux"]
Convert Headers to a plain JavaScript object.
About 10x faster than
Object.fromEntries(headers.entries())
Called when you run
JSON.stringify(headers)
Does not preserve insertion order. Well-known header names are lowercased. Other header names are left as-is.
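To illustrate, a sketch of converting `Headers` to a plain object. `toJSON()` is Bun-specific, so this falls back to `Object.fromEntries` in other runtimes:

```typescript
const headers = new Headers();
headers.append("Content-Type", "application/json");

// headers.toJSON() exists only in Bun; other runtimes need the fallback.
const obj: Record<string, string> =
  typeof (headers as any).toJSON === "function"
    ? (headers as any).toJSON()
    : Object.fromEntries(headers.entries());

console.log(obj["content-type"]); // "application/json"
```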
interface BunRequestOverride
interface BunResponseOverride
- type DistributedMerge<T, Else = T> = T extends T ? Merge<T, Exclude<Else, T>> : never
- type DistributedOmit<T, K extends PropertyKey> = T extends T ? Omit<T, K> : never
Like Omit, but correctly distributes over unions. Most useful for removing properties from union options objects, like Bun.SQL.Options
type X = Bun.DistributedOmit<{type?: 'a', url?: string} | {type?: 'b', flag?: boolean}, "url"> // `{type?: 'a'} | {type?: 'b', flag?: boolean}` (Omit applied to each union item instead of entire type) type Y = Omit<{type?: 'a', url?: string} | {type?: 'b', flag?: boolean}, "url">; // `{type?: "a" | "b" | undefined}` (Missing `flag` property and no longer a union)
- type KeysInBoth<A, B> = Extract<keyof A, keyof B>
- type LibDomIsLoaded = typeof globalThis extends { onabort: any } ? true : false
- type LibEmptyOrBroadcastChannel = LibDomIsLoaded extends true ? {} : BroadcastChannel
- type LibEmptyOrBunWebSocket = LibDomIsLoaded extends true ? {} : Bun.WebSocket
- type LibEmptyOrEventSource = LibDomIsLoaded extends true ? {} : EventSource
- type LibEmptyOrNodeMessagePort = LibDomIsLoaded extends true ? {} : MessagePort
- type LibEmptyOrNodeReadableStream<T> = LibDomIsLoaded extends true ? {} : ReadableStream<T>
- type LibEmptyOrNodeStreamWebCompressionStream = LibDomIsLoaded extends true ? {} : CompressionStream
- type LibEmptyOrNodeStreamWebDecompressionStream = LibDomIsLoaded extends true ? {} : DecompressionStream
- type LibEmptyOrNodeStreamWebTextDecoderStream = LibDomIsLoaded extends true ? {} : TextDecoderStream
- type LibEmptyOrNodeStreamWebTextEncoderStream = LibDomIsLoaded extends true ? {} : TextEncoderStream
- type LibEmptyOrNodeUtilTextDecoder = LibDomIsLoaded extends true ? {} : TextDecoder
- type LibEmptyOrNodeUtilTextEncoder = LibDomIsLoaded extends true ? {} : TextEncoder
- type LibEmptyOrNodeWritableStream<T> = LibDomIsLoaded extends true ? {} : WritableStream<T>
- type LibEmptyOrPerformanceEntry = LibDomIsLoaded extends true ? {} : PerformanceEntry
- type LibEmptyOrPerformanceMark = LibDomIsLoaded extends true ? {} : PerformanceMark
- type LibEmptyOrPerformanceMeasure = LibDomIsLoaded extends true ? {} : PerformanceMeasure
- type LibEmptyOrPerformanceObserver = LibDomIsLoaded extends true ? {} : PerformanceObserver
- type LibEmptyOrPerformanceObserverEntryList = LibDomIsLoaded extends true ? {} : PerformanceObserverEntryList
- type LibEmptyOrPerformanceResourceTiming = LibDomIsLoaded extends true ? {} : PerformanceResourceTiming
- type LibEmptyOrReadableByteStreamController = LibDomIsLoaded extends true ? {} : ReadableByteStreamController
- type LibEmptyOrReadableStreamBYOBReader = LibDomIsLoaded extends true ? {} : ReadableStreamBYOBReader
- type LibEmptyOrReadableStreamBYOBRequest = LibDomIsLoaded extends true ? {} : ReadableStreamBYOBRequest
- type LibOrFallbackHeaders = LibDomIsLoaded extends true ? {} : Headers
- type LibOrFallbackRequest = LibDomIsLoaded extends true ? {} : Request
- type LibOrFallbackRequestInit = LibDomIsLoaded extends true ? {} : Omit<RequestInit, 'body' | 'headers'> & { body: Bun.BodyInit | null; headers: Bun.HeadersInit }
- type LibOrFallbackResponse = LibDomIsLoaded extends true ? {} : Response
- type LibOrFallbackResponseInit = LibDomIsLoaded extends true ? {} : ResponseInit
- type LibPerformanceOrNodePerfHooksPerformance = LibDomIsLoaded extends true ? {} : Performance
- type LibWorkerOrBunWorker = LibDomIsLoaded extends true ? {} : Bun.Worker
- type Merge<A, B> = MergeInner<A, B> & MergeInner<B, A>
- type MergeInner<A, B> = Omit<A, KeysInBoth<A, B>> & Omit<B, KeysInBoth<A, B>> & { [K in KeysInBoth<A, B>]: A[K] | B[K] }
- type UseLibDomIfAvailable<GlobalThisKeyName extends PropertyKey, Otherwise> = LibDomIsLoaded extends true ? typeof globalThis extends { [K in GlobalThisKeyName]: infer T } ? T : Otherwise : Otherwise
Helper type for avoiding conflicts in types.
Uses the lib.dom.d.ts definition if it exists, otherwise defines it locally.
This is to avoid type conflicts between lib.dom.d.ts and @types/bun.
Unfortunately some symbols cannot be defined when both Bun types and lib.dom.d.ts types are loaded, and since we can't redeclare the symbol in a way that satisfies both, we need to fallback to the type that lib.dom.d.ts provides.
- type Without<A, B> = A & { [K in Exclude<keyof B, keyof A>]: never }
namespace Build
- type Architecture = 'x64' | 'arm64'
- type Libc = 'glibc' | 'musl'
- type SIMD = 'baseline' | 'modern'
- type Target = `bun-darwin-${Architecture}` | `bun-darwin-x64-${SIMD}` | `bun-linux-${Architecture}` | `bun-linux-${Architecture}-${Libc}` | 'bun-windows-x64' | `bun-windows-x64-${SIMD}` | `bun-linux-x64-${SIMD}-${Libc}`
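These target strings are what `bun build --compile --target=...` accepts. A small, hypothetical helper for assembling a Linux target string:

```typescript
type Architecture = "x64" | "arm64";
type Libc = "glibc" | "musl";

// Hypothetical helper: build a `bun-linux-*` target string, optionally
// pinning the libc flavor (musl for Alpine-style images).
function linuxTarget(arch: Architecture, libc?: Libc): string {
  return libc ? `bun-linux-${arch}-${libc}` : `bun-linux-${arch}`;
}

console.log(linuxTarget("x64", "musl")); // "bun-linux-x64-musl"
console.log(linuxTarget("arm64"));       // "bun-linux-arm64"
```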
namespace Password
Hash and verify passwords using argon2 or bcrypt
These are fast APIs that can run in a worker thread if used asynchronously.
interface Argon2Algorithm
- timeCost?: number
Defines the amount of computation to perform, and therefore the execution time, given as a number of iterations.
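For example, raising `timeCost` makes each hash slower and therefore more expensive to brute-force. A sketch, guarded so it only calls `Bun.password` when actually running under Bun; the cost values are illustrative:

```typescript
// Illustrative argon2id parameters: timeCost is the iteration count,
// memoryCost is in KiB. Tune both to your own latency budget.
const options = { algorithm: "argon2id", timeCost: 3, memoryCost: 65536 } as const;

const bun = (globalThis as any).Bun; // defined only under the Bun runtime
if (bun) {
  bun.password.hash("hunter2", options).then(async (hash: string) => {
    console.log(await bun.password.verify("hunter2", hash)); // true
  });
}
```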
interface BCryptAlgorithm
- type AlgorithmLabel = BCryptAlgorithm | Argon2Algorithm['algorithm']
namespace RedisClient
- type KeyLike = string | ArrayBufferView | Blob
- type StringPubSubListener = (message: string, channel: string) => void
namespace Security
bun install
security-related declarations
interface Advisory
Advisory represents the result of a security scan of a package
- description: null | string
If available, this is a brief description of the advisory that Bun will print to the user.
- level: 'warn' | 'fatal'
Level represents the degree of danger for a security advisory
Bun behaves differently depending on the values returned from the
scan()
hook:In any case, Bun always pretty prints all the advisories, but...
→ if any fatal, Bun will immediately cancel the installation and quit with a non-zero exit code
→ else if any warn, Bun will either ask the user if they'd like to continue with the install if in a TTY environment, or immediately exit if not.
- url: null | string
If available, this is a url linking to a CVE or report online so users can learn more about the advisory.
interface Package
- requestedRange: string
The range that was requested by the command
This could be a tag like
beta
or a semver range like>=4.0.0
- version: string
The resolved version to be installed that matches the requested range.
This is the exact version string, not a range.
interface Scanner
- version: '1'
This is the version of the scanner implementation. Bun uses it to discriminate between scanner API versions, since the API may change in the future, possibly enough that version 1 is no longer supported.
The version is required because third-party scanner package versions are inherently unrelated to Bun versions
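To make the shape concrete, here is a sketch of what a scanner package might implement. The `scan` hook signature and the deny-list are assumptions for illustration; field names follow the Advisory interface above:

```typescript
// Illustrative security scanner. A real one would query an advisory database
// rather than a hard-coded deny-list, and would be exported from a package.
type Advisory = { level: "warn" | "fatal"; url: string | null; description: string | null };

const KNOWN_BAD = new Set(["left-pad-malware"]); // hypothetical deny-list

const scanner = {
  version: "1" as const,
  async scan({ packages }: { packages: { name: string; version: string }[] }): Promise<Advisory[]> {
    return packages
      .filter(pkg => KNOWN_BAD.has(pkg.name))
      .map(pkg => ({
        level: "fatal" as const, // fatal: Bun cancels the install immediately
        url: null,
        description: `${pkg.name}@${pkg.version} is on the internal deny-list`,
      }));
  },
};

scanner.scan({ packages: [{ name: "left-pad-malware", version: "1.0.0" }] })
  .then(advisories => console.log(advisories.length)); // 1
```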
namespace Serve
interface BaseServeOptions<WebSocketData>
- id?: null | string
Uniquely identify a server instance with an ID
When bun is started with the
--hot
flag:This string will be used to hot reload the server without interrupting pending requests or websockets. If not provided, a value will be generated. To disable hot reloading, set this value to
null
.When bun is not started with the
--hot
flag:This string will currently do nothing. But in the future it could be useful for logs or metrics.
- tls?: TLSOptions | TLSOptions[]
Set options for using TLS with this server
const server = Bun.serve({ fetch: request => new Response("Welcome to Bun!"), tls: { cert: Bun.file("cert.pem"), key: Bun.file("key.pem"), ca: [Bun.file("ca1.pem"), Bun.file("ca2.pem")], }, });
interface HostnamePortServeOptions<WebSocketData>
- hostname?: string & {} | '0.0.0.0' | '127.0.0.1' | 'localhost'
What hostname should the server listen on?
"127.0.0.1" // Only listen locally
- id?: null | string
Uniquely identify a server instance with an ID
When bun is started with the
--hot
flag:This string will be used to hot reload the server without interrupting pending requests or websockets. If not provided, a value will be generated. To disable hot reloading, set this value to
null
.When bun is not started with the
--hot
flag:This string will currently do nothing. But in the future it could be useful for logs or metrics.
- idleTimeout?: number
Sets the number of seconds to wait before timing out a connection due to inactivity.
- reusePort?: boolean
Whether the
SO_REUSEPORT
flag should be set.This allows multiple processes to bind to the same port, which is useful for load balancing.
- tls?: TLSOptions | TLSOptions[]
Set options for using TLS with this server
const server = Bun.serve({ fetch: request => new Response("Welcome to Bun!"), tls: { cert: Bun.file("cert.pem"), key: Bun.file("key.pem"), ca: [Bun.file("ca1.pem"), Bun.file("ca2.pem")], }, });
interface UnixServeOptions<WebSocketData>
- id?: null | string
Uniquely identify a server instance with an ID
When bun is started with the
--hot
flag:This string will be used to hot reload the server without interrupting pending requests or websockets. If not provided, a value will be generated. To disable hot reloading, set this value to
null
.When bun is not started with the
--hot
flag:This string will currently do nothing. But in the future it could be useful for logs or metrics.
- tls?: TLSOptions | TLSOptions[]
Set options for using TLS with this server
const server = Bun.serve({ fetch: request => new Response("Welcome to Bun!"), tls: { cert: Bun.file("cert.pem"), key: Bun.file("key.pem"), ca: [Bun.file("ca1.pem"), Bun.file("ca2.pem")], }, });
- unix?: string
If set, the HTTP server will listen on a unix socket instead of a port. (Cannot be used with hostname+port)
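A sketch of listening on a unix domain socket; the socket path is illustrative and the server is only started under Bun:

```typescript
const socketPath = "/tmp/bun-demo.sock"; // illustrative path

const bun = (globalThis as any).Bun; // defined only under the Bun runtime
if (bun) {
  const server = bun.serve({
    unix: socketPath, // listen on the socket file; no hostname or port
    fetch: () => new Response("hello over unix"),
  });
  // Bun's fetch can reach the server via its `unix` option:
  //   await fetch("http://localhost/", { unix: socketPath });
  server.stop();
}
```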
- type BaseRouteValue = Response | false | HTMLBundle | BunFile
- type Development = boolean | { chromeDevToolsAutomaticWorkspaceFolders: boolean; console: boolean; hmr: boolean }
Development configuration for Bun.serve
- type ExtractRouteParams<T> = T extends `${string}:${infer Param}/${infer Rest}` ? { [K in Param]: string } & ExtractRouteParams<Rest> : T extends `${string}:${infer Param}` ? { [K in Param]: string } : T extends `${string}*` ? {} : {}
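A runtime analogue of this type-level extraction may help make it concrete; the helper name is hypothetical:

```typescript
// Hypothetical runtime analogue of ExtractRouteParams: list the `:param`
// names in a route pattern such as "/users/:id/posts/:postId".
function extractParamNames(pattern: string): string[] {
  return [...pattern.matchAll(/:([^/]+)/g)].map(m => m[1]);
}

console.log(extractParamNames("/users/:id/posts/:postId")); // ["id", "postId"]
console.log(extractParamNames("/about"));                   // []
```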
- type FetchOrRoutes<WebSocketData, R extends string> = { routes: Routes<WebSocketData, R>; fetch(this: Server<WebSocketData>, req: Request, server: Server<WebSocketData>): MaybePromise<Response> } | { routes: Routes<WebSocketData, R>; fetch(this: Server<WebSocketData>, req: Request, server: Server<WebSocketData>): MaybePromise<Response> }
- type FetchOrRoutesWithWebSocket<WebSocketData, R extends string> = { websocket: WebSocketHandler<WebSocketData> } & { routes: RoutesWithUpgrade<WebSocketData, R>; fetch(this: Server<WebSocketData>, req: Request, server: Server<WebSocketData>): MaybePromise<undefined | void | Response> } | { routes: RoutesWithUpgrade<WebSocketData, R>; fetch(this: Server<WebSocketData>, req: Request, server: Server<WebSocketData>): MaybePromise<undefined | void | Response> }
- type Handler<Req extends Request, S, Res> = (request: Req, server: S) => MaybePromise<Res>
- type HTTPMethod = 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH' | 'HEAD' | 'OPTIONS'
- type Options<WebSocketData, R extends string = never> = Bun.__internal.XOR<HostnamePortServeOptions<WebSocketData>, UnixServeOptions<WebSocketData>> & Bun.__internal.XOR<FetchOrRoutes<WebSocketData, R>, FetchOrRoutesWithWebSocket<WebSocketData, R>>
The type of options that can be passed to serve, with support for
routes
and a safer requirement forfetch
export default { fetch: req => Response.json(req.url), websocket: { message(ws) { ws.data.name; // string }, }, } satisfies Bun.Serve.Options<{ name: string }>;
- type Routes<WebSocketData, R extends string> = { [K in R]: BaseRouteValue | Handler<BunRequest<K>, Server<WebSocketData>, Response> | Partial<Record<HTTPMethod, Handler<BunRequest<K>, Server<WebSocketData>, Response>>> }
- type RoutesWithUpgrade<WebSocketData, R extends string> = { [K in R]: BaseRouteValue | Handler<BunRequest<K>, Server<WebSocketData>, Response | undefined | void> | Partial<Record<HTTPMethod, Handler<BunRequest<K>, Server<WebSocketData>, Response | undefined | void>>> }
namespace SpawnOptions
interface OptionsObject<In extends Writable, Out extends Readable, Err extends Readable>
- argv0?: string
Path to the executable to run in the subprocess. This defaults to
cmds[0]
One use case for this is applications that wrap other applications, or simulating a symlink.
- env?: Record<string, undefined | string>
The environment variables of the process
Defaults to
process.env
as it was when the current Bun process launched.Changes to
process.env
at runtime won't automatically be reflected in the default value. For that, you can passprocess.env
explicitly. - killSignal?: string | number
The signal to use when killing the process after a timeout, when the AbortSignal is aborted, or when the process goes over the
maxBuffer
limit.// Kill the process with SIGKILL after 5 seconds const subprocess = Bun.spawn({ cmd: ["sleep", "10"], timeout: 5000, killSignal: "SIGKILL", });
- maxBuffer?: number
The maximum number of bytes the process may output. If the process goes over this limit, it is killed with signal
killSignal
(defaults to SIGTERM). - serialization?: 'json' | 'advanced'
The serialization format to use for IPC messages. Defaults to
"advanced"
.To communicate with Node.js processes, use
"json"
.When
ipc
is not specified, this is ignored. - signal?: AbortSignal
An AbortSignal that can be used to abort the subprocess.
This is useful for aborting a subprocess when some other part of the program is aborted, such as a
fetch
response.If the signal is aborted, the process will be killed with the signal specified by
killSignal
(defaults to SIGTERM).const controller = new AbortController(); const { signal } = controller; const start = performance.now(); const subprocess = Bun.spawn({ cmd: ["sleep", "100"], signal, }); await Bun.sleep(1); controller.abort(); await subprocess.exited; const end = performance.now(); console.log(end - start); // 1ms instead of 101ms
- stderr?: Err
The file descriptor for the standard error. It may be:
"pipe"
,undefined
: The process will have a ReadableStream for standard output/error"ignore"
,null
: The process will have no standard output/error"inherit"
: The process will inherit the standard output/error of the current processArrayBufferView
: The process will write to the preallocated buffer. Not implemented.number
: The process will write to the file descriptor
- stdin?: In
The file descriptor for the standard input. It may be:
"ignore"
,null
,undefined
: The process will have no standard input"pipe"
: The process will have a new FileSink for standard input"inherit"
: The process will inherit the standard input of the current processArrayBufferView
,Blob
: The process will read from the buffernumber
: The process will read from the file descriptor
- stdio?: [In, Out, Err, ...Readable[]]
The standard file descriptors of the process, in the form [stdin, stdout, stderr]. This overrides the
stdin
,stdout
, andstderr
properties.For stdin you may pass:
"ignore"
,null
,undefined
: The process will have no standard input (default)"pipe"
: The process will have a new FileSink for standard input"inherit"
: The process will inherit the standard input of the current processArrayBufferView
,Blob
,Bun.file()
,Response
,Request
: The process will read from buffer/stream.number
: The process will read from the file descriptor
For stdout and stderr you may pass:
"pipe"
,undefined
: The process will have a ReadableStream for standard output/error"ignore"
,null
: The process will have no standard output/error"inherit"
: The process will inherit the standard output/error of the current processArrayBufferView
: The process will write to the preallocated buffer. Not implemented.number
: The process will write to the file descriptor
- stdout?: Out
The file descriptor for the standard output. It may be:
"pipe"
,undefined
: The process will have a ReadableStream for standard output/error"ignore"
,null
: The process will have no standard output/error"inherit"
: The process will inherit the standard output/error of the current processArrayBufferView
: The process will write to the preallocated buffer. Not implemented.number
: The process will write to the file descriptor
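The pipe behavior above can be sketched with a small helper; only the `Bun.spawn` call is Bun-specific and is guarded accordingly:

```typescript
// Helper: drain a byte ReadableStream into a string, which is how you read
// a subprocess's stdout when it is set to "pipe".
async function streamToText(stream: ReadableStream<Uint8Array>): Promise<string> {
  return await new Response(stream).text();
}

const bun = (globalThis as any).Bun; // defined only under the Bun runtime
if (bun) {
  const proc = bun.spawn({ cmd: ["echo", "hello"], stdout: "pipe" });
  streamToText(proc.stdout).then(text => console.log(JSON.stringify(text))); // "hello\n"
}
```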
- timeout?: number
The maximum amount of time the process is allowed to run in milliseconds.
If the timeout is reached, the process will be killed with the signal specified by
killSignal
(defaults to SIGTERM).// Kill the process after 5 seconds const subprocess = Bun.spawn({ cmd: ["sleep", "10"], timeout: 5000, }); await subprocess.exited; // Will resolve after 5 seconds
- ipc(message: any,handle?: unknown): void;
When specified, Bun will open an IPC channel to the subprocess. The passed callback is called for incoming messages, and
subprocess.send
can send messages to the subprocess. Messages are serialized using the JSC serialize API, which allows for the same types thatpostMessage
/structuredClone
supports.The subprocess can send and receive messages by using
process.send
andprocess.on("message")
, respectively. This is the same API as what Node.js exposes whenchild_process.fork()
is used.Currently, this is only compatible with processes that are other
bun
instances.@param subprocessThe Subprocess that received the message
- exitCode: null | number,signalCode: null | number,): void | Promise<void>;
Callback that runs when the Subprocess exits
This is called even if the process exits with a non-zero exit code.
Warning: this may run before the Bun.spawn function returns.
A simple alternative is await subprocess.exited.
@param error If an error occurred in the call to waitpid2, this will be the error.
const subprocess = spawn({ cmd: ["echo", "hello"], onExit: (subprocess, code) => { console.log(`Process exited with code ${code}`); }, });
- type Readable = 'pipe' | 'inherit' | 'ignore' | null | undefined | BunFile | ArrayBufferView | number
Option for stdout/stderr
- type ReadableToIO<X extends Readable> = X extends 'pipe' | undefined ? ReadableStream<Uint8Array<ArrayBuffer>> : X extends BunFile | ArrayBufferView | number ? number : undefined
- type ReadableToSyncIO<X extends Readable> = X extends 'pipe' | undefined ? Buffer : undefined
- type Writable = 'pipe' | 'inherit' | 'ignore' | null | undefined | BunFile | ArrayBufferView | number | ReadableStream | Blob | Response | Request
Option for stdin
- type WritableIO = FileSink | number | undefined
- type WritableToIO<X extends Writable> = X extends 'pipe' ? FileSink : X extends BunFile | ArrayBufferView | Blob | Request | Response | number ? number : undefined
namespace udp
interface BaseUDPSocket
interface ConnectedSocket<DataBinaryType extends BinaryType>
interface ConnectedSocketHandler<DataBinaryType extends BinaryType>
interface ConnectSocketOptions<DataBinaryType extends BinaryType>
interface Socket<DataBinaryType extends BinaryType>
interface SocketHandler<DataBinaryType extends BinaryType>
interface SocketOptions<DataBinaryType extends BinaryType>
- type Data = string | ArrayBufferView | ArrayBufferLike
namespace WebAssembly
interface CompileError
interface GlobalDescriptor<T extends ValueType = ValueType>
interface Memory
interface MemoryDescriptor
interface Module
interface ModuleExportDescriptor
interface ModuleImportDescriptor
interface RuntimeError
interface Table
interface TableDescriptor
- type Exports = Record<string, ExportValue>
- type ExportValue = Function | Global | WebAssembly.Memory | WebAssembly.Table
- type ImportExportKind = 'function' | 'global' | 'memory' | 'table'
- type Imports = Record<string, ModuleImports>
- type ImportValue = ExportValue | number
- type ModuleImports = Record<string, ImportValue>
- type TableKind = 'anyfunc' | 'externref'
- type ValueType = keyof ValueTypeMap
interface AbstractWorker
- type: K,): void;type: string,): void;
- type: K,): void;type: string,): void;
interface AbstractWorkerEventMap
interface AddEventListenerOptions
interface BuildArtifact
A build artifact represents a file that was generated by the bundler
Returns a promise that resolves to the contents of the blob as an ArrayBuffer
Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer())
Read the data from the blob as a FormData object.
This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.
The type property of the blob is used to determine the format of the body.
This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.
Read the data from the blob as a JSON object.
This first decodes the data from UTF-8, then parses it as JSON.
Returns a readable stream of the blob's contents
Returns a promise that resolves to the contents of the blob as a string
interface BuildConfigBase
- bytecode?: boolean
Generate bytecode for the output. This can dramatically improve cold start times, but will make the final output larger and slightly increase memory usage.
Bytecode is currently only supported for CommonJS (format: "cjs").
Requires target: "bun".
- conditions?: string | string[]
package.json exports conditions used when resolving imports.
Equivalent to --conditions in bun build or bun run.
https://nodejs.org/api/packages.html#exports
- env?: 'inline' | 'disable' | `${string}*`
Controls how environment variables are handled during bundling.
Can be one of:
- "inline": Injects environment variables into the bundled output by converting process.env.FOO references to string literals containing the actual environment variable values
- "disable": Disables environment variable injection entirely
- A string ending in *: Inlines environment variables that match the given prefix. For example, "MY_PUBLIC_*" will only include env vars starting with "MY_PUBLIC_"
Bun.build({ env: "MY_PUBLIC_*", entrypoints: ["src/index.ts"], })
- format?: 'esm' | 'cjs' | 'iife'
Output module format. Top-level await is only supported for "esm".
Can be:
- "esm"
- "cjs" (experimental)
- "iife" (experimental)
- ignoreDCEAnnotations?: boolean
Ignore dead code elimination/tree-shaking annotations such as @PURE and package.json "sideEffects" fields. This should only be used as a temporary workaround for incorrect annotations in libraries.
- jsx?: { development: boolean; factory: string; fragment: string; importSource: string; runtime: 'classic' | 'automatic'; sideEffects: boolean }
JSX configuration options
- minify?: boolean | { identifiers: boolean; keepNames: boolean; syntax: boolean; whitespace: boolean }
Whether to enable minification.
Use true/false to enable/disable all minification options. Alternatively, you can pass an object for granular control over certain minifications.
- sourcemap?: boolean | 'none' | 'linked' | 'external' | 'inline'
Specifies if and how to generate source maps.
- "none" - No source maps are generated
- "linked" - A separate *.ext.map file is generated alongside each *.ext file. A //# sourceMappingURL comment is added to the output file to link the two. Requires outdir to be set.
- "inline" - An inline source map is appended to the output file.
- "external" - Generate a separate source map file for each input file. No //# sourceMappingURL comment is added to the output file.
true and false are aliases for "inline" and "none", respectively.
- throw?: boolean
When set to true, the returned promise rejects with an AggregateError when a build failure happens.
When set to false, returns a BuildOutput with {success: false}.
- tsconfig?: string
Custom tsconfig.json file path to use for path resolution. Equivalent to
--tsconfig-override
in the CLI.await Bun.build({ entrypoints: ['./src/index.ts'], tsconfig: './custom-tsconfig.json' });
interface BunMessageEvent<T = any>
A message received by a target object.
- readonly bubbles: boolean
Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise.
- readonly cancelable: boolean
Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched, can be canceled by invoking the preventDefault() method.
- readonly composed: boolean
Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise.
- readonly currentTarget: null | EventTarget
Returns the object whose event listener's callback is currently being invoked.
- readonly defaultPrevented: boolean
Returns true if preventDefault() was invoked successfully to indicate cancelation, and false otherwise.
- readonly eventPhase: number
Returns the event's phase, which is one of NONE, CAPTURING_PHASE, AT_TARGET, and BUBBLING_PHASE.
- readonly isTrusted: boolean
Returns true if event was dispatched by the user agent, and false otherwise.
- readonly origin: string
Returns the origin of the message, for server-sent events and cross-document messaging.
- readonly ports: readonly MessagePort[]
Returns the MessagePort array sent with the message, for cross-document messaging and channel messaging.
- readonly timeStamp: number
Returns the event's timestamp as the number of milliseconds measured relative to the time origin.
Returns the invocation target objects of event's path (objects on which listeners will be invoked), except for any nodes in shadow trees of which the shadow root's mode is "closed" that are not reachable from event's currentTarget.
Returns an array containing the current EventTarget as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness.
Sets the defaultPrevented property to true if cancelable is true.
Stops the invocation of event listeners after the current one completes.
This is not used in Node.js and is provided purely for completeness.
interface BunPlugin
A Bun plugin. Used for extending Bun's behavior at runtime, or with Bun.build
- name: string
Human-readable name of the plugin
In a future version of Bun, this will be used in error messages.
- target?: Target
The target JavaScript environment the plugin should be applied to.
- bun: The default environment when using bun run or bun to load a script
- browser: The plugin will be applied to browser builds
- node: The plugin will be applied to Node.js builds
If unspecified, it is assumed that the plugin is compatible with all targets.
This field is not read by Bun.plugin, only Bun.build and
bun build
- ): void | Promise<void>;
A function that will be called when the plugin is loaded.
This function may be called in the same tick that it is registered, or it may be called later. It could potentially be called multiple times for different targets.
@param build A builder object that can be used to register plugin hooks
interface BunRegisterPlugin
Extend Bun's module resolution and loading behavior
Plugins are applied in the order they are defined.
Today, there are two kinds of hooks:
- onLoad lets you return source code or an object that will become the module's exports
- onResolve lets you redirect a module specifier to another module specifier. It does not chain.
Plugin hooks must define a filter RegExp and will only be matched if the import specifier contains a "." or a ":".
ES Module resolution semantics mean that plugins may be initialized after a module is resolved. You might need to load plugins at the very beginning of the application and then use a dynamic import to load the rest of the application. A future version of Bun may also support specifying plugins via bunfig.toml.
A YAML loader plugin
Bun.plugin({ setup(builder) { builder.onLoad({ filter: /\.yaml$/ }, ({path}) => ({ loader: "object", exports: require("js-yaml").load(fs.readFileSync(path, "utf8")) })); }); // You can use require() const {foo} = require("./file.yaml"); // Or import await import("./file.yaml");
Deactivate all plugins
This prevents registered plugins from being applied to future builds.
interface BunRequest<T extends string = string>
This Fetch API interface represents a resource request.
- readonly cache: RequestCache
Returns the cache mode associated with request, which is a string indicating how the request will interact with the browser's cache when fetching.
- readonly credentials: RequestCredentials
Returns the credentials mode associated with request, which is a string indicating whether credentials will be sent with the request always, never, or only when sent to a same-origin URL.
- readonly destination: RequestDestination
Returns the kind of resource requested by request, e.g., "document" or "script".
- readonly integrity: string
Returns request's subresource integrity metadata, which is a cryptographic hash of the resource being fetched. Its value consists of multiple hashes separated by whitespace. [SRI]
- readonly keepalive: boolean
Returns a boolean indicating whether or not request can outlive the global in which it was created.
- readonly mode: RequestMode
Returns the mode associated with request, which is a string indicating whether the request will use CORS, or will be restricted to same-origin URLs.
- readonly redirect: RequestRedirect
Returns the redirect mode associated with request, which is a string indicating how redirects for the request will be handled during fetching. A request will follow redirects by default.
- readonly referrer: string
Returns the referrer of request. Its value can be a same-origin URL if explicitly set in init, the empty string to indicate no referrer, and "about:client" when defaulting to the global's default. This is used during fetching to determine the value of the Referer header of the request being made.
- readonly referrerPolicy: ReferrerPolicy
Returns the referrer policy associated with request. This is used during fetching to compute the value of the request's referrer.
- readonly signal: AbortSignal
Returns the signal associated with request, which is an AbortSignal object indicating whether or not request has been aborted, and its abort event handler.
interface CloseEventInit
interface CompileBuildConfig
- bytecode?: boolean
Generate bytecode for the output. This can dramatically improve cold start times, but will make the final output larger and slightly increase memory usage.
Bytecode is currently only supported for CommonJS (format: "cjs").
Requires target: "bun".
- compile: boolean | CompileBuildOptions | Target
Create a standalone executable
When true, creates an executable for the current platform. When given a target string, creates an executable for that platform.
// Create executable for current platform await Bun.build({ entrypoints: ['./app.js'], compile: true, outfile: './my-app' }); // Cross-compile for Linux x64 await Bun.build({ entrypoints: ['./app.js'], compile: 'linux-x64', outfile: './my-app' });
- conditions?: string | string[]
package.json exports conditions used when resolving imports.
Equivalent to --conditions in bun build or bun run.
https://nodejs.org/api/packages.html#exports
- env?: 'inline' | 'disable' | `${string}*`
Controls how environment variables are handled during bundling.
Can be one of:
- "inline": Injects environment variables into the bundled output by converting process.env.FOO references to string literals containing the actual environment variable values
- "disable": Disables environment variable injection entirely
- A string ending in *: Inlines environment variables that match the given prefix. For example, "MY_PUBLIC_*" will only include env vars starting with "MY_PUBLIC_"
Bun.build({ env: "MY_PUBLIC_*", entrypoints: ["src/index.ts"], })
- format?: 'esm' | 'cjs' | 'iife'
Output module format. Top-level await is only supported for "esm".
Can be:
- "esm"
- "cjs" (experimental)
- "iife" (experimental)
- ignoreDCEAnnotations?: boolean
Ignore dead code elimination/tree-shaking annotations such as @PURE and package.json "sideEffects" fields. This should only be used as a temporary workaround for incorrect annotations in libraries.
- jsx?: { development: boolean; factory: string; fragment: string; importSource: string; runtime: 'classic' | 'automatic'; sideEffects: boolean }
JSX configuration options
- minify?: boolean | { identifiers: boolean; keepNames: boolean; syntax: boolean; whitespace: boolean }
Whether to enable minification.
Use true/false to enable/disable all minification options. Alternatively, you can pass an object for granular control over certain minifications.
- sourcemap?: boolean | 'none' | 'linked' | 'external' | 'inline'
Specifies if and how to generate source maps.
- "none" - No source maps are generated
- "linked" - A separate *.ext.map file is generated alongside each *.ext file. A //# sourceMappingURL comment is added to the output file to link the two. Requires outdir to be set.
- "inline" - An inline source map is appended to the output file.
- "external" - Generate a separate source map file for each input file. No //# sourceMappingURL comment is added to the output file.
true and false are aliases for "inline" and "none", respectively.
- throw?: boolean
When set to true, the returned promise rejects with an AggregateError when a build failure happens.
When set to false, returns a BuildOutput with {success: false}.
- tsconfig?: string
Custom tsconfig.json file path to use for path resolution. Equivalent to
--tsconfig-override
in the CLI.await Bun.build({ entrypoints: ['./src/index.ts'], tsconfig: './custom-tsconfig.json' });
interface CompileBuildOptions
- windows?: { copyright: string; description: string; hideConsole: boolean; icon: string; publisher: string; title: string; version: string }
interface CookieInit
interface CookieStoreDeleteOptions
interface CookieStoreGetOptions
interface CSRFGenerateOptions
- expiresIn?: number
The number of milliseconds until the token expires. 0 means the token never expires.
interface CSRFVerifyOptions
- secret?: string
The secret to use for the token. If not provided, a random default secret will be generated in memory and used.
interface CustomEventInit<T = any>
interface DirectUnderlyingSource<R = any>
interface EditorOptions
interface ErrorEventInit
interface EventInit
interface EventListener
interface EventListenerObject
interface EventListenerOptions
interface EventMap
interface EventSourceEventMap
interface FdSocketOptions<Data = undefined>
- allowHalfOpen?: boolean
Whether to allow half-open connections.
A half-open connection occurs when one end of the connection has called close() or sent a FIN packet, while the other end remains open. When set to true:
- The socket won't automatically send FIN when the remote side closes its end
- The local side can continue sending data even after the remote side has closed
- The application must explicitly call end() to fully close the connection
When false, the socket automatically closes both ends of the connection when either side closes.
interface FetchEvent
An event which takes place in the DOM.
- readonly bubbles: boolean
Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise.
- readonly cancelable: boolean
Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched, can be canceled by invoking the preventDefault() method.
- readonly composed: boolean
Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise.
- readonly currentTarget: null | EventTarget
Returns the object whose event listener's callback is currently being invoked.
- readonly defaultPrevented: boolean
Returns true if preventDefault() was invoked successfully to indicate cancelation, and false otherwise.
- readonly eventPhase: number
Returns the event's phase, which is one of NONE, CAPTURING_PHASE, AT_TARGET, and BUBBLING_PHASE.
- readonly isTrusted: boolean
Returns true if event was dispatched by the user agent, and false otherwise.
- readonly timeStamp: number
Returns the event's timestamp as the number of milliseconds measured relative to the time origin.
Returns the invocation target objects of event's path (objects on which listeners will be invoked), except for any nodes in shadow trees of which the shadow root's mode is "closed" that are not reachable from event's currentTarget.
Returns an array containing the current EventTarget as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness.
Sets the defaultPrevented property to true if cancelable is true.
Stops the invocation of event listeners after the current one completes.
This is not used in Node.js and is provided purely for completeness.
interface FileBlob
Blob powered by the fastest system calls available for operating on files.
This Blob is lazy. That means it won't do any work until you read from it.
- size will not be valid until the contents of the file are read at least once.
- type is auto-set based on the file extension when possible
const file = Bun.file("./hello.json"); console.log(file.type); // "application/json" console.log(await file.text()); // '{"hello":"world"}'
Returns a promise that resolves to the contents of the blob as an ArrayBuffer
Returns a promise that resolves to the contents of the blob as a Uint8Array (array of bytes); it's the same as new Uint8Array(await blob.arrayBuffer())
Deletes the file (same as unlink)
Does the file exist?
This returns true for regular files and FIFOs. It returns false for directories. Note that a race condition can occur where the file is deleted or renamed after this is called but before you open it.
This does a system call to check if the file exists, which can be slow.
If using this in an HTTP server, it's faster to instead use return new Response(Bun.file(path)) and then an error handler to handle exceptions.
Instead of checking for a file's existence and then performing the operation, it is faster to just perform the operation and handle the error.
For empty Blob, this always returns true.
Read the data from the blob as a FormData object.
This first decodes the data from UTF-8, then parses it as a multipart/form-data body or an application/x-www-form-urlencoded body.
The type property of the blob is used to determine the format of the body.
This is a non-standard addition to the Blob API, to make it conform more closely to the BodyMixin API.
Read the data from the blob as a JSON object.
This first decodes the data from UTF-8, then parses it as JSON.
- begin?: number,end?: number,contentType?: string
Offset any operation on the file starting at begin and ending at end. end is relative to 0.
Similar to TypedArray.subarray. Does not copy the file, open the file, or modify the file.
If begin > 0, Bun.write() will be slower on macOS
@param begin start offset in bytes
@param end absolute offset in bytes (relative to 0)
@param contentType MIME type for the new BunFile
begin?: number,contentType?: string
Offset any operation on the file starting at begin
Similar to TypedArray.subarray. Does not copy the file, open the file, or modify the file.
If begin > 0, Bun.write() will be slower on macOS
@param begin start offset in bytes
@param contentType MIME type for the new BunFile
Returns a readable stream of the blob's contents
Returns a promise that resolves to the contents of the blob as a string
Deletes the file.
- data: string | ArrayBuffer | SharedArrayBuffer | BunFile | Request | Response | ArrayBufferView<ArrayBufferLike>,options?: { highWaterMark: number }): Promise<number>;
Write data to the file. This is equivalent to using Bun.write with a BunFile.
@param data The data to write.
@param options The options to use for the write.
interface FileSink
Fast incremental writer for files and pipes.
This uses the same interface as ArrayBufferSink, but writes to a file or pipe.
Flush the internal buffer, committing the data to disk or the pipe.
@returns Number of bytes flushed or a Promise resolving to the number of bytes
For FIFOs & pipes, this lets you decide whether Bun's process should remain alive until the pipe is closed.
By default, it is automatically managed. While the stream is open, the process remains alive and once the other end hangs up or the stream closes, the process exits.
If you previously called unref, you can call this again to re-enable automatic management.
Internally, it will reference count the number of times this is called. By default, that number is 1
If the file is not a FIFO or pipe, ref and unref do nothing. If the pipe is already closed, this does nothing.
- @param options
Configuration options for the file sink
For FIFOs & pipes, this lets you decide whether Bun's process should remain alive until the pipe is closed.
If you want to allow Bun's process to terminate while the stream is open, call this.
If the file is not a FIFO or pipe, ref and unref do nothing. If the pipe is already closed, this does nothing.
- ): number;
Write a chunk of data to the file.
If the file descriptor is not writable yet, the data is buffered.
@param chunk The data to write
@returns Number of bytes written
interface GenericTransformStream
interface GlobScanOptions
interface Hash
- adler32: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>) => number
- cityHash32: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>) => number
- cityHash64: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: bigint) => bigint
- crc32: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>) => number
- murmur32v2: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: number) => number
- murmur32v3: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: number) => number
- murmur64v2: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: bigint) => bigint
- rapidhash: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: bigint) => bigint
- wyhash: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: bigint) => bigint
- xxHash3: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: bigint) => bigint
- xxHash32: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: number) => number
- xxHash64: (data: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>, seed?: bigint) => bigint
interface HeapSnapshot
JavaScriptCore engine's internal heap snapshot
I don't know how to make this something Chrome or Safari can read.
If you have any ideas, please file an issue https://github.com/oven-sh/bun
interface HTMLBundle
Used when importing an HTML file at runtime or at build time.
import app from "./index.html";
interface Import
interface LibdeflateCompressionOptions
interface MessageEventInit<T = any>
interface MMapOptions
interface NetworkSink
Fast incremental writer for files and pipes.
This uses the same interface as ArrayBufferSink, but writes to a file or pipe.
Flush the internal buffer, committing the data to the network.
@returns Number of bytes flushed or a Promise resolving to the number of bytes
For FIFOs & pipes, this lets you decide whether Bun's process should remain alive until the pipe is closed.
By default, it is automatically managed. While the stream is open, the process remains alive and once the other end hangs up or the stream closes, the process exits.
If you previously called unref, you can call this again to re-enable automatic management.
Internally, it will reference count the number of times this is called. By default, that number is 1
If the file is not a FIFO or pipe, ref and unref do nothing. If the pipe is already closed, this does nothing.
- @param options
Configuration options for the file sink
For FIFOs & pipes, this lets you decide whether Bun's process should remain alive until the pipe is closed.
If you want to allow Bun's process to terminate while the stream is open, call this.
If the file is not a FIFO or pipe, ref and unref do nothing. If the pipe is already closed, this does nothing.
- ): number;
Write a chunk of data to the network.
If the network is not writable yet, the data is buffered.
@param chunk The data to write
@returns Number of bytes written
interface NormalBuildConfig
- bytecode?: boolean
Generate bytecode for the output. This can dramatically improve cold start times, but will make the final output larger and slightly increase memory usage.
Bytecode is currently only supported for CommonJS (format: "cjs").
Requires target: "bun".
- conditions?: string | string[]
package.json exports conditions used when resolving imports.
Equivalent to --conditions in bun build or bun run.
https://nodejs.org/api/packages.html#exports
- env?: 'inline' | 'disable' | `${string}*`
Controls how environment variables are handled during bundling.
Can be one of:
- "inline": Injects environment variables into the bundled output by converting process.env.FOO references to string literals containing the actual environment variable values
- "disable": Disables environment variable injection entirely
- A string ending in *: Inlines environment variables that match the given prefix. For example, "MY_PUBLIC_*" will only include env vars starting with "MY_PUBLIC_"
Bun.build({ env: "MY_PUBLIC_*", entrypoints: ["src/index.ts"], })
- format?: 'esm' | 'cjs' | 'iife'
Output module format. Top-level await is only supported for "esm".
Can be:
- "esm"
- "cjs" (experimental)
- "iife" (experimental)
- ignoreDCEAnnotations?: boolean
Ignore dead code elimination/tree-shaking annotations such as @PURE and package.json "sideEffects" fields. This should only be used as a temporary workaround for incorrect annotations in libraries.
- jsx?: { development: boolean; factory: string; fragment: string; importSource: string; runtime: 'classic' | 'automatic'; sideEffects: boolean }
JSX configuration options
- minify?: boolean | { identifiers: boolean; keepNames: boolean; syntax: boolean; whitespace: boolean }
Whether to enable minification.
Use true/false to enable/disable all minification options. Alternatively, you can pass an object for granular control over certain minifications.
- sourcemap?: boolean | 'none' | 'linked' | 'external' | 'inline'
Specifies if and how to generate source maps.
- "none" - No source maps are generated
- "linked" - A separate *.ext.map file is generated alongside each *.ext file. A //# sourceMappingURL comment is added to the output file to link the two. Requires outdir to be set.
- "inline" - An inline source map is appended to the output file.
- "external" - Generate a separate source map file for each input file. No //# sourceMappingURL comment is added to the output file.
true and false are aliases for "inline" and "none", respectively.
- throw?: boolean
When set to true, the returned promise rejects with an AggregateError when a build failure happens.
When set to false, returns a BuildOutput with {success: false}.
- tsconfig?: string
Custom tsconfig.json file path to use for path resolution. Equivalent to
--tsconfig-override
in the CLI.await Bun.build({ entrypoints: ['./src/index.ts'], tsconfig: './custom-tsconfig.json' });
interface OnLoadArgs
- defer: () => Promise<void>
Defer the execution of this callback until all other modules have been parsed.
- path: string
The resolved import specifier of the module being loaded
builder.onLoad({ filter: /^hello:world$/ }, (args) => { console.log(args.path); // "hello:world" return { exports: { foo: "bar" }, loader: "object" }; });
interface OnLoadResultObject
- exports: Record<string, unknown>
The object to use as the module
// In your loader builder.onLoad({ filter: /^hello:world$/ }, (args) => { return { exports: { foo: "bar" }, loader: "object" }; }); // In your script import {foo} from "hello:world"; console.log(foo); // "bar"
interface OnLoadResultSourceCode
- contents: string | ArrayBuffer | SharedArrayBuffer | ArrayBufferView<ArrayBufferLike>
The source code of the module
interface OnResolveArgs
interface OnResolveResult
- namespace?: string
The namespace of the destination. It will be concatenated with path to form the final import specifier.
"foo" // "foo:bar"
interface PluginBuilder
The builder object passed to
Bun.plugin
- config: BuildConfig & { plugins: BunPlugin[] }
The config object passed to Bun.build as-is. Can be mutated.
- specifier: string,): this;
Create a lazy-loaded virtual module that can be imported or required from other modules
@param specifier The module specifier to register the callback for
@param callback The function to run when the module is imported or required
@returns this for method chaining
Bun.plugin({ setup(builder) { builder.module("hello:world", () => { return { exports: { foo: "bar" }, loader: "object" }; }); }, }); // sometime later const { foo } = await import("hello:world"); console.log(foo); // "bar" // or const { foo } = require("hello:world"); console.log(foo); // "bar"
- ): this;
Register a callback which will be invoked when bundling ends. This is called after all modules have been bundled and the build is complete.
@returnsthis
for method chainingconst plugin: Bun.BunPlugin = { name: "my-plugin", setup(builder) { builder.onEnd((result) => { console.log("bundle just finished!!", result); }); }, };
- ): this;
Register a callback to load imports with a specific import specifier
@param constraintsThe constraints to apply the plugin to
@param callbackThe callback to handle the import
@returnsthis
for method chainingBun.plugin({ setup(builder) { builder.onLoad({ filter: /^hello:world$/ }, (args) => { return { exports: { foo: "bar" }, loader: "object" }; }); }, });
- ): this;
Register a callback to resolve imports matching a filter and/or namespace
@param constraintsThe constraints to apply the plugin to
@param callbackThe callback to handle the import
@returnsthis
for method chainingBun.plugin({ setup(builder) { builder.onResolve({ filter: /^wat$/ }, (args) => { return { path: "/tmp/woah.js" }; }); }, });
- ): this;
Register a callback which will be invoked when bundling starts. When using hot module reloading, this is called at the start of each incremental rebuild.
@returnsthis
for method chainingBun.plugin({ setup(builder) { builder.onStart(() => { console.log("bundle just started!!") }); }, });
interface PluginConstraints
- filter: RegExp
Only apply the plugin when the import specifier matches this regular expression
// Only apply the plugin when the import specifier matches the regex Bun.plugin({ setup(builder) { builder.onLoad({ filter: /node_modules/underscore/ }, (args) => { return { contents: "throw new Error('Please use lodash instead of underscore.')" }; }); } })
- namespace?: string
Only apply the plugin when the import specifier has a namespace matching this string
Namespaces are prefixes in import specifiers. For example,
"bun:ffi"
has the namespace"bun"
.The default namespace is
"file"
and it can be omitted from import specifiers.
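To make the namespace/path split concrete, here is a hypothetical helper (parseSpecifier is not a Bun API) showing how a specifier such as "bun:ffi" decomposes, with "file" as the implicit default namespace:

```typescript
// Hypothetical helper illustrating how an import specifier splits into
// namespace and path. Assumption: "file" is the implicit default namespace.
// Real-world caveats (e.g. Windows drive letters like "C:\...") are ignored here.
function parseSpecifier(specifier: string): { namespace: string; path: string } {
  const colon = specifier.indexOf(":");
  if (colon === -1) return { namespace: "file", path: specifier };
  return {
    namespace: specifier.slice(0, colon),
    path: specifier.slice(colon + 1),
  };
}

console.log(parseSpecifier("bun:ffi"));    // { namespace: "bun", path: "ffi" }
console.log(parseSpecifier("./utils.ts")); // { namespace: "file", path: "./utils.ts" }
```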
interface ReadableStreamDefaultReadManyResult<T>
interface RedisOptions
interface ReservedSQL
Represents a reserved connection from the connection pool. Extends SQL with additional release functionality.
- options: Merge<SQLiteOptions, PostgresOrMySQLOptions> | Merge<PostgresOrMySQLOptions, SQLiteOptions>
Current client options
- values: any[],
Creates a new SQL array parameter
@param valuesThe values to create the array parameter from
@param typeNameOrTypeIDThe type name or type ID to create the array parameter from; if omitted, it defaults to JSON
@returnsA new SQL array parameter
const array = sql.array([1, 2, 3], "INT"); await sql`CREATE TABLE users_posts (user_id INT, posts_id INT[])`; await sql`INSERT INTO users_posts (user_id, posts_id) VALUES (${user.id}, ${array})`;
Begins a new transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin(async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
options: string,
Begins a new transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin("read write", async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
- name: string,
Begins a distributed transaction, also known as a two-phase commit. In a distributed transaction, phase 1 involves the coordinator preparing nodes by ensuring data is written and ready to commit, while phase 2 finalizes with nodes committing or rolling back based on the coordinator's decision, ensuring durability and releasing locks.
In PostgreSQL and MySQL, distributed transactions persist beyond the original session, allowing privileged users or coordinators to commit or roll them back, which supports recovery and administrative tasks. beginDistributed will automatically roll back if any exception is not caught, and you can commit or roll back later if everything goes well.
PostgreSQL natively supports distributed transactions using PREPARE TRANSACTION, while MySQL uses XA transactions, and MSSQL also supports distributed/XA transactions. However, in MSSQL, distributed transactions are tied to the original session, the DTC coordinator, and the specific connection. These transactions are automatically committed or rolled back following the same rules as regular transactions, with no option for manual intervention from other sessions; in MSSQL, distributed transactions are used to coordinate transactions using Linked Servers.
await sql.beginDistributed("numbers", async sql => { await sql`create table if not exists numbers (a int)`; await sql`insert into numbers values(1)`; }); // later you can call await sql.commitDistributed("numbers"); // or await sql.rollbackDistributed("numbers");
- options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If timeout is 0, it closes immediately; if not provided, it waits for all queries to finish before closing.
@param optionsThe options for the close
await sql.close({ timeout: 1 });
- name: string): Promise<void>;
Commits a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param nameThe name of the distributed transaction
await sql.commitDistributed("my_distributed_transaction");
- name: string,
Alternative method to begin a distributed transaction
- end(options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If timeout is 0, it closes immediately; if not provided, it waits for all queries to finish before closing. This is an alias of SQL.close.
@param optionsThe options for the close
await sql.end({ timeout: 1 });
Flushes any pending operations
sql.flush();
Releases the client back to the connection pool
The reserve method pulls out a connection from the pool, and returns a client that wraps the single connection.
This can be used for running queries on an isolated connection. Calling reserve on an already-reserved SQL instance will return a new reserved connection, not the same connection (this behavior matches the postgres package).
const reserved = await sql.reserve(); await reserved`select * from users`; await reserved.release(); // with in a production scenario would be something more like const reserved = await sql.reserve(); try { // ... queries } finally { await reserved.release(); } // Bun supports Symbol.dispose and Symbol.asyncDispose // always release after context (safer) using reserved = await sql.reserve() await reserved`select * from users`
- name: string): Promise<void>;
Rolls back a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param nameThe name of the distributed transaction
await sql.rollbackDistributed("my_distributed_transaction");
Alternative method to begin a transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction(async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
options: string,
Alternative method to begin a transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction("read write", async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] });
- string: string,values?: any[]
If you know what you're doing, you can use unsafe to pass any string you'd like. Please note that this can lead to SQL injection if you're not careful. You can also nest sql.unsafe within a safe sql expression. This is useful if only part of your query has unsafe elements.
const result = await sql.unsafe(`select ${danger} from users where id = ${dragons}`)
interface ResourceUsage
- contextSwitches: { involuntary: number; voluntary: number }
The number of voluntary and involuntary context switches that the process made.
- cpuTime: { system: number; total: number; user: number }
The amount of CPU time used by the process, in microseconds.
- maxRSS: number
The maximum amount of resident set size (in bytes) used by the process during its lifetime.
interface S3FilePresignOptions
Options for generating presigned URLs
- accessKeyId?: string
The access key ID for authentication. Defaults to
S3_ACCESS_KEY_ID
orAWS_ACCESS_KEY_ID
environment variables. - acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'
The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.
// Setting public read access const file = s3.file("public-file.txt", { acl: "public-read", bucket: "my-bucket" });
- bucket?: string
The S3 bucket name. Defaults to
S3_BUCKET
orAWS_BUCKET
environment variables.// Using explicit bucket const file = s3.file("my-file.txt", { bucket: "my-bucket" });
- endpoint?: string
The S3-compatible service endpoint URL. Defaults to
S3_ENDPOINT
orAWS_ENDPOINT
environment variables.// AWS S3 const file = s3.file("my-file.txt", { endpoint: "https://s3.us-east-1.amazonaws.com" });
- expiresIn?: number
Number of seconds until the presigned URL expires.
- Default: 86400 (1 day)
// Short-lived URL const url = file.presign({ expiresIn: 3600 // 1 hour });
- method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'HEAD'
The HTTP method allowed for the presigned URL.
// GET URL for downloads const downloadUrl = file.presign({ method: "GET", expiresIn: 3600 });
- partSize?: number
The size of each part in multipart uploads (in bytes).
- Minimum: 5 MiB
- Maximum: 5120 MiB
- Default: 5 MiB
// Configuring multipart uploads const file = s3.file("large-file.dat", { partSize: 10 * 1024 * 1024, // 10 MiB parts queueSize: 4 // Upload 4 parts in parallel }); const writer = file.writer(); // ... write large file in chunks
- queueSize?: number
Number of parts to upload in parallel for multipart uploads.
- Default: 5
- Maximum: 255
Increasing this value can improve upload speeds for large files but will use more memory.
- region?: string
The AWS region. Defaults to
S3_REGION
orAWS_REGION
environment variables.const file = s3.file("my-file.txt", { bucket: "my-bucket", region: "us-west-2" });
- retry?: number
Number of retry attempts for failed uploads.
- Default: 3
- Maximum: 255
// Setting retry attempts const file = s3.file("my-file.txt", { retry: 5 // Retry failed uploads up to 5 times });
- secretAccessKey?: string
The secret access key for authentication. Defaults to
S3_SECRET_ACCESS_KEY
orAWS_SECRET_ACCESS_KEY
environment variables. - sessionToken?: string
Optional session token for temporary credentials. Defaults to
S3_SESSION_TOKEN
orAWS_SESSION_TOKEN
environment variables.// Using temporary credentials const file = s3.file("my-file.txt", { accessKeyId: tempAccessKey, secretAccessKey: tempSecretKey, sessionToken: tempSessionToken });
- storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.
// Setting explicit Storage class const file = s3.file("my-file.json", { storageClass: "STANDARD_IA" });
- type?: string
The Content-Type of the file. Automatically set based on file extension when possible.
// Setting explicit content type const file = s3.file("data.bin", { type: "application/octet-stream" });
- virtualHostedStyle?: boolean
Use a virtual hosted-style endpoint. Defaults to false. When true, if
endpoint
is provided, the bucket option is ignored.
// Using virtual hosted style const file = s3.file("my-file.txt", { virtualHostedStyle: true, endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com" });
interface S3ListObjectsOptions
- continuationToken?: string
ContinuationToken indicates to S3 that the list is being continued on this bucket with a token. ContinuationToken is obfuscated and is not a real key. You can use this ContinuationToken for pagination of the list results.
- encodingType?: 'url'
Encoding type used by S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can't parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this parameter to request that S3 encode the keys in the response.
- fetchOwner?: boolean
If you want to return the owner field with each key in the result, then set the FetchOwner field to true.
- maxKeys?: number
Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
- startAfter?: string
StartAfter is where you want S3 to start listing from. S3 starts listing after this specified key. StartAfter can be any key in the bucket.
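continuationToken and maxKeys exist to support pagination. A hedged sketch of a full-listing loop, written against a stand-in list function (listAllKeys and ListPage are hypothetical names, not a Bun or AWS API) rather than a concrete client:

```typescript
// Sketch of paginating list results via continuationToken / nextContinuationToken.
// `ListPage` and `listAllKeys` are stand-in names for illustration only.
type ListPage = {
  contents?: { key: string }[];
  isTruncated?: boolean;
  nextContinuationToken?: string;
};

async function listAllKeys(
  list: (opts: { continuationToken?: string; maxKeys?: number }) => Promise<ListPage>,
): Promise<string[]> {
  const keys: string[] = [];
  let continuationToken: string | undefined;
  do {
    const page = await list({ continuationToken, maxKeys: 1000 });
    for (const obj of page.contents ?? []) keys.push(obj.key);
    // Only continue when the service says the listing was truncated.
    continuationToken = page.isTruncated ? page.nextContinuationToken : undefined;
  } while (continuationToken);
  return keys;
}

// Demo with an in-memory mock standing in for a real S3 list call.
const pages: ListPage[] = [
  { contents: [{ key: "a.txt" }, { key: "b.txt" }], isTruncated: true, nextContinuationToken: "token-1" },
  { contents: [{ key: "c.txt" }], isTruncated: false },
];
let call = 0;
listAllKeys(async () => pages[call++]).then((keys) => console.log(keys)); // -> ["a.txt", "b.txt", "c.txt"]
```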
interface S3ListObjectsResponse
- commonPrefixes?: { prefix: string }[]
All of the keys (up to 1,000) that share the same prefix are grouped together. When counting the total number of returns by this API operation, this group of keys is considered as one item.
A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by a delimiter.
CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix.
For example, if the prefix is notes/ and the delimiter is a slash (/) as in notes/summer/july, the common prefix is notes/summer/. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
- contents?: { checksumAlgorithm: 'CRC32' | 'CRC32C' | 'SHA1' | 'SHA256' | 'CRC64NVME'; checksumType: 'COMPOSITE' | 'FULL_OBJECT'; eTag: string; key: string; lastModified: string; owner: { displayName: string; id: string }; restoreStatus: { isRestoreInProgress: boolean; restoreExpiryDate: string }; size: number; storageClass: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA' }[]
Metadata about each object returned.
- continuationToken?: string
If ContinuationToken was sent with the request, it is included in the response. You can use the returned ContinuationToken for pagination of the list response.
- delimiter?: string
Causes keys that contain the same string between the prefix and the first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys value.
- isTruncated?: boolean
Set to false if all of the results were returned. Set to true if more keys are available to return. If the number of results exceeds that specified by MaxKeys, all of the results might not be returned.
- keyCount?: number
KeyCount is the number of keys returned with this request. KeyCount will always be less than or equal to the MaxKeys field. For example, if you ask for 50 keys, your result will include 50 keys or fewer.
- maxKeys?: number
Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
- nextContinuationToken?: string
NextContinuationToken is sent when isTruncated is true, which means there are more keys in the bucket that can be listed. The next list requests to S3 can be continued with this NextContinuationToken. NextContinuationToken is obfuscated and is not a real key.
interface S3Options
Configuration options for S3 operations
- accessKeyId?: string
The access key ID for authentication. Defaults to
S3_ACCESS_KEY_ID
orAWS_ACCESS_KEY_ID
environment variables. - acl?: 'private' | 'public-read' | 'public-read-write' | 'aws-exec-read' | 'authenticated-read' | 'bucket-owner-read' | 'bucket-owner-full-control' | 'log-delivery-write'
The Access Control List (ACL) policy for the file. Controls who can access the file and what permissions they have.
// Setting public read access const file = s3.file("public-file.txt", { acl: "public-read", bucket: "my-bucket" });
- bucket?: string
The S3 bucket name. Defaults to
S3_BUCKET
orAWS_BUCKET
environment variables.// Using explicit bucket const file = s3.file("my-file.txt", { bucket: "my-bucket" });
- endpoint?: string
The S3-compatible service endpoint URL. Defaults to
S3_ENDPOINT
orAWS_ENDPOINT
environment variables.// AWS S3 const file = s3.file("my-file.txt", { endpoint: "https://s3.us-east-1.amazonaws.com" });
- partSize?: number
The size of each part in multipart uploads (in bytes).
- Minimum: 5 MiB
- Maximum: 5120 MiB
- Default: 5 MiB
// Configuring multipart uploads const file = s3.file("large-file.dat", { partSize: 10 * 1024 * 1024, // 10 MiB parts queueSize: 4 // Upload 4 parts in parallel }); const writer = file.writer(); // ... write large file in chunks
- queueSize?: number
Number of parts to upload in parallel for multipart uploads.
- Default: 5
- Maximum: 255
Increasing this value can improve upload speeds for large files but will use more memory.
- region?: string
The AWS region. Defaults to
S3_REGION
orAWS_REGION
environment variables.const file = s3.file("my-file.txt", { bucket: "my-bucket", region: "us-west-2" });
- retry?: number
Number of retry attempts for failed uploads.
- Default: 3
- Maximum: 255
// Setting retry attempts const file = s3.file("my-file.txt", { retry: 5 // Retry failed uploads up to 5 times });
- secretAccessKey?: string
The secret access key for authentication. Defaults to
S3_SECRET_ACCESS_KEY
orAWS_SECRET_ACCESS_KEY
environment variables. - sessionToken?: string
Optional session token for temporary credentials. Defaults to
S3_SESSION_TOKEN
orAWS_SESSION_TOKEN
environment variables.// Using temporary credentials const file = s3.file("my-file.txt", { accessKeyId: tempAccessKey, secretAccessKey: tempSecretKey, sessionToken: tempSessionToken });
- storageClass?: 'STANDARD' | 'DEEP_ARCHIVE' | 'EXPRESS_ONEZONE' | 'GLACIER' | 'GLACIER_IR' | 'INTELLIGENT_TIERING' | 'ONEZONE_IA' | 'OUTPOSTS' | 'REDUCED_REDUNDANCY' | 'SNOW' | 'STANDARD_IA'
By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects.
// Setting explicit Storage class const file = s3.file("my-file.json", { storageClass: "STANDARD_IA" });
- type?: string
The Content-Type of the file. Automatically set based on file extension when possible.
// Setting explicit content type const file = s3.file("data.bin", { type: "application/octet-stream" });
- virtualHostedStyle?: boolean
Use a virtual hosted-style endpoint. Defaults to false. When true, if
endpoint
is provided, the bucket option is ignored.
// Using virtual hosted style const file = s3.file("my-file.txt", { virtualHostedStyle: true, endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com" });
interface S3Stats
interface SavepointSQL
Represents a savepoint within a transaction
- options: Merge<SQLiteOptions, PostgresOrMySQLOptions> | Merge<PostgresOrMySQLOptions, SQLiteOptions>
Current client options
- values: any[],
Creates a new SQL array parameter
@param valuesThe values to create the array parameter from
@param typeNameOrTypeIDThe type name or type ID to create the array parameter from; if omitted, it defaults to JSON
@returnsA new SQL array parameter
const array = sql.array([1, 2, 3], "INT"); await sql`CREATE TABLE users_posts (user_id INT, posts_id INT[])`; await sql`INSERT INTO users_posts (user_id, posts_id) VALUES (${user.id}, ${array})`;
Begins a new transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin(async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
options: string,
Begins a new transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin("read write", async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
- name: string,
Begins a distributed transaction, also known as a two-phase commit. In a distributed transaction, phase 1 involves the coordinator preparing nodes by ensuring data is written and ready to commit, while phase 2 finalizes with nodes committing or rolling back based on the coordinator's decision, ensuring durability and releasing locks.
In PostgreSQL and MySQL, distributed transactions persist beyond the original session, allowing privileged users or coordinators to commit or roll them back, which supports recovery and administrative tasks. beginDistributed will automatically roll back if any exception is not caught, and you can commit or roll back later if everything goes well.
PostgreSQL natively supports distributed transactions using PREPARE TRANSACTION, while MySQL uses XA transactions, and MSSQL also supports distributed/XA transactions. However, in MSSQL, distributed transactions are tied to the original session, the DTC coordinator, and the specific connection. These transactions are automatically committed or rolled back following the same rules as regular transactions, with no option for manual intervention from other sessions; in MSSQL, distributed transactions are used to coordinate transactions using Linked Servers.
await sql.beginDistributed("numbers", async sql => { await sql`create table if not exists numbers (a int)`; await sql`insert into numbers values(1)`; }); // later you can call await sql.commitDistributed("numbers"); // or await sql.rollbackDistributed("numbers");
- options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If timeout is 0, it closes immediately; if not provided, it waits for all queries to finish before closing.
@param optionsThe options for the close
await sql.close({ timeout: 1 });
- name: string): Promise<void>;
Commits a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param nameThe name of the distributed transaction
await sql.commitDistributed("my_distributed_transaction");
- name: string,
Alternative method to begin a distributed transaction
- end(options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If timeout is 0, it closes immediately; if not provided, it waits for all queries to finish before closing. This is an alias of SQL.close.
@param optionsThe options for the close
await sql.end({ timeout: 1 });
Flushes any pending operations
sql.flush();
The reserve method pulls out a connection from the pool, and returns a client that wraps the single connection.
This can be used for running queries on an isolated connection. Calling reserve on an already-reserved SQL instance will return a new reserved connection, not the same connection (this behavior matches the postgres package).
const reserved = await sql.reserve(); await reserved`select * from users`; await reserved.release(); // with in a production scenario would be something more like const reserved = await sql.reserve(); try { // ... queries } finally { await reserved.release(); } // Bun supports Symbol.dispose and Symbol.asyncDispose // always release after context (safer) using reserved = await sql.reserve() await reserved`select * from users`
- name: string): Promise<void>;
Rolls back a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param nameThe name of the distributed transaction
await sql.rollbackDistributed("my_distributed_transaction");
Alternative method to begin a transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction(async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] })
options: string,
Alternative method to begin a transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction("read write", async sql => { const [user] = await sql` insert into users ( name ) values ( 'Murray' ) returning * ` const [account] = await sql` insert into accounts ( user_id ) values ( ${ user.user_id } ) returning * ` return [user, account] });
- string: string,values?: any[]
If you know what you're doing, you can use unsafe to pass any string you'd like. Please note that this can lead to SQL injection if you're not careful. You can also nest sql.unsafe within a safe sql expression. This is useful if only part of your query has unsafe elements.
const result = await sql.unsafe(`select ${danger} from users where id = ${dragons}`)
interface ServerWebSocket<T = undefined>
A fast WebSocket designed for servers.
Features:
- Message compression - Messages can be compressed
- Backpressure - If the client is not ready to receive data, the server will tell you.
- Dropped messages - If the client cannot receive data, the server will tell you.
- Topics - Messages can be ServerWebSocket.published to a specific topic and the client can ServerWebSocket.subscribe to topics
This is slightly different than the browser WebSocket which Bun supports for clients.
Powered by uWebSockets.
Bun.serve({ websocket: { open(ws) { console.log("Connected", ws.remoteAddress); }, message(ws, data) { console.log("Received", data); ws.send(data); }, close(ws, code, reason) { console.log("Disconnected", code, reason); }, } });
- binaryType?: 'arraybuffer' | 'uint8array' | 'nodebuffer'
Sets how binary data is returned in events.
- if
nodebuffer
, binary data is returned asBuffer
objects. (default) - if
arraybuffer
, binary data is returned asArrayBuffer
objects. - if
uint8array
, binary data is returned asUint8Array
objects.
let ws: WebSocket; ws.binaryType = "uint8array"; ws.addEventListener("message", ({ data }) => { console.log(data instanceof Uint8Array); // true });
- if
- data: T
Custom data that you can assign to a client, can be read and written at any time.
import { serve } from "bun"; serve({ fetch(request, server) { const data = { accessToken: request.headers.get("Authorization"), }; if (server.upgrade(request, { data })) { return; } return new Response(); }, websocket: { open(ws) { console.log(ws.data.accessToken); } } });
- readonly readyState: WebSocketReadyState
The ready state of the client.
- if
0
, the client is connecting. - if
1
, the client is connected. - if
2
, the client is closing. - if
3
, the client is closed.
console.log(socket.readyState); // 1
- if
- readonly remoteAddress: string
The IP address of the client.
console.log(socket.remoteAddress); // "127.0.0.1"
- code?: number,reason?: string): void;
Closes the connection.
Here is a list of close codes:
1000
means "normal closure" (default)1009
means a message was too big and was rejected1011
means the server encountered an error1012
means the server is restarting1013
means the server is too busy or the client is rate-limited4000
through4999
are reserved for applications (you can use it!)
To close the connection abruptly, use
terminate()
.@param codeThe close code to send
@param reasonThe close reason to send
- ): T;
Batches
send()
andpublish()
operations, which makes it faster to send data.The
message
,open
, anddrain
callbacks are automatically corked, so you only need to call this if you are sending messages outside of those callbacks or in async functions.@param callbackThe callback to run.
ws.cork((ctx) => { ctx.send("These messages"); ctx.sendText("are sent"); ctx.sendBinary(new TextEncoder().encode("together!")); });
- @param topic
The topic name.
ws.subscribe("chat"); console.log(ws.isSubscribed("chat")); // true
- @param data
The data to send
- @param data
The data to send
- topic: string,compress?: boolean): number;
Sends a message to subscribers of the topic.
@param topicThe topic name.
@param dataThe data to send.
@param compressShould the data be compressed? If the client does not support compression, this is ignored.
ws.publish("chat", "Hello!"); ws.publish("chat", "Compress this.", true); ws.publish("chat", new Uint8Array([1, 2, 3, 4]));
- topic: string,compress?: boolean): number;
Sends a binary message to subscribers of the topic.
@param topicThe topic name.
@param dataThe data to send.
@param compressShould the data be compressed? If the client does not support compression, this is ignored.
ws.publish("chat", new TextEncoder().encode("Hello!")); ws.publish("chat", new Uint8Array([1, 2, 3, 4]), true);
- topic: string,data: string,compress?: boolean): number;
Sends a text message to subscribers of the topic.
@param topicThe topic name.
@param dataThe data to send.
@param compressShould the data be compressed? If the client does not support compression, this is ignored.
ws.publish("chat", "Hello!"); ws.publish("chat", "Compress this.", true);
- @param data
The data to send.
@param compressShould the data be compressed? If the client does not support compression, this is ignored.
ws.send("Hello!"); ws.send("Compress this.", true); ws.send(new Uint8Array([1, 2, 3, 4]));
- @param data
The data to send.
@param compressShould the data be compressed? If the client does not support compression, this is ignored.
ws.send(new TextEncoder().encode("Hello!")); ws.send(new Uint8Array([1, 2, 3, 4]), true);
- @param data
The data to send.
@param compressShould the data be compressed? If the client does not support compression, this is ignored.
ws.send("Hello!"); ws.send("Compress this.", true);
- @param topic
The topic name.
ws.subscribe("chat");
Abruptly close the connection.
To gracefully close the connection, use
close()
.- @param topic
The topic name.
ws.unsubscribe("chat");
interface Socket<Data = undefined>
Represents a TCP or TLS socket connection used for network communication. This interface provides methods for reading, writing, managing the connection state, and handling TLS-specific features if applicable.
Sockets are created using
Bun.connect()
or accepted by aBun.listen()
server.- readonly alpnProtocol: null | string | false
String containing the selected ALPN protocol. Before a handshake has completed, this value is always null. When a handshake has completed but no ALPN protocol was selected, socket.alpnProtocol equals false.
- readonly bytesWritten: number
The total number of bytes successfully written to the socket since it was established. This includes data currently buffered by the OS but not yet acknowledged by the remote peer.
- data: Data
The user-defined data associated with this socket instance. This can be set when the socket is created via
Bun.connect({ data: ... })
. It can be read or updated at any time.// In a socket handler function open(socket: Socket<{ userId: string }>) { console.log(`Socket opened for user: ${socket.data.userId}`); socket.data.lastActivity = Date.now(); // Update data }
- readonly listener?: SocketListener<undefined>
Get the server that created this socket
This will return undefined if the socket was created by Bun.connect or if the listener has already closed.
- readonly localAddress: string
Local IP address connected to the socket
"192.168.1.100" | "2001:db8::1"
- readonly localFamily: 'IPv4' | 'IPv6'
IP protocol family used for the local endpoint of the socket
"IPv4" | "IPv6"
- readonly readyState: -2 | -1 | 0 | 1 | 2
The ready state of the socket.
You can assume that a positive value means the socket is open and usable
-2
= Shutdown-1
= Detached0
= Closed1
= Established2
= Else
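The readyState values above can be decoded with a small helper (illustrative only; the names below are not part of Bun's API):

```typescript
// Decode Socket.readyState values as documented above.
// A positive value means the socket is open and usable.
type ReadyState = -2 | -1 | 0 | 1 | 2;

function describeReadyState(state: ReadyState): string {
  switch (state) {
    case -2: return "shutdown";
    case -1: return "detached";
    case 0: return "closed";
    case 1: return "established";
    case 2: return "open (other)";
  }
}

function isUsable(state: ReadyState): boolean {
  return state > 0;
}
```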
- readonly remoteAddress: string
Remote IP address connected to the socket
"192.168.1.100" | "2001:db8::1"
Alias for
socket.end()
. Allows the socket to be used withusing
declarations for automatic resource management.async function processSocket() { using socket = await Bun.connect({ ... }); socket.write("Data"); // socket.end() is called automatically when exiting the scope }
Closes the socket.
This is a wrapper around
end()
andshutdown()
.Disables TLS renegotiation for this
Socket
instance. Once called, attempts to renegotiate will trigger anerror
handler on theSocket
.There is no support for renegotiation as a server. (Attempts by clients will result in a fatal alert so that ClientHello messages cannot be used to flood a server and escape higher-level limits.)
- end(byteOffset?: number,byteLength?: number): number;
Sends the final data chunk and initiates a graceful shutdown of the socket's write side. After calling
end()
, no more data can be written usingwrite()
orend()
. The socket remains readable until the remote end also closes its write side or the connection is terminated. This sends a TCP FIN packet after writing the data.@param dataOptional final data to write before closing. Same types as
write()
.@param byteOffsetOptional offset for buffer data.
@param byteLengthOptional length for buffer data.
@returnsThe number of bytes written for the final chunk. Returns
-1
if the socket was already closed or shutting down.// send some data and close the write side socket.end("Goodbye!"); // or close write side without sending final data socket.end();
Close the socket immediately
- length: number,label: string,
Keying material is used for validations to prevent different kinds of attacks in network protocols, for example in the specifications of IEEE 802.1X.
Example
const keyingMaterial = socket.exportKeyingMaterial( 128, 'client finished'); /* Example return value of keyingMaterial: <Buffer 76 26 af 99 c5 56 8e 42 09 91 ef 9f 93 cb ad 6c 7b 65 f8 53 f1 d8 d9 12 5a 33 b8 b5 25 df 7b 37 9f e0 e2 4f b8 67 83 a3 2f cd 5d 41 42 4c 91 74 ef 2c ... 78 more bytes>
@param lengthnumber of bytes to retrieve from keying material
@param labelan application specific label, typically this will be a value from the IANA Exporter Label Registry.
@param contextOptionally provide a context.
@returnsrequested bytes of the keying material
length: number,label: string,): void;Exports the keying material of the socket.
@param lengthThe length of the keying material to export.
@param labelThe label of the keying material to export.
@param contextThe context of the keying material to export.
Flush any buffered data to the socket. This attempts to send the data immediately, but success depends on the network conditions and the receiving end. It might be necessary after several
write
calls if immediate sending is critical, though often the OS handles flushing efficiently. Note thatwrite
calls outsideopen
/data
/drain
might benefit from manualcork
/flush
.Returns the reason why the peer's certificate has not been verified. This property is set only when
socket.authorized === false
.Returns an object representing the local certificate. The returned object has some properties corresponding to the fields of the certificate.
If there is no local certificate, an empty object will be returned. If the socket has been destroyed,
null
will be returned.Returns an object containing information on the negotiated cipher suite.
For example, a TLSv1.2 protocol with AES256-SHA cipher:
{ "name": "AES256-SHA", "standardName": "TLS_RSA_WITH_AES_256_CBC_SHA", "version": "SSLv3" }
Returns an object representing the type, name, and size of parameter of an ephemeral key exchange in
perfect forward secrecy
on a client connection. It returns an empty object when the key exchange is not ephemeral. As this is only supported on a client socket, null
is returned if called on a server socket. The supported types are'DH'
and'ECDH'
. Thename
property is available only when type is'ECDH'
.For example:
{ type: 'ECDH', name: 'prime256v1', size: 256 }
.Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed,
null
will be returned.If the full certificate chain was requested, each certificate will include an
issuerCertificate
property containing an object representing its issuer's certificate.@returnsA certificate object.
Returns the servername of the socket.
As the
Finished
messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.@returnsThe latest
Finished
message that has been sent to the socket as part of an SSL/TLS handshake, orundefined
if noFinished
message has been sent yet.As the
Finished
messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.@returnsThe latest
Finished
message that is expected or has actually been received from the socket as part of an SSL/TLS handshake, orundefined
if there is noFinished
message so far.For a client, returns the TLS session ticket if one is available, or
undefined
. For a server, always returnsundefined
.It may be useful for debugging.
See
Session Resumption
for more information.Returns a string containing the negotiated SSL/TLS protocol version of the current connection. The value
'unknown'
will be returned for connected sockets that have not completed the handshaking process. The valuenull
will be returned for server sockets or disconnected client sockets.Protocol versions are:
'SSLv3'
'TLSv1'
'TLSv1.1'
'TLSv1.2'
'TLSv1.3'
See
Session Resumption
for more information.@returnstrue
if the session was reused,false
otherwise. TLS Only: Checks if the current TLS session was resumed from a previous session. Returnstrue
if the session was resumed,false
otherwise.Keep Bun's process alive at least until this socket is closed
After the socket has closed, the socket is unref'd, the process may exit, and this becomes a no-op
- ): void;
Reset the socket's callbacks. This is useful with
bun --hot
to facilitate hot reloading.This will apply to all sockets from the same Listener. it is per socket only for Bun.connect.
If this is a TLS Socket
- enable?: boolean,initialDelay?: number): boolean;
Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. Set
initialDelay
(in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Only available for already connected sockets, will return false otherwise.Enabling the keep-alive functionality will set the following socket options: SO_KEEPALIVE=1 TCP_KEEPIDLE=initialDelay TCP_KEEPCNT=10 TCP_KEEPINTVL=1
@param enableDefault:
false
@param initialDelayDefault:
0
@returnstrue if it is able to set the keep-alive options and false if it fails.
- size?: number): boolean;
The
socket.setMaxSendFragment()
method sets the maximum TLS fragment size. Returnstrue
if setting the limit succeeded;false
otherwise.Smaller fragment sizes decrease the buffering latency on the client: larger fragments are buffered by the TLS layer until the entire fragment is received and its integrity is verified; large fragments can span multiple roundtrips and their processing can be delayed due to packet loss or reordering. However, smaller fragments add extra TLS framing bytes and CPU overhead, which may decrease overall server throughput.
@param sizeThe maximum TLS fragment size. The maximum value is
16384
. - noDelay?: boolean): boolean;
Enable/disable the use of Nagle's algorithm. Only available for already connected sockets; will return false otherwise.
@param noDelayDefault:
true
@returnstrue if it is able to set noDelay and false if it fails.
- ): void;
Sets the session of the socket.
@param sessionThe session to set.
- requestCert: boolean,rejectUnauthorized: boolean): void;
Sets the verify mode of the socket.
@param requestCertWhether to request a certificate.
@param rejectUnauthorizedWhether to reject unauthorized certificates.
- halfClose?: boolean): void;
Shuts down the write-half or both halves of the connection. This allows the socket to enter a half-closed state where it can still receive data but can no longer send data (
halfClose = true
), or close both read and write (halfClose = false
, similar toend()
but potentially more immediate depending on OS). Callsshutdown(2)
syscall internally.@param halfCloseIf
true
, only shuts down the write side (allows receiving). Iffalse
or omitted, shuts down both read and write. Defaults tofalse
.// Stop sending data, but allow receiving socket.shutdown(true); // Shutdown both reading and writing socket.shutdown();
Forcefully closes the socket connection immediately. This is an abrupt termination, unlike the graceful shutdown initiated by
end()
. It usesSO_LINGER
withl_onoff=1
andl_linger=0
before callingclose(2)
. Consider using close() or end() for graceful shutdowns.socket.terminate();
- seconds: number): void;
Set a timeout until the socket automatically closes.
To reset the timeout, call this function again.
When a timeout happens, the
timeout
callback is called and the socket is closed. Allow Bun's process to exit even if this socket is still open
After the socket has closed, this function does nothing.
Upgrades the socket to a TLS socket.
@param optionsThe options for the upgrade.
@returnsA tuple containing the raw socket and the TLS socket.
- byteOffset?: number,byteLength?: number): number;
Writes
data
to the socket. This method is unbuffered and non-blocking. This uses thesendto(2)
syscall internally.For optimal performance with multiple small writes, consider batching multiple writes together into a single
socket.write()
call.@param dataThe data to write. Can be a string (encoded as UTF-8),
ArrayBuffer
,TypedArray
, orDataView
.@param byteOffsetThe offset in bytes within the buffer to start writing from. Defaults to 0. Ignored for strings.
@param byteLengthThe number of bytes to write from the buffer. Defaults to the remaining length of the buffer from the offset. Ignored for strings.
@returnsThe number of bytes written. Returns
-1
if the socket is closed or shutting down. Can return less than the input size if the socket's buffer is full (backpressure).// Send a string const bytesWritten = socket.write("Hello, world!\n"); // Send binary data const buffer = new Uint8Array([0x01, 0x02, 0x03]); socket.write(buffer); // Send part of a buffer const largeBuffer = new Uint8Array(1024); // ... fill largeBuffer ... socket.write(largeBuffer, 100, 50); // Write 50 bytes starting from index 100
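Because write() can return fewer bytes than requested under backpressure, callers often loop from the last offset and retry the remainder from the drain callback. A minimal sketch against the write signature described above (the Writable interface and writeAll helper are illustrative, not Bun APIs):

```typescript
// Minimal shape of the write() method documented above (sketch only).
interface Writable {
  write(data: Uint8Array, byteOffset?: number, byteLength?: number): number;
}

// Write as much of `data` as the socket accepts and return the number
// of bytes still unsent, so the caller can retry from the `drain` callback.
function writeAll(socket: Writable, data: Uint8Array): number {
  let offset = 0;
  while (offset < data.length) {
    const written = socket.write(data, offset, data.length - offset);
    if (written <= 0) break; // -1 = closed/shutting down, 0 = backpressure
    offset += written;
  }
  return data.length - offset;
}
```

The returned count is what remains buffered in user space; a real implementation would stash it and resume writing when `drain` fires.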
interface SocketAddress
interface SocketHandler<Data = unknown, DataBinaryType extends BinaryType = 'buffer'>
- ): void | Promise<void>;
When the socket fails to be created, this function is called.
The promise returned by
Bun.connect
rejects after this function is called.When
connectError
is specified, the rejected promise will not be added to the promise rejection queue (so it won't be reported as an unhandled promise rejection, since connectError handles it).When
connectError
is not specified, the rejected promise will be added to the promise rejection queue.
interface SocketListener<Data = undefined>
interface SocketOptions<Data = unknown>
- allowHalfOpen?: boolean
Whether to allow half-open connections.
A half-open connection occurs when one end of the connection has called
close()
or sent a FIN packet, while the other end remains open. When set totrue
:- The socket won't automatically send FIN when the remote side closes its end
- The local side can continue sending data even after the remote side has closed
- The application must explicitly call
end()
to fully close the connection
When
false
, the socket automatically closes both ends of the connection when either side closes.
interface StringWidthOptions
- ambiguousIsNarrow?: boolean
When it's ambiguous and
true
, count emoji as 1 character wide. If
false
, emoji are counted as 2 characters wide. - countAnsiEscapeCodes?: boolean
If
true
, count ANSI escape codes as part of the string width. If
false
, ANSI escape codes are ignored when calculating the string width.
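To illustrate what countAnsiEscapeCodes: false means, here is a naive sketch that strips SGR color sequences before measuring. It deliberately ignores emoji and East Asian wide characters, which a real width calculation also has to handle:

```typescript
// Naive illustration of ignoring ANSI escape codes when measuring width.
// Strips SGR sequences like "\x1b[31m"; does NOT handle emoji or
// East Asian wide characters (a full implementation must).
function visibleLength(s: string): number {
  return s.replace(/\x1b\[[0-9;]*m/g, "").length;
}
```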
interface StructuredSerializeOptions
interface Subprocess<In extends SpawnOptions.Writable = SpawnOptions.Writable, Out extends SpawnOptions.Readable = SpawnOptions.Readable, Err extends SpawnOptions.Readable = SpawnOptions.Readable>
A process created by Bun.spawn.
This type accepts 3 optional type parameters which correspond to the
stdio
array from the options object. Instead of specifying these, you should use one of the following utility types instead:- ReadableSubprocess (any, pipe, pipe)
- WritableSubprocess (pipe, any, any)
- PipedSubprocess (pipe, pipe, pipe)
- NullSubprocess (ignore, ignore, ignore)
- readonly exitCode: null | number
Synchronously get the exit code of the process
If the process hasn't exited yet, this will return
null
- readonly exited: Promise<number>
The exit code of the process
The promise will resolve when the process exits
- readonly pid: number
The process ID of the child process
const { pid } = Bun.spawn({ cmd: ["echo", "hello"] }); console.log(pid); // 1234
- readonly readable: ReadableToIO<Out>
This returns the same value as Subprocess.stdout
It exists for compatibility with ReadableStream.pipeThrough
- readonly signalCode: null | Signals
Synchronously get the signal code of the process
If the process never sent a signal code, this will return
null
To receive signal code changes, use the
onExit
callback.If the signal code is unknown, it will return the original signal code number, but that case should essentially never happen.
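Combining the exitCode and signalCode fields described above, a small status helper might look like this (a hypothetical sketch, not part of Bun):

```typescript
// Summarize a subprocess using the exitCode/signalCode fields
// documented above. Hypothetical helper, not a Bun API.
function describeExit(exitCode: number | null, signalCode: string | null): string {
  if (signalCode !== null) return `terminated by signal ${signalCode}`;
  if (exitCode === null) return "still running";
  return exitCode === 0 ? "exited successfully" : `exited with code ${exitCode}`;
}
```

In an `onExit` callback one could log `describeExit(proc.exitCode, proc.signalCode)` for readable process diagnostics.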
- readonly stdio: [null, null, null, ...number[]]
Access extra file descriptors passed to the
stdio
option in the options object. Disconnect the IPC channel to the subprocess. This is only supported if the subprocess was created with the
ipc
option.- @param exitCode
The exitCode to send to the process
This method will tell Bun to wait for this process to exit after you already called
unref()
.Before shutting down, Bun will wait for all subprocesses to exit by default
Get the resource usage information of the process (max RSS, CPU time, etc)
Only available after the process has exited
If the process hasn't exited yet, this will return
undefined
- send(message: any): void;
Send a message to the subprocess. This is only supported if the subprocess was created with the
ipc
option, and is another instance ofbun
.Messages are serialized using the JSC serialize API, which allows for the same types that
postMessage
/structuredClone
supports. Before shutting down, Bun will wait for all subprocesses to exit by default
This method will tell Bun to not wait for this process to exit before shutting down.
interface SyncSubprocess<Out extends SpawnOptions.Readable = SpawnOptions.Readable, Err extends SpawnOptions.Readable = SpawnOptions.Readable>
A process created by Bun.spawnSync.
This type accepts 2 optional type parameters which correspond to the
stdout
andstderr
options. Instead of specifying these, you should use one of the following utility types instead:- ReadableSyncSubprocess (pipe, pipe)
- NullSyncSubprocess (ignore, ignore)
- resourceUsage: ResourceUsage
Get the resource usage information of the process (max RSS, CPU time, etc)
interface TCPSocket
Represents a TCP or TLS socket connection used for network communication. This interface provides methods for reading, writing, managing the connection state, and handling TLS-specific features if applicable.
Sockets are created using
Bun.connect()
or accepted by aBun.listen()
server.- readonly alpnProtocol: null | string | false
String containing the selected ALPN protocol. Before a handshake has completed, this value is always null. When a handshake has completed but no ALPN protocol was selected, socket.alpnProtocol equals false.
- readonly bytesWritten: number
The total number of bytes successfully written to the socket since it was established. This includes data currently buffered by the OS but not yet acknowledged by the remote peer.
- data: undefined
The user-defined data associated with this socket instance. This can be set when the socket is created via
Bun.connect({ data: ... })
. It can be read or updated at any time.// In a socket handler function open(socket: Socket<{ userId: string }>) { console.log(`Socket opened for user: ${socket.data.userId}`); socket.data.lastActivity = Date.now(); // Update data }
- readonly listener?: SocketListener<undefined>
Get the server that created this socket
This will return undefined if the socket was created by Bun.connect or if the listener has already closed.
- readonly localAddress: string
Local IP address connected to the socket
"192.168.1.100" | "2001:db8::1"
- readonly localFamily: 'IPv4' | 'IPv6'
IP protocol family used for the local endpoint of the socket
"IPv4" | "IPv6"
- readonly readyState: -2 | -1 | 0 | 1 | 2
The ready state of the socket.
You can assume that a positive value means the socket is open and usable
-2
= Shutdown-1
= Detached0
= Closed1
= Established2
= Else
- readonly remoteAddress: string
Remote IP address connected to the socket
"192.168.1.100" | "2001:db8::1"
Alias for
socket.end()
. Allows the socket to be used withusing
declarations for automatic resource management.async function processSocket() { using socket = await Bun.connect({ ... }); socket.write("Data"); // socket.end() is called automatically when exiting the scope }
Closes the socket.
This is a wrapper around
end()
andshutdown()
.Disables TLS renegotiation for this
Socket
instance. Once called, attempts to renegotiate will trigger anerror
handler on theSocket
.There is no support for renegotiation as a server. (Attempts by clients will result in a fatal alert so that ClientHello messages cannot be used to flood a server and escape higher-level limits.)
- end(byteOffset?: number,byteLength?: number): number;
Sends the final data chunk and initiates a graceful shutdown of the socket's write side. After calling
end()
, no more data can be written usingwrite()
orend()
. The socket remains readable until the remote end also closes its write side or the connection is terminated. This sends a TCP FIN packet after writing the data.@param dataOptional final data to write before closing. Same types as
write()
.@param byteOffsetOptional offset for buffer data.
@param byteLengthOptional length for buffer data.
@returnsThe number of bytes written for the final chunk. Returns
-1
if the socket was already closed or shutting down.// send some data and close the write side socket.end("Goodbye!"); // or close write side without sending final data socket.end();
Close the socket immediately
- length: number,label: string,
Keying material is used for validations to prevent different kinds of attacks in network protocols, for example in the specifications of IEEE 802.1X.
Example
const keyingMaterial = socket.exportKeyingMaterial( 128, 'client finished'); /* Example return value of keyingMaterial: <Buffer 76 26 af 99 c5 56 8e 42 09 91 ef 9f 93 cb ad 6c 7b 65 f8 53 f1 d8 d9 12 5a 33 b8 b5 25 df 7b 37 9f e0 e2 4f b8 67 83 a3 2f cd 5d 41 42 4c 91 74 ef 2c ... 78 more bytes>
@param lengthnumber of bytes to retrieve from keying material
@param labelan application specific label, typically this will be a value from the IANA Exporter Label Registry.
@param contextOptionally provide a context.
@returnsrequested bytes of the keying material
length: number,label: string,): void;Exports the keying material of the socket.
@param lengthThe length of the keying material to export.
@param labelThe label of the keying material to export.
@param contextThe context of the keying material to export.
Flush any buffered data to the socket. This attempts to send the data immediately, but success depends on the network conditions and the receiving end. It might be necessary after several
write
calls if immediate sending is critical, though often the OS handles flushing efficiently. Note thatwrite
calls outsideopen
/data
/drain
might benefit from manualcork
/flush
.Returns the reason why the peer's certificate has not been verified. This property is set only when
socket.authorized === false
.Returns an object representing the local certificate. The returned object has some properties corresponding to the fields of the certificate.
If there is no local certificate, an empty object will be returned. If the socket has been destroyed,
null
will be returned.Returns an object containing information on the negotiated cipher suite.
For example, a TLSv1.2 protocol with AES256-SHA cipher:
{ "name": "AES256-SHA", "standardName": "TLS_RSA_WITH_AES_256_CBC_SHA", "version": "SSLv3" }
Returns an object representing the type, name, and size of parameter of an ephemeral key exchange in
perfect forward secrecy
on a client connection. It returns an empty object when the key exchange is not ephemeral. As this is only supported on a client socket, null
is returned if called on a server socket. The supported types are'DH'
and'ECDH'
. Thename
property is available only when type is'ECDH'
.For example:
{ type: 'ECDH', name: 'prime256v1', size: 256 }
.Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed,
null
will be returned.If the full certificate chain was requested, each certificate will include an
issuerCertificate
property containing an object representing its issuer's certificate.@returnsA certificate object.
Returns the servername of the socket.
As the
Finished
messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.@returnsThe latest
Finished
message that has been sent to the socket as part of an SSL/TLS handshake, orundefined
if noFinished
message has been sent yet.As the
Finished
messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.@returnsThe latest
Finished
message that is expected or has actually been received from the socket as part of an SSL/TLS handshake, orundefined
if there is noFinished
message so far.For a client, returns the TLS session ticket if one is available, or
undefined
. For a server, always returnsundefined
.It may be useful for debugging.
See
Session Resumption
for more information.Returns a string containing the negotiated SSL/TLS protocol version of the current connection. The value
'unknown'
will be returned for connected sockets that have not completed the handshaking process. The valuenull
will be returned for server sockets or disconnected client sockets.Protocol versions are:
'SSLv3'
'TLSv1'
'TLSv1.1'
'TLSv1.2'
'TLSv1.3'
See
Session Resumption
for more information.@returnstrue
if the session was reused,false
otherwise. TLS Only: Checks if the current TLS session was resumed from a previous session. Returnstrue
if the session was resumed,false
otherwise.Keep Bun's process alive at least until this socket is closed
After the socket has closed, the socket is unref'd, the process may exit, and this becomes a no-op
- ): void;
Reset the socket's callbacks. This is useful with
bun --hot
to facilitate hot reloading.This will apply to all sockets from the same Listener. it is per socket only for Bun.connect.
If this is a TLS Socket
- enable?: boolean,initialDelay?: number): boolean;
Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. Set
initialDelay
(in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Only available for already connected sockets, will return false otherwise.Enabling the keep-alive functionality will set the following socket options: SO_KEEPALIVE=1 TCP_KEEPIDLE=initialDelay TCP_KEEPCNT=10 TCP_KEEPINTVL=1
@param enableDefault:
false
@param initialDelayDefault:
0
@returnstrue if it is able to set the keep-alive options and false if it fails.
- size?: number): boolean;
The
socket.setMaxSendFragment()
method sets the maximum TLS fragment size. Returnstrue
if setting the limit succeeded;false
otherwise.Smaller fragment sizes decrease the buffering latency on the client: larger fragments are buffered by the TLS layer until the entire fragment is received and its integrity is verified; large fragments can span multiple roundtrips and their processing can be delayed due to packet loss or reordering. However, smaller fragments add extra TLS framing bytes and CPU overhead, which may decrease overall server throughput.
@param sizeThe maximum TLS fragment size. The maximum value is
16384
. - noDelay?: boolean): boolean;
Enable/disable the use of Nagle's algorithm. Only available for already connected sockets; will return false otherwise.
@param noDelayDefault:
true
@returnstrue if it is able to set noDelay and false if it fails.
- ): void;
Sets the session of the socket.
@param sessionThe session to set.
- requestCert: boolean,rejectUnauthorized: boolean): void;
Sets the verify mode of the socket.
@param requestCertWhether to request a certificate.
@param rejectUnauthorizedWhether to reject unauthorized certificates.
- halfClose?: boolean): void;
Shuts down the write-half or both halves of the connection. This allows the socket to enter a half-closed state where it can still receive data but can no longer send data (
halfClose = true
), or close both read and write (halfClose = false
, similar toend()
but potentially more immediate depending on OS). Callsshutdown(2)
syscall internally.@param halfCloseIf
true
, only shuts down the write side (allows receiving). Iffalse
or omitted, shuts down both read and write. Defaults tofalse
.// Stop sending data, but allow receiving socket.shutdown(true); // Shutdown both reading and writing socket.shutdown();
Forcefully closes the socket connection immediately. This is an abrupt termination, unlike the graceful shutdown initiated by
end()
. It usesSO_LINGER
withl_onoff=1
andl_linger=0
before callingclose(2)
. Consider using close() or end() for graceful shutdowns.socket.terminate();
- seconds: number): void;
Set a timeout until the socket automatically closes.
To reset the timeout, call this function again.
When a timeout happens, the
timeout
callback is called and the socket is closed. Allow Bun's process to exit even if this socket is still open
After the socket has closed, this function does nothing.
Upgrades the socket to a TLS socket.
@param optionsThe options for the upgrade.
@returnsA tuple containing the raw socket and the TLS socket.
- byteOffset?: number,byteLength?: number): number;
Writes
data
to the socket. This method is unbuffered and non-blocking. This uses thesendto(2)
syscall internally.For optimal performance with multiple small writes, consider batching multiple writes together into a single
socket.write()
call.@param dataThe data to write. Can be a string (encoded as UTF-8),
ArrayBuffer
,TypedArray
, orDataView
.@param byteOffsetThe offset in bytes within the buffer to start writing from. Defaults to 0. Ignored for strings.
@param byteLengthThe number of bytes to write from the buffer. Defaults to the remaining length of the buffer from the offset. Ignored for strings.
@returnsThe number of bytes written. Returns
-1
if the socket is closed or shutting down. Can return less than the input size if the socket's buffer is full (backpressure).// Send a string const bytesWritten = socket.write("Hello, world!\n"); // Send binary data const buffer = new Uint8Array([0x01, 0x02, 0x03]); socket.write(buffer); // Send part of a buffer const largeBuffer = new Uint8Array(1024); // ... fill largeBuffer ... socket.write(largeBuffer, 100, 50); // Write 50 bytes starting from index 100
interface TCPSocketConnectOptions<Data = undefined>
- allowHalfOpen?: boolean
Whether to allow half-open connections.
A half-open connection occurs when one end of the connection has called
close()
or sent a FIN packet, while the other end remains open. When set totrue
:- The socket won't automatically send FIN when the remote side closes its end
- The local side can continue sending data even after the remote side has closed
- The application must explicitly call
end()
to fully close the connection
When
false
, the socket automatically closes both ends of the connection when either side closes. - exclusive?: boolean
Whether to use exclusive mode.
When set to
true
, the socket binds exclusively to the specified address:port combination, preventing other processes from binding to the same port.When
false
(default), other sockets may be able to bind to the same port depending on the operating system's socket sharing capabilities and settings.Exclusive mode is useful in scenarios where you want to ensure only one instance of your server can bind to a specific port at a time.
interface TCPSocketListener<Data = unknown>
interface TCPSocketListenOptions<Data = undefined>
- allowHalfOpen?: boolean
Whether to allow half-open connections.
A half-open connection occurs when one end of the connection has called
close()
or sent a FIN packet, while the other end remains open. When set totrue
:- The socket won't automatically send FIN when the remote side closes its end
- The local side can continue sending data even after the remote side has closed
- The application must explicitly call
end()
to fully close the connection
When
false
(default), the socket automatically closes both ends of the connection when either side closes. - exclusive?: boolean
Whether to use exclusive mode.
When set to
true
, the socket binds exclusively to the specified address:port combination, preventing other processes from binding to the same port.When
false
(default), other sockets may be able to bind to the same port depending on the operating system's socket sharing capabilities and settings.Exclusive mode is useful in scenarios where you want to ensure only one instance of your server can bind to a specific port at a time.
interface TLSOptions
Options for TLS connections
- ca?: string | BunFile | BufferSource | unknown[]
Optionally override the trusted CA certificates. Default is to trust the well-known CAs curated by Mozilla. Mozilla's CAs are completely replaced when CAs are explicitly specified using this option.
- cert?: string | BunFile | BufferSource | unknown[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- key?: string | BunFile | BufferSource | unknown[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- lowMemoryMode?: boolean
This sets
OPENSSL_RELEASE_BUFFERS
to 1. It reduces overall performance but saves some memory. - secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
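The fields above compose into a single options object. A configuration sketch (the file paths are hypothetical; `key`, `cert`, and `ca` accept BunFile values per the signatures documented here):

```typescript
// Configuration sketch only: TLSOptions populated from PEM files on disk.
import type { TLSOptions } from "bun";

const tls: TLSOptions = {
  key: Bun.file("./certs/server-key.pem"),     // private key, PEM format
  cert: Bun.file("./certs/server-cert.pem"),   // leaf + intermediates, PEM format
  ca: Bun.file("./certs/private-root-ca.pem"), // replaces Mozilla's default CA set
  lowMemoryMode: true, // OPENSSL_RELEASE_BUFFERS=1: saves memory, reduces performance
};
```

Note that supplying `ca` completely replaces the well-known Mozilla CAs, so public certificates will no longer validate unless their roots are included.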
interface TLSSocket
Represents a TCP or TLS socket connection used for network communication. This interface provides methods for reading, writing, managing the connection state, and handling TLS-specific features if applicable.
Sockets are created using
Bun.connect()
or accepted by aBun.listen()
server.- readonly alpnProtocol: null | string | false
String containing the selected ALPN protocol. Before a handshake has completed, this value is always null. When a handshake is completed but no ALPN protocol was selected, socket.alpnProtocol equals false.
- readonly bytesWritten: number
The total number of bytes successfully written to the socket since it was established. This includes data currently buffered by the OS but not yet acknowledged by the remote peer.
- data: undefined
The user-defined data associated with this socket instance. This can be set when the socket is created via
Bun.connect({ data: ... })
. It can be read or updated at any time.
// In a socket handler function
open(socket: Socket<{ userId: string }>) {
  console.log(`Socket opened for user: ${socket.data.userId}`);
  socket.data.lastActivity = Date.now(); // Update data
}
- readonly listener?: SocketListener<undefined>
Get the server that created this socket
This will return undefined if the socket was created by Bun.connect or if the listener has already closed.
- readonly localAddress: string
Local IP address connected to the socket
"192.168.1.100" | "2001:db8::1"
- readonly localFamily: 'IPv4' | 'IPv6'
IP protocol family used for the local endpoint of the socket
"IPv4" | "IPv6"
- readonly readyState: -2 | -1 | 0 | 1 | 2
The ready state of the socket.
You can assume that a positive value means the socket is open and usable.
-2 = Shutdown
-1 = Detached
0 = Closed
1 = Established
2 = Else
- readonly remoteAddress: string
Remote IP address connected to the socket
"192.168.1.100" | "2001:db8::1"
Alias for
socket.end()
. Allows the socket to be used withusing
declarations for automatic resource management.
async function processSocket() {
  using socket = await Bun.connect({ ... });
  socket.write("Data");
  // socket.end() is called automatically when exiting the scope
}
Closes the socket.
This is a wrapper around
end()
andshutdown()
.Disables TLS renegotiation for this
Socket
instance. Once called, attempts to renegotiate will trigger anerror
handler on theSocket
.There is no support for renegotiation as a server. (Attempts by clients will result in a fatal alert so that ClientHello messages cannot be used to flood a server and escape higher-level limits.)
- end(data?: string | BufferSource, byteOffset?: number, byteLength?: number): number;
Sends the final data chunk and initiates a graceful shutdown of the socket's write side. After calling
end()
, no more data can be written usingwrite()
orend()
. The socket remains readable until the remote end also closes its write side or the connection is terminated. This sends a TCP FIN packet after writing the data.@param dataOptional final data to write before closing. Same types as
write()
.@param byteOffsetOptional offset for buffer data.
@param byteLengthOptional length for buffer data.
@returnsThe number of bytes written for the final chunk. Returns
-1
if the socket was already closed or shutting down.
// send some data and close the write side
socket.end("Goodbye!");
// or close write side without sending final data
socket.end();
Close the socket immediately
- length: number,label: string,
Keying material is used for validations to prevent different kind of attacks in network protocols, for example in the specifications of IEEE 802.1X.
Example
const keyingMaterial = socket.exportKeyingMaterial( 128, 'client finished'); /* Example return value of keyingMaterial: <Buffer 76 26 af 99 c5 56 8e 42 09 91 ef 9f 93 cb ad 6c 7b 65 f8 53 f1 d8 d9 12 5a 33 b8 b5 25 df 7b 37 9f e0 e2 4f b8 67 83 a3 2f cd 5d 41 42 4c 91 74 ef 2c ... 78 more bytes>
@param lengthnumber of bytes to retrieve from keying material
@param labelan application specific label, typically this will be a value from the IANA Exporter Label Registry.
@param contextOptionally provide a context.
@returnsrequested bytes of the keying material
length: number,label: string,): void;
Exports the keying material of the socket.
@param lengthThe length of the keying material to export.
@param labelThe label of the keying material to export.
@param contextThe context of the keying material to export.
Flush any buffered data to the socket. This attempts to send the data immediately, but success depends on network conditions and the receiving end. It might be necessary after several write calls if immediate sending is critical, though the OS often handles flushing efficiently. Note that write calls outside the open/data/drain handlers might benefit from a manual cork/flush.
Returns the reason why the peer's certificate was not verified. This property is set only when socket.authorized === false.
Returns an object representing the local certificate. The returned object has some properties corresponding to the fields of the certificate.
If there is no local certificate, an empty object will be returned. If the socket has been destroyed,
null
will be returned.Returns an object containing information on the negotiated cipher suite.
For example, a TLSv1.2 protocol with AES256-SHA cipher:
{ "name": "AES256-SHA", "standardName": "TLS_RSA_WITH_AES_256_CBC_SHA", "version": "SSLv3" }
Returns an object representing the type, name, and size of parameter of an ephemeral key exchange in
perfect forward secrecy
on a client connection. It returns an empty object when the key exchange is not ephemeral. As this is only supported on a client socket;null
is returned if called on a server socket. The supported types are'DH'
and'ECDH'
. Thename
property is available only when type is'ECDH'
.For example:
{ type: 'ECDH', name: 'prime256v1', size: 256 }
.Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed,
null
will be returned.If the full certificate chain was requested, each certificate will include an
issuerCertificate
property containing an object representing its issuer's certificate.@returnsA certificate object.
Returns the servername of the socket.
As the
Finished
messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.@returnsThe latest
Finished
message that has been sent to the socket as part of a SSL/TLS handshake, orundefined
if noFinished
message has been sent yet.As the
Finished
messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.@returnsThe latest
Finished
message that is expected or has actually been received from the socket as part of a SSL/TLS handshake, orundefined
if there is noFinished
message so far.For a client, returns the TLS session ticket if one is available, or
undefined
. For a server, always returnsundefined
.It may be useful for debugging.
See
Session Resumption
for more information.Returns a string containing the negotiated SSL/TLS protocol version of the current connection. The value
'unknown'
will be returned for connected sockets that have not completed the handshaking process. The valuenull
will be returned for server sockets or disconnected client sockets.Protocol versions are:
'SSLv3'
'TLSv1'
'TLSv1.1'
'TLSv1.2'
'TLSv1.3'
See
Session Resumption
for more information.@returnstrue
if the session was reused,false
otherwise. TLS Only: Checks if the current TLS session was resumed from a previous session. Returnstrue
if the session was resumed,false
otherwise.Keep Bun's process alive at least until this socket is closed
After the socket has closed, the socket is unref'd, the process may exit, and this becomes a no-op
- ): void;
Reset the socket's callbacks. This is useful with
bun --hot
to facilitate hot reloading.This will apply to all sockets from the same Listener. it is per socket only for Bun.connect.
If this is a TLS Socket
- enable?: boolean,initialDelay?: number): boolean;
Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. Set
initialDelay
(in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Only available for already connected sockets, will return false otherwise.Enabling the keep-alive functionality will set the following socket options: SO_KEEPALIVE=1 TCP_KEEPIDLE=initialDelay TCP_KEEPCNT=10 TCP_KEEPINTVL=1
@param enableDefault:
false
@param initialDelayDefault:
0
@returns true if it was able to enable keep-alive and false if it fails.
- size?: number): boolean;
The
socket.setMaxSendFragment()
method sets the maximum TLS fragment size. Returnstrue
if setting the limit succeeded;false
otherwise.Smaller fragment sizes decrease the buffering latency on the client: larger fragments are buffered by the TLS layer until the entire fragment is received and its integrity is verified; large fragments can span multiple roundtrips and their processing can be delayed due to packet loss or reordering. However, smaller fragments add extra TLS framing bytes and CPU overhead, which may decrease overall server throughput.
@param sizeThe maximum TLS fragment size. The maximum value is
16384
. - noDelay?: boolean): boolean;
Enable/disable the use of Nagle's algorithm. Only available for already connected sockets, will return false otherwise
@param noDelayDefault:
true
@returns true if it was able to set noDelay and false if it fails.
- ): void;
Sets the session of the socket.
@param sessionThe session to set.
- requestCert: boolean,rejectUnauthorized: boolean): void;
Sets the verify mode of the socket.
@param requestCertWhether to request a certificate.
@param rejectUnauthorizedWhether to reject unauthorized certificates.
- halfClose?: boolean): void;
Shuts down the write-half or both halves of the connection. This allows the socket to enter a half-closed state where it can still receive data but can no longer send data (
halfClose = true
), or close both read and write (halfClose = false
, similar toend()
but potentially more immediate depending on OS). Callsshutdown(2)
syscall internally.@param halfCloseIf
true
, only shuts down the write side (allows receiving). Iffalse
or omitted, shuts down both read and write. Defaults tofalse
.
// Stop sending data, but allow receiving
socket.shutdown(true);
// Shutdown both reading and writing
socket.shutdown();
Forcefully closes the socket connection immediately. This is an abrupt termination, unlike the graceful shutdown initiated by
end()
. It usesSO_LINGER
withl_onoff=1
andl_linger=0
before callingclose(2)
. Consider using close() or end() for graceful shutdowns.
socket.terminate();
- seconds: number): void;
Set a timeout until the socket automatically closes.
To reset the timeout, call this function again.
When a timeout happens, the timeout callback is called and the socket is closed.
Allow Bun's process to exit even if this socket is still open
After the socket has closed, this function does nothing.
Upgrades the socket to a TLS socket.
@param optionsThe options for the upgrade.
@returnsA tuple containing the raw socket and the TLS socket.
- byteOffset?: number,byteLength?: number): number;
Writes
data
to the socket. This method is unbuffered and non-blocking. This uses thesendto(2)
syscall internally.For optimal performance with multiple small writes, consider batching multiple writes together into a single
socket.write()
call.@param dataThe data to write. Can be a string (encoded as UTF-8),
ArrayBuffer
,TypedArray
, orDataView
.@param byteOffsetThe offset in bytes within the buffer to start writing from. Defaults to 0. Ignored for strings.
@param byteLengthThe number of bytes to write from the buffer. Defaults to the remaining length of the buffer from the offset. Ignored for strings.
@returnsThe number of bytes written. Returns
-1
if the socket is closed or shutting down. Can return less than the input size if the socket's buffer is full (backpressure).
// Send a string
const bytesWritten = socket.write("Hello, world!\n");
// Send binary data
const buffer = new Uint8Array([0x01, 0x02, 0x03]);
socket.write(buffer);
// Send part of a buffer
const largeBuffer = new Uint8Array(1024);
// ... fill largeBuffer ...
socket.write(largeBuffer, 100, 50); // Write 50 bytes starting from index 100
interface TLSUpgradeOptions<Data>
interface TransactionSQL
Represents a client within a transaction context. Extends SQL with savepoint functionality.
- options: Merge<SQLiteOptions, PostgresOrMySQLOptions> | Merge<PostgresOrMySQLOptions, SQLiteOptions>
Current client options
- values: any[],
Creates a new SQL array parameter
@param valuesThe values to create the array parameter from
@param typeNameOrTypeIDThe type name or type ID to create the array parameter from, if omitted it will default to JSON
@returnsA new SQL array parameter
const array = sql.array([1, 2, 3], "INT");
await sql`CREATE TABLE users_posts (user_id INT, posts_id INT[])`;
await sql`INSERT INTO users_posts (user_id, posts_id) VALUES (${user.id}, ${array})`;
Begins a new transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
    returning *
  `
  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${user.user_id}
    )
    returning *
  `
  return [user, account]
})
options: string,Begins a new transaction with options.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin("read write", async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
    returning *
  `
  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${user.user_id}
    )
    returning *
  `
  return [user, account]
})
- name: string,
Begins a distributed transaction, also known as a Two-Phase Commit. In a distributed transaction, Phase 1 involves the coordinator preparing nodes by ensuring data is written and ready to commit, while Phase 2 finalizes with nodes committing or rolling back based on the coordinator's decision, ensuring durability and releasing locks. In PostgreSQL and MySQL, distributed transactions persist beyond the original session, allowing privileged users or coordinators to commit or roll them back, which enables recovery and administrative tasks. beginDistributed will automatically roll back if any exception is not caught, and you can commit or roll back later if everything goes well. PostgreSQL natively supports distributed transactions using PREPARE TRANSACTION, while MySQL uses XA Transactions, and MSSQL also supports distributed/XA transactions. However, in MSSQL, distributed transactions are tied to the original session, the DTC coordinator, and the specific connection; they are automatically committed or rolled back following the same rules as regular transactions, with no option for manual intervention from other sessions. In MSSQL, distributed transactions are used to coordinate transactions using Linked Servers.
await sql.beginDistributed("numbers", async sql => {
  await sql`create table if not exists numbers (a int)`;
  await sql`insert into numbers values(1)`;
});
// later you can call
await sql.commitDistributed("numbers");
// or
await sql.rollbackDistributed("numbers");
- options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If timeout is 0, it closes immediately; if it is not provided, it waits for all queries to finish before closing.
@param optionsThe options for the close
await sql.close({ timeout: 1 });
- name: string): Promise<void>;
Commits a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param nameThe name of the distributed transaction
await sql.commitDistributed("my_distributed_transaction");
- name: string,
Alternative method to begin a distributed transaction
- end(options?: { timeout: number }): Promise<void>;
Closes the database connection with an optional timeout in seconds. If timeout is 0, it closes immediately; if it is not provided, it waits for all queries to finish before closing. This is an alias of SQL.close.
@param optionsThe options for the close
await sql.end({ timeout: 1 });
Flushes any pending operations
sql.flush();
The reserve method pulls out a connection from the pool, and returns a client that wraps the single connection.
Using reserve() inside of a transaction will return a brand new connection, not one related to the transaction. This matches the behaviour of the
postgres
package.- name: string): Promise<void>;
Rolls back a distributed transaction, also known as a prepared transaction in PostgreSQL or an XA transaction in MySQL.
@param nameThe name of the distributed transaction
await sql.rollbackDistributed("my_distributed_transaction");
- name: string,): Promise<T>;
Creates a savepoint within the current transaction
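A usage sketch of savepoints inside a transaction (this assumes a configured Bun SQL instance and a running database; the table names and the savepoint name are hypothetical, and the `(name, callback)` shape follows the truncated signature above). A failed savepoint rolls back only its own work, while the surrounding transaction can still commit:

```typescript
// Sketch only: requires a reachable database to actually run.
import { sql } from "bun";

await sql.begin(async tx => {
  await tx`insert into orders (id) values (1)`;
  try {
    await tx.savepoint("items", async sp => {
      await sp`insert into order_items (order_id) values (1)`;
      throw new Error("undo only the savepoint's work");
    });
  } catch {
    // the order_items insert is rolled back; the orders insert survives
  }
});
```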
Alternative method to begin a transaction.
Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
    returning *
  `
  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${user.user_id}
    )
    returning *
  `
  return [user, account]
})
options: string,Alternative method to begin a transaction with options Will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.transaction will resolve with the returned value from the callback function. BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.transaction("read write", async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Murray'
    )
    returning *
  `
  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${user.user_id}
    )
    returning *
  `
  return [user, account]
});
- string: string,values?: any[]
If you know what you're doing, you can use unsafe to pass any string you'd like. Please note that this can lead to SQL injection if you're not careful. You can also nest sql.unsafe within a safe sql expression. This is useful if only part of your query has unsafe elements.
const result = await sql.unsafe(`select ${danger} from users where id = ${dragons}`)
interface TransformerFlushCallback<O>
interface TransformerStartCallback<O>
interface TransformerTransformCallback<I, O>
interface TranspilerOptions
- deadCodeElimination?: boolean
Experimental
Enabled by default, use this to disable dead code elimination.
Some other transpiler options may still do some specific dead code elimination.
- define?: Record<string, string>
Replace key with value. Value must be a JSON string.
{ "process.env.NODE_ENV": "\"production\"" }
- inline?: boolean
This does two things (and possibly more in the future):
- const declarations to primitive types (excluding Object/Array) at the top of a scope before any let or var declarations will be inlined into their usages.
- let and const declarations only used once are inlined into their usages.
JavaScript engines typically do these optimizations internally, however it might only happen much later in the compilation pipeline, after code has been executed many many times.
This will typically shrink the output size of code, but it might increase it in some cases. Do your own benchmarks!
- macro?: MacroMap
Replace an import statement with a macro.
This will remove the import statement from the final output and replace any function calls or template strings with the result returned by the macro
{ "react-relay": { "graphql": "bun-macro-relay" } }
Code that calls
graphql
will be replaced with the result of the macro.import {graphql} from "react-relay"; // Input: const query = graphql` query { ... on User { id } } }`;
Will be replaced with:
import UserQuery from "./UserQuery.graphql"; const query = UserQuery;
interface TSConfig
tsconfig.json options supported by Bun
- compilerOptions?: { baseUrl: string; importsNotUsedAsValues: 'error' | 'preserve' | 'remove'; jsx: 'preserve' | 'react' | 'react-jsx' | 'react-jsxdev'; jsxFactory: string; jsxFragmentFactory: string; jsxImportSource: string; moduleSuffixes: any; paths: Record<string, string[]>; useDefineForClassFields: boolean }
interface UnderlyingSink<W = any>
interface UnderlyingSinkAbortCallback
interface UnderlyingSinkCloseCallback
interface UnderlyingSinkStartCallback
interface UnderlyingSinkWriteCallback<W>
interface UnderlyingSource<R = any>
interface UnderlyingSourceCancelCallback
interface UnderlyingSourcePullCallback<R>
interface UnderlyingSourceStartCallback<R>
interface UnixSocketListener<Data>
interface UnixSocketOptions<Data = undefined>
- allowHalfOpen?: boolean
Whether to allow half-open connections.
A half-open connection occurs when one end of the connection has called
close()
or sent a FIN packet, while the other end remains open. When set totrue
:- The socket won't automatically send FIN when the remote side closes its end
- The local side can continue sending data even after the remote side has closed
- The application must explicitly call
end()
to fully close the connection
When
false
, the socket automatically closes both ends of the connection when either side closes.
interface WebSocketEventMap
interface WebSocketHandler<T>
Create a server-side ServerWebSocket handler for use with Bun.serve
import { serve } from "bun";

serve<{ name: string }>({
  port: 3000,
  websocket: {
    open: (ws) => {
      console.log("Client connected");
    },
    message: (ws, message) => {
      console.log(`${ws.data.name}: ${message}`);
    },
    close: (ws) => {
      console.log("Client disconnected");
    },
  },
  fetch(req, server) {
    const url = new URL(req.url);
    if (url.pathname === "/chat") {
      const upgraded = server.upgrade(req, {
        data: {
          name: new URL(req.url).searchParams.get("name"),
        },
      });
      if (!upgraded) {
        return new Response("Upgrade failed", { status: 400 });
      }
      return;
    }
    return new Response("Hello World");
  },
});
- backpressureLimit?: number
Sets the maximum number of bytes that can be buffered on a single connection.
Default is 16 MB, or
1024 * 1024 * 16
in bytes. - closeOnBackpressureLimit?: boolean
Sets if the connection should be closed if
backpressureLimit
is reached. - data?: T
Specify the type for the ServerWebSocket.data property on connecting websocket clients. You can pass this value when you make a call to Server.upgrade.
This pattern exists in Bun due to a TypeScript limitation (#26242)
Bun.serve({
  websocket: {
    data: {} as { name: string }, // ← Specify the type of `ws.data` like this
    message: (ws, message) => console.log(ws.data.name, 'says:', message),
  },
  // ...
});
- idleTimeout?: number
Sets the number of seconds to wait before timing out a connection due to no messages or pings.
- maxPayloadLength?: number
Sets the maximum size of messages in bytes.
Default is 16 MB, or
1024 * 1024 * 16
in bytes. - perMessageDeflate?: boolean | { compress: boolean | WebSocketCompressor; decompress: boolean | WebSocketCompressor }
Sets the compression level for messages, for clients that support it. By default, compression is disabled.
- code: number,reason: string): void | Promise<void>;
Called when a connection is closed.
@param wsThe websocket that was closed
@param codeThe close code
@param reasonThe close reason
- ): void | Promise<void>;
Called when a connection was previously under backpressure, meaning it had too many queued messages, but is now ready to receive more data.
@param wsThe websocket that is ready for more data
- ): void | Promise<void>;
Called when the server receives an incoming message.
If the message is not a
string
, its type is based on the value ofbinaryType
.- if
nodebuffer
, then the message is aBuffer
. - if
arraybuffer
, then the message is anArrayBuffer
. - if
uint8array
, then the message is aUint8Array
.
@param wsThe websocket that sent the message
@param messageThe message received
- if
- @param ws
The websocket that was opened
- @param ws
The websocket that received the ping
@param dataThe data sent with the ping
- @param ws
The websocket that received the pong
@param dataThe data sent with the pong
interface WhichOptions
interface WorkerEventMap
interface WorkerOptions
Bun's Web Worker constructor supports some extra options on top of the API browsers have.
- argv?: any[]
List of arguments which would be stringified and appended to
Bun.argv
/process.argv
in the worker. This is mostly similar to thedata
but the values will be available on the globalBun.argv
as if they were passed as CLI options to the script. - env?: Record<string, string> | typeof SHARE_ENV
If set, specifies the initial value of process.env inside the Worker thread. As a special value, worker.SHARE_ENV may be used to specify that the parent thread and the child thread should share their environment variables; in that case, changes to one thread's process.env object affect the other thread as well. Default: process.env.
- name?: string
A string specifying an identifying name for the DedicatedWorkerGlobalScope representing the scope of the worker, which is mainly useful for debugging purposes.
- preload?: string | string[]
An array of module specifiers to preload in the worker.
These modules load before the worker's entry point is executed.
Equivalent to passing the
--preload
CLI argument, but only for this Worker. - ref?: boolean
When
true
, the worker will keep the parent thread alive until the worker is terminated orunref
'd. Whenfalse
, the worker will not keep the parent thread alive.By default, this is
false
. - smol?: boolean
Use less memory, but make the worker slower.
Internally, this sets the heap size configuration in JavaScriptCore to be the small heap instead of the large heap.
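The Bun-specific options above can be combined in one constructor call. A configuration sketch (the worker and preload file paths are hypothetical):

```typescript
// Configuration sketch: Bun's extra Worker options alongside standard ones.
const worker = new Worker("./worker.ts", {
  name: "image-resizer",   // identifies the worker scope when debugging
  ref: true,               // keep the parent alive until terminate()/unref()
  smol: true,              // smaller JSC heap: less memory, slower worker
  preload: ["./setup.ts"], // modules loaded before the worker's entry point
  env: { MODE: "test" },   // initial process.env inside the worker
});

worker.terminate();
```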
interface ZlibCompressionOptions
Compression options for
Bun.deflateSync
andBun.gzipSync
- level?: 0 | 1 | 5 | 3 | 4 | 6 | -1 | 2 | 7 | 8 | 9
The compression level to use. Must be between -1 and 9.
- A value of -1 uses the default compression level (currently 6)
- A value of 0 gives no compression
- A value of 1 gives least compression, fastest speed
- A value of 9 gives best compression, slowest speed
- memLevel?: 1 | 5 | 3 | 4 | 6 | 2 | 7 | 8 | 9
How much memory should be allocated for the internal compression state.
A value of 1 uses minimum memory but is slow and reduces compression ratio.
A value of 9 uses maximum memory for optimal speed. The default is 8.
- strategy?: number
Tunes the compression algorithm.
Z_DEFAULT_STRATEGY: For normal data (Default)
Z_FILTERED: For data produced by a filter or predictor
Z_HUFFMAN_ONLY: Force Huffman encoding only (no string match)
Z_RLE: Limit match distances to one (run-length encoding)
Z_FIXED: Prevents the use of dynamic Huffman codes
Z_RLE is designed to be almost as fast as Z_HUFFMAN_ONLY, but gives better compression for PNG image data. Z_FILTERED forces more Huffman coding and less string matching; it is somewhat intermediate between Z_DEFAULT_STRATEGY and Z_HUFFMAN_ONLY. Filtered data consists mostly of small values with a somewhat random distribution. - windowBits?: 25 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 28 | -9 | -10 | -11 | -12 | -13 | -14 | -15 | 26 | 27 | 29 | 30 | 31
The base 2 logarithm of the window size (the size of the history buffer).
Larger values of this parameter result in better compression at the expense of memory usage.
The following value ranges are supported:
9..15
: The output will have a zlib header and footer (Deflate)-9..-15
: The output will not have a zlib header or footer (Raw Deflate)25..31
(16+9..15
): The output will have a gzip header and footer (gzip)
The gzip header will have no file name, no extra data, no comment, no modification time (set to zero) and no header CRC.
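These are the standard zlib tuning knobs. A minimal sketch of their effect, using `node:zlib` (which Bun also implements) so the behavior can be checked anywhere; with Bun's native API the equivalent call would be `Bun.gzipSync(data, { level, memLevel, strategy })`:

```javascript
// Sketch of ZlibCompressionOptions semantics via node:zlib, which Bun also
// implements. The same option names apply to Bun.gzipSync / Bun.deflateSync.
import { gzipSync, gunzipSync, constants } from "node:zlib";

const data = Buffer.from("hello ".repeat(1000));

// level 9: best compression, slowest; memLevel 8 is the default state memory.
const best = gzipSync(data, { level: 9, memLevel: 8 });

// level 1: fastest; Z_RLE restricts matching to run-length encoding.
const fast = gzipSync(data, { level: 1, strategy: constants.Z_RLE });

console.log(best.length < data.length); // true: repetitive input compresses well
console.log(gunzipSync(fast).equals(data)); // true: strategy never affects correctness
```

Note that `strategy` only changes how the compressor searches for matches; any strategy still produces a valid stream that round-trips through `gunzipSync`.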
- type ArrayBufferView<TArrayBuffer extends ArrayBufferLike = ArrayBufferLike> = NodeJS.TypedArray<TArrayBuffer> | DataView<TArrayBuffer>
- type ArrayType = 'BOOLEAN' | 'BYTEA' | 'CHAR' | 'NAME' | 'TEXT' | 'CHAR' | 'VARCHAR' | 'SMALLINT' | 'INT2VECTOR' | 'INTEGER' | 'INT' | 'BIGINT' | 'REAL' | 'DOUBLE PRECISION' | 'NUMERIC' | 'MONEY' | 'OID' | 'TID' | 'XID' | 'CID' | 'JSON' | 'JSONB' | 'JSONPATH' | 'XML' | 'POINT' | 'LSEG' | 'PATH' | 'BOX' | 'POLYGON' | 'LINE' | 'CIRCLE' | 'CIDR' | 'MACADDR' | 'INET' | 'MACADDR8' | 'DATE' | 'TIME' | 'TIMESTAMP' | 'TIMESTAMPTZ' | 'INTERVAL' | 'TIMETZ' | 'BIT' | 'VARBIT' | 'ACLITEM' | 'PG_DATABASE' | string & {}
- type BeforeExitListener = (code: number) => void
- type BinaryType = keyof BinaryTypeList
- type BlobOrStringOrBuffer = string | NodeJS.TypedArray | ArrayBufferLike | Blob
- type BlobPart = string | Blob | BufferSource
- type BodyInit = ReadableStream | Bun.XMLHttpRequestBodyInit | AsyncIterable<string | ArrayBuffer | ArrayBufferView> | AsyncGenerator<string | ArrayBuffer | ArrayBufferView> | () => AsyncGenerator<string | ArrayBuffer | ArrayBufferView>
- type BufferSource = NodeJS.TypedArray<ArrayBufferLike> | DataView<ArrayBufferLike> | ArrayBufferLike
- type BunLockFile =
  Types for `bun.lock`.
  - packages: {}
    INFO = { prod/dev/optional/peer dependencies, os, cpu, libc (TODO), bin, binDir }
    // The first index is the resolution for each type of package:
    // npm       -> [ "name@version", registry (TODO: remove if default), INFO, integrity ]
    // symlink   -> [ "name@link:path", INFO ]
    // folder    -> [ "name@file:path", INFO ]
    // workspace -> [ "name@workspace:path" ] // workspace is only a path
    // tarball   -> [ "name@tarball", INFO ]
    // root      -> [ "name@root:", { bin, binDir } ]
    // git       -> [ "name@git+repo", INFO, .bun-tag string (TODO: remove this) ]
    // github    -> [ "name@github:user/repo", INFO, .bun-tag string (TODO: remove this) ]
- type BunLockFileBasePackageInfo =
- type BunLockFilePackageArray = [pkg: string, registry: string, info: BunLockFilePackageInfo, integrity: string] | [pkg: string, info: BunLockFilePackageInfo] | [pkg: string] | [pkg: string, info: BunLockFilePackageInfo, bunTag: string] | [pkg: string, info: Pick<BunLockFileBasePackageInfo, 'bin' | 'binDir'>]
- type BunLockFilePackageInfo = BunLockFileBasePackageInfo & { bundled: true; cpu: string | string[]; os: string | string[] }
- type BunLockFileWorkspacePackage = BunLockFileBasePackageInfo & { name: string; version: string }
- type ColorInput = { a: number; b: number; g: number; r: number } | [number, number, number] | [number, number, number, number] | Uint8Array<ArrayBuffer> | Uint8ClampedArray<ArrayBuffer> | Float32Array | Float64Array | string | number | { toString(): string }
Valid inputs for color
- type CookieSameSite = 'strict' | 'lax' | 'none'
- type CSRFAlgorithm = 'blake2b256' | 'blake2b512' | 'sha256' | 'sha384' | 'sha512' | 'sha512-256'
- type DigestEncoding = 'utf8' | 'ucs2' | 'utf16le' | 'latin1' | 'ascii' | 'base64' | 'base64url' | 'hex'
- type DisconnectListener = () => void
- type DOMHighResTimeStamp = number
- type Encoding = 'utf-8' | 'windows-1252' | 'utf-16'
- type ExitListener = (code: number) => void
- type FFIFunctionCallable = Function & { __ffi_function_callable: FFIFunctionCallableSymbol }
- type FormDataEntryValue = File | string
- type HeadersInit = string[][] | Record<string, string | ReadonlyArray<string>> | Headers
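The three `HeadersInit` shapes can be sketched with the standard `Headers` constructor (available in Bun and modern Node alike); note that header names are case-insensitive:

```javascript
// The three HeadersInit shapes accepted by the standard Headers constructor.
const fromPairs = new Headers([["Content-Type", "application/json"]]); // string[][]
const fromRecord = new Headers({ "X-Api-Key": "abc" });                // Record<string, string>
const fromHeaders = new Headers(fromPairs);                            // another Headers

console.log(fromHeaders.get("content-type")); // "application/json" (lookup is case-insensitive)
console.log(fromRecord.has("x-api-key")); // true
```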
- type HMREvent = `bun:${HMREventNames}` | string & {}
The event names for the dev server
- type HMREventNames = 'beforeUpdate' | 'afterUpdate' | 'beforeFullReload' | 'beforePrune' | 'invalidate' | 'error' | 'ws:disconnect' | 'ws:connect'
- type ImportKind = 'import-statement' | 'require-call' | 'require-resolve' | 'dynamic-import' | 'import-rule' | 'url-token' | 'internal' | 'entry-point-run' | 'entry-point-build'
- type JavaScriptLoader = 'jsx' | 'js' | 'ts' | 'tsx'
- type Loader = 'js' | 'jsx' | 'ts' | 'tsx' | 'json' | 'toml' | 'file' | 'napi' | 'wasm' | 'text' | 'css' | 'html'
https://bun.com/docs/bundler/loaders
- type MacroMap = Record<string, Record<string, string>>
This lets you use macros as regular imports
{ "react-relay": { "graphql": "bun-macro-relay/bun-macro-relay.tsx" } }
- type MaybePromise<T> = T | Promise<T>
- type MessageEvent<T = any> = Bun.__internal.UseLibDomIfAvailable<'MessageEvent', BunMessageEvent<T>>
- type MessageEventSource = Bun.__internal.UseLibDomIfAvailable<'MessageEventSource', undefined>
- type MessageListener = (message: unknown, sendHandle: unknown) => void
- type MultipleResolveType = 'resolve' | 'reject'
- type NullSubprocess = Subprocess<'ignore' | 'inherit' | null | undefined, 'ignore' | 'inherit' | null | undefined, 'ignore' | 'inherit' | null | undefined>
  Utility type for a subprocess with stdin, stdout, and stderr all set to `null` or similar.
- type NullSyncSubprocess = SyncSubprocess<'ignore' | 'inherit' | null | undefined, 'ignore' | 'inherit' | null | undefined>
  Utility type for a SyncSubprocess with both stdout and stderr set to `null` or similar.
- type OnBeforeParseCallback =
- type OnEndCallback = (result: BuildOutput) => void | Promise<void>
- type OnLoadCallback = (args: OnLoadArgs) => OnLoadResult | Promise<OnLoadResult>
- type OnLoadResult = OnLoadResultSourceCode | OnLoadResultObject | undefined | void
- type OnResolveCallback = (args: OnResolveArgs) => OnResolveResult | Promise<OnResolveResult | undefined | null> | undefined | null
- type OnStartCallback = () => void | Promise<void>
- type PipedSubprocess = Subprocess<'pipe', 'pipe', 'pipe'>
  Utility type for a subprocess with stdin, stdout, and stderr all set to `"pipe"`. A combination of ReadableSubprocess and WritableSubprocess.
- type ReadableStreamController<T> = ReadableStreamDefaultController<T>
- type ReadableStreamReader<T> = ReadableStreamDefaultReader<T>
- type ReadableSubprocess = Subprocess<any, 'pipe', 'pipe'>
  Utility type for a subprocess with both stdout and stderr set to `"pipe"`.
- type ReadableSyncSubprocess = SyncSubprocess<'pipe', 'pipe'>
  Utility type for a SyncSubprocess with both stdout and stderr set to `"pipe"`.
- type RejectionHandledListener = (promise: Promise<unknown>) => void
- type ServerWebSocketSendStatus = number
A status that represents the outcome of a sent message.
- if 0, the message was dropped.
- if -1, there is backpressure of messages.
- if >0, it represents the number of bytes sent.
```ts
const status = ws.send("Hello!");
if (status === 0) {
  console.log("Message was dropped");
} else if (status === -1) {
  console.log("Backpressure was applied");
} else {
  console.log(`Success! Sent ${status} bytes`);
}
```
- type ShellExpression = { toString(): string } | ShellExpression[] | string | { raw: string } | Subprocess<SpawnOptions.Writable, SpawnOptions.Readable, SpawnOptions.Readable> | SpawnOptions.Readable | SpawnOptions.Writable | ReadableStream
- type SignalsListener = (signal: NodeJS.Signals) => void
- type StringLike = string | { toString(): string }
- type StringOrBuffer = string | NodeJS.TypedArray | ArrayBufferLike
- type SupportedCryptoAlgorithms = 'blake2b256' | 'blake2b512' | 'blake2s256' | 'md4' | 'md5' | 'ripemd160' | 'sha1' | 'sha224' | 'sha256' | 'sha384' | 'sha512' | 'sha512-224' | 'sha512-256' | 'sha3-224' | 'sha3-256' | 'sha3-384' | 'sha3-512' | 'shake128' | 'shake256'
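Each of these names identifies a hash algorithm Bun can compute. A quick sketch using `node:crypto` (which Bun also implements) with one algorithm name and the `hex` `DigestEncoding`; Bun's native equivalent would be `new Bun.CryptoHasher("sha256")`:

```javascript
// Sketch: one SupportedCryptoAlgorithms name ("sha256") with a hex digest,
// via node:crypto, which Bun also implements.
import { createHash } from "node:crypto";

const digest = createHash("sha256").update("hello").digest("hex");
console.log(digest);
// "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
```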
- type Target = 'bun' | 'node' | 'browser'
- type TimerHandler = (...args: any[]) => void
- type Transferable = ArrayBuffer | MessagePort
- type UncaughtExceptionOrigin = 'uncaughtException' | 'unhandledRejection'
- type WarningListener = (warning: Error) => void
- type WebSocketCompressor = 'disable' | 'shared' | 'dedicated' | '3KB' | '4KB' | '8KB' | '16KB' | '32KB' | '64KB' | '128KB' | '256KB'
Compression options for WebSocket messages.
- type WebSocketOptions = WebSocketOptionsProtocolsOrProtocol & WebSocketOptionsTLS & WebSocketOptionsHeaders
  Constructor options for the `Bun.WebSocket` client.
- type WebSocketOptionsHeaders =
- type WebSocketOptionsProtocolsOrProtocol = { protocols: string | string[] } | { protocol: string }
- type WebSocketOptionsTLS =
- type WebSocketReadyState = 0 | 1 | 2 | 3
  A state that represents whether a WebSocket is connected.
  - `WebSocket.CONNECTING` is `0`: the connection is pending.
  - `WebSocket.OPEN` is `1`: the connection is established and `send()` is possible.
  - `WebSocket.CLOSING` is `2`: the connection is closing.
  - `WebSocket.CLOSED` is `3`: the connection is closed or couldn't be opened.
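The four states can be mapped to their names with a small helper; the function and lookup table below are illustrative, not part of Bun's API:

```javascript
// Hypothetical helper: translate a WebSocketReadyState (0-3) into its label.
// Indices match the standard WebSocket constants CONNECTING..CLOSED.
const READY_STATES = ["CONNECTING", "OPEN", "CLOSING", "CLOSED"];

function describeReadyState(state) {
  return READY_STATES[state] ?? `UNKNOWN (${state})`;
}

console.log(describeReadyState(1)); // "OPEN"
console.log(describeReadyState(3)); // "CLOSED"
```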
- type WorkerType = 'classic' | 'module'
- type WritableSubprocess = Subprocess<'pipe', any, any>
  Utility type for a subprocess with stdin set to `"pipe"`.
- type XMLHttpRequestBodyInit = Blob | BufferSource | FormData | URLSearchParams | string