Bun’s S3 API is fast

*Benchmark: Bun v1.1.44 (left) vs. Node.js v23.6.0 (right).*

Bun’s S3 API is designed to feel like fetch’s `Response` and `Blob` APIs (like Bun’s local filesystem APIs). It works with:
- AWS S3
- Cloudflare R2
- DigitalOcean Spaces
- MinIO
- Backblaze B2
- …and any other S3-compatible storage service
Basic Usage
There are several ways to interact with Bun’s S3 API.

Bun.S3Client & Bun.s3

`Bun.s3` is equivalent to `new Bun.S3Client()`, relying on environment variables for credentials. To explicitly set credentials, pass them to the `Bun.S3Client` constructor.
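For example, using the default client (the key name below is a placeholder):

```ts
import { s3 } from "bun";

// `s3` is the default client; credentials come from S3_* / AWS_* env vars
const file = s3.file("my-file.txt");
const contents = await file.text();
```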
Working with S3 Files
The `file` method in `S3Client` returns a lazy reference to a file on S3. Like `Bun.file(path)`, the `S3Client`’s `file` method is synchronous. It makes zero network requests until you call a method that depends on one.
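A sketch of this laziness (the key name is a placeholder):

```ts
import { s3 } from "bun";

// No network request happens here — `file` is just a lazy reference
const file = s3.file("logs/2024-01-01.txt");

// The request only happens once you call a method that needs one
const exists = await file.exists();
```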
Reading files from S3
If you’ve used the `fetch` API, you’re familiar with the `Response` and `Blob` APIs. `S3File` extends `Blob`, so the same methods that work on `Blob` also work on `S3File`.
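A sketch of the read helpers (the key name is a placeholder):

```ts
import { s3 } from "bun";

const file = s3.file("data/config.json");

const text = await file.text();          // string
const json = await file.json();          // parsed JSON
const bytes = await file.bytes();        // Uint8Array
const buffer = await file.arrayBuffer(); // ArrayBuffer
```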
Memory optimization
Methods like `text()`, `json()`, `bytes()`, or `arrayBuffer()` avoid duplicating the string or bytes in memory when possible.

If the text happens to be ASCII, Bun directly transfers the string to JavaScriptCore (the engine) without transcoding and without duplicating the string in memory. When you use `.bytes()` or `.arrayBuffer()`, Bun also avoids duplicating the bytes in memory.

These helper methods not only simplify the API, they also make it faster.
Writing & uploading files to S3
Writing to S3 is just as simple.

Working with large files (streams)
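Both one-shot writes and incremental uploads go through the same interface; a sketch, with placeholder key names:

```ts
import { s3 } from "bun";

// One-shot write — Bun handles multipart uploads for large payloads
await s3.write("backup.bin", new Uint8Array(10 * 1024 * 1024));

// Incremental upload with a writer
const writer = s3.file("logs/app.log").writer();
writer.write("first chunk\n");
writer.write("second chunk\n");
await writer.end();
```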
Bun automatically handles multipart uploads for large files and provides streaming capabilities. The same API that works for local files also works for S3 files.

Presigning URLs
When your production service needs to let users upload files, it’s often more reliable for the user to upload directly to S3 instead of your server acting as an intermediary. To facilitate this, you can presign URLs for S3 files. This generates a URL with a signature that allows a user to securely upload that specific file to S3, without exposing your credentials or granting them unnecessary access to your bucket. By default, Bun generates a `GET` URL that expires in 24 hours. Bun attempts to infer the content type from the file extension; if inference is not possible, it defaults to `application/octet-stream`.
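A sketch of presigning (the key name is a placeholder):

```ts
import { s3 } from "bun";

// Generates a signed GET URL (the default), valid for 24 hours
const downloadUrl = s3.file("uploads/avatar.png").presign();

// Or a PUT URL that lets a client upload this exact key directly
const uploadUrl = s3.file("uploads/avatar.png").presign({ method: "PUT" });
```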
Setting ACLs
To set an ACL (access control list) on a presigned URL, pass the `acl` option:
| ACL | Explanation |
|---|---|
| `"public-read"` | The object is readable by the public. |
| `"private"` | The object is readable only by the bucket owner. |
| `"public-read-write"` | The object is readable and writable by the public. |
| `"authenticated-read"` | The object is readable by the bucket owner and authenticated users. |
| `"aws-exec-read"` | The object is readable by the AWS account that made the request. |
| `"bucket-owner-read"` | The object is readable by the bucket owner. |
| `"bucket-owner-full-control"` | The object is readable and writable by the bucket owner. |
| `"log-delivery-write"` | The object is writable by AWS services used for log delivery. |
Expiring URLs
To set an expiration time for a presigned URL, pass the `expiresIn` option.
method
To set the HTTP method for a presigned URL, pass the `method` option.
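These options combine; a sketch with a placeholder key:

```ts
import { s3 } from "bun";

// A PUT URL that expires in 10 minutes and marks the object public-read
const url = s3.file("uploads/report.pdf").presign({
  method: "PUT",
  expiresIn: 600, // seconds
  acl: "public-read",
});
```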
new Response(S3File)
To quickly redirect users to a presigned URL for an S3 file, pass an `S3File` instance to a `Response` object as the body.
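For example, inside a `Bun.serve` handler (the key name is hypothetical):

```ts
import { s3 } from "bun";

Bun.serve({
  fetch(req) {
    // Responds with a 302 redirect to a presigned URL for this file
    return new Response(s3.file("videos/intro.mp4"));
  },
});
```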
Support for S3-Compatible Services
Bun’s S3 implementation works with any S3-compatible storage service. Just specify the appropriate endpoint.

Using Bun’s S3Client with AWS S3

AWS S3 is the default. You can also pass a `region` option instead of an `endpoint` option for AWS S3.
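A sketch, assuming placeholder credentials and bucket name:

```ts
import { S3Client } from "bun";

const client = new S3Client({
  region: "us-west-2", // no endpoint needed for AWS S3
  bucket: "my-bucket",
  accessKeyId: process.env.S3_ACCESS_KEY_ID,
  secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
});
```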
Using Bun’s S3Client with Google Cloud Storage
To use Bun’s S3 client with Google Cloud Storage, set `endpoint` to `"https://storage.googleapis.com"` in the `S3Client` constructor.
Using Bun’s S3Client with Cloudflare R2
To use Bun’s S3 client with Cloudflare R2, set `endpoint` to the R2 endpoint in the `S3Client` constructor. The R2 endpoint includes your account ID.
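A sketch, where `<account-id>` and the bucket name are placeholders:

```ts
import { S3Client } from "bun";

const r2 = new S3Client({
  // <account-id> stands in for your Cloudflare account ID
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  bucket: "my-bucket",
  accessKeyId: process.env.S3_ACCESS_KEY_ID,
  secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
});
```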
Using Bun’s S3Client with DigitalOcean Spaces
To use Bun’s S3 client with DigitalOcean Spaces, set `endpoint` to the DigitalOcean Spaces endpoint in the `S3Client` constructor.
Using Bun’s S3Client with MinIO
To use Bun’s S3 client with MinIO, set `endpoint` to the URL that MinIO is running on in the `S3Client` constructor.
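A sketch, assuming MinIO on its default local port and placeholder credentials:

```ts
import { S3Client } from "bun";

const minio = new S3Client({
  endpoint: "http://localhost:9000", // wherever MinIO is running
  bucket: "my-bucket",
  accessKeyId: process.env.S3_ACCESS_KEY_ID,
  secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
});
```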
Using Bun’s S3Client with Supabase

To use Bun’s S3 client with Supabase, set `endpoint` to the Supabase endpoint in the `S3Client` constructor. The Supabase endpoint includes your account ID and the /storage/v1/s3 path. Make sure to turn on “Enable connection via S3 protocol” in the Supabase dashboard at https://supabase.com/dashboard/project/<account-id>/settings/storage, and set the region shown in the same section.
Using Bun’s S3Client with S3 Virtual Hosted-Style endpoints
When using an S3 virtual hosted-style endpoint, set the `virtualHostedStyle` option to `true`. If no endpoint is provided, Bun infers the AWS S3 endpoint from the `region` and `bucket` options; if no region is provided, it defaults to `us-east-1`. If you provide an endpoint, there is no need to provide the bucket name.
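A sketch with a placeholder bucket baked into the hostname:

```ts
import { S3Client } from "bun";

const client = new S3Client({
  virtualHostedStyle: true,
  // With a virtual hosted-style endpoint, the bucket ("my-bucket" here)
  // is part of the hostname, so no separate `bucket` option is needed
  endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com",
  accessKeyId: process.env.S3_ACCESS_KEY_ID,
  secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
});
```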
Credentials
Credentials are one of the hardest parts of using S3, and we’ve tried to make it as easy as possible. By default, Bun reads the following environment variables for credentials.

| Option name | Environment variable |
|---|---|
| `accessKeyId` | `S3_ACCESS_KEY_ID` |
| `secretAccessKey` | `S3_SECRET_ACCESS_KEY` |
| `region` | `S3_REGION` |
| `endpoint` | `S3_ENDPOINT` |
| `bucket` | `S3_BUCKET` |
| `sessionToken` | `S3_SESSION_TOKEN` |
If the `S3_*` environment variable is not set, Bun will also check for the corresponding `AWS_*` environment variable, for each of the above options.

| Option name | Fallback environment variable |
|---|---|
| `accessKeyId` | `AWS_ACCESS_KEY_ID` |
| `secretAccessKey` | `AWS_SECRET_ACCESS_KEY` |
| `region` | `AWS_REGION` |
| `endpoint` | `AWS_ENDPOINT` |
| `bucket` | `AWS_BUCKET` |
| `sessionToken` | `AWS_SESSION_TOKEN` |
These environment variables are read from `.env` files or from the process environment at initialization time (`process.env` is not used for this).
These defaults are overridden by the options you pass to `s3.file(credentials)`, `new Bun.S3Client(credentials)`, or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your `.env` file and then pass `bucket: "my-bucket"` to the `s3.file()` function without having to specify all the credentials again.
S3Client objects

When you’re not using environment variables, or are using multiple buckets, you can create an `S3Client` object to explicitly set credentials.
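A sketch; every value below is a placeholder:

```ts
import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  bucket: "my-bucket",
  endpoint: "https://s3.us-east-1.amazonaws.com",
});
```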
S3Client.prototype.write

To upload or write a file to S3, call `write` on the `S3Client` instance.

S3Client.prototype.delete

To delete a file from S3, call `delete` on the `S3Client` instance.

S3Client.prototype.exists

To check if a file exists in S3, call `exists` on the `S3Client` instance.
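The three instance methods together, with placeholder bucket and key names:

```ts
import { S3Client } from "bun";

const client = new S3Client({ bucket: "my-bucket" });

await client.write("greeting.txt", "Hello World"); // upload
const found = await client.exists("greeting.txt"); // check existence
await client.delete("greeting.txt");               // remove
```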
S3File

`S3File` instances are created by calling the `S3Client` instance method or the `s3.file()` function. Like `Bun.file()`, `S3File` instances are lazy. They don’t refer to something that necessarily exists at the time of creation. That’s why all the methods that don’t involve network requests are fully synchronous.
Like `Bun.file()`, `S3File` extends `Blob`, so all the methods that are available on `Blob` are also available on `S3File`. The same API for reading data from a local file is also available for reading data from S3.
| Method | Output |
|---|---|
| `await s3File.text()` | `string` |
| `await s3File.bytes()` | `Uint8Array` |
| `await s3File.json()` | `JSON` |
| `await s3File.stream()` | `ReadableStream` |
| `await s3File.arrayBuffer()` | `ArrayBuffer` |
Using `S3File` instances with `fetch()`, `Response`, and other Web APIs that accept `Blob` instances just works.
Partial reads with slice
To read a partial range of a file, you can use the `slice` method. Under the hood, this uses the HTTP `Range` header to request only the bytes you want. This `slice` method is the same as `Blob.prototype.slice`.
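A sketch of a partial read (the key name is a placeholder):

```ts
import { s3 } from "bun";

const file = s3.file("logs/huge.log");

// Read only the first kilobyte — sent as an HTTP Range request
const firstKB = await file.slice(0, 1024).text();
```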
Deleting files from S3
To delete a file from S3, you can use the `delete` method. `delete` is the same as `unlink`.
Error codes
When Bun’s S3 API throws an error, it will have a `code` property that matches one of the following values:

- `ERR_S3_MISSING_CREDENTIALS`
- `ERR_S3_INVALID_METHOD`
- `ERR_S3_INVALID_PATH`
- `ERR_S3_INVALID_ENDPOINT`
- `ERR_S3_INVALID_SIGNATURE`
- `ERR_S3_INVALID_SESSION_TOKEN`

When the S3 service itself returns an error, it will be an `S3Error` instance (an `Error` instance with the name `"S3Error"`).
S3Client static methods

The `S3Client` class provides several static methods for interacting with S3.

S3Client.write (static)

To write data directly to a path in the bucket, you can use the `S3Client.write` static method. This is equivalent to calling `new S3Client(credentials).write("my-file.txt", "Hello World")`.
S3Client.presign (static)

To generate a presigned URL for an S3 file, you can use the `S3Client.presign` static method. This is equivalent to calling `new S3Client(credentials).presign("my-file.txt", { expiresIn: 3600 })`.
S3Client.list (static)

To list some or all (up to 1,000) objects in a bucket, you can use the `S3Client.list` static method. This is equivalent to calling `new S3Client(credentials).list()`.
S3Client.exists (static)

To check if an S3 file exists, you can use the `S3Client.exists` static method. The same check is also available via `exists()` on `S3File` instances.
S3Client.size (static)

To quickly check the size of an S3 file without downloading it, you can use the `S3Client.size` static method. This is equivalent to calling `new S3Client(credentials).size("my-file.txt")`.
S3Client.stat (static)

To get the size, etag, and other metadata of an S3 file, you can use the `S3Client.stat` static method.
S3Client.delete (static)

To delete an S3 file, you can use the `S3Client.delete` static method.
s3:// protocol

To make it easier to use the same code for local files and S3 files, the `s3://` protocol is supported in `fetch` and `Bun.file()`.

You can also pass `s3` options to the `fetch` and `Bun.file` functions.
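A sketch of both forms; the bucket and key names are placeholders:

```ts
// The same key, read via fetch() and via Bun.file()
const viaFetch = await (await fetch("s3://my-bucket/my-file.txt")).text();
const viaFile = await Bun.file("s3://my-bucket/my-file.txt").text();

// Credentials can also be passed per-call via the `s3` option
const res = await fetch("s3://my-bucket/my-file.txt", {
  s3: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
  },
});
```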
UTF-8, UTF-16, and BOM (byte order mark)
Like `Response` and `Blob`, `S3File` assumes UTF-8 encoding by default.

When calling one of the `text()` or `json()` methods on an `S3File`:

- When a UTF-16 byte order mark (BOM) is detected, it will be treated as UTF-16. JavaScriptCore natively supports UTF-16, so it skips the UTF-8 transcoding process (and strips the BOM). This is mostly good, but it does mean that if you have invalid surrogate pairs in your UTF-16 string, they will be passed through to JavaScriptCore (same as source code).
- When a UTF-8 BOM is detected, it gets stripped before the string is passed to JavaScriptCore, and invalid UTF-8 codepoints are replaced with the Unicode replacement character (`\uFFFD`).
- UTF-32 is not supported.