Supabase Storage v3: Resumable Uploads with support for 50GB files

2023-04-12

Supabase Storage is receiving a major upgrade, implementing many of the most requested features from our users: Resumable Uploads, Quality Filters, Next.js support, and WebP support.

The key feature: Resumable Uploads! With Resumable Uploads, you can continue uploading a file from where you left off, even if you lose internet connectivity or accidentally close your browser tab while uploading.

Resumable Uploads divide the file into chunks before uploading them, emitting progress events along the way.

With this release, users on the Pro plan or higher can now upload files as large as 50GB! This substantial upgrade from the previous limit of 5GB offers even more flexibility for your file uploads.

To build this feature, we implemented Postgres advisory locks, which solved some gnarly concurrency problems. We can now handle edge cases like two clients uploading to the same location. We’ll dive into how we implemented advisory locks later in the post.

New features

Storage v3 introduces a number of new features.

More image transformations options

We introduced image resizing last Launch Week. This time, we’ve added the ability to specify quality and format filters when downloading your images. When you request images via the transform endpoint, we render them as WebP by default if the client supports it.


supabase.storage.from('bucket').download('image.jpg', {
  transform: {
    width: 800,
    height: 300,
    quality: 75,
    format: 'origin',
  },
})
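
The same transform options also work when you only need a URL rather than the bytes. A minimal sketch, assuming a hypothetical public bucket named 'bucket' and supabase-js v2's getPublicUrl:

// generate a public URL that serves a resized, recompressed rendition
const { data } = supabase.storage.from('bucket').getPublicUrl('image.jpg', {
  transform: {
    width: 200,
    height: 200,
    quality: 60,
  },
})
console.log(data.publicUrl)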

Next.js loader

You can serve images from Storage with a simple Next.js loader for the Image component. Check out our docs on how to get started.


// supabase-image-loader.js
const projectId = '<SUPABASE_PROJECT_ID>'
export default function supabaseLoader({ src, width, quality }) {
  return `https://${projectId}.supabase.co/storage/v1/render/image/public/${src}?width=${width}&quality=${
    quality || 75
  }`
}

// next.config.js
module.exports = {
  images: {
    loader: 'custom',
    loaderFile: './supabase-image-loader.js',
  },
}

// Using Next Image
import Image from 'next/image'
const MyImage = (props) => {
  return <Image src="bucket/image.png" alt="Picture of the author" width={500} height={500} />
}

Presigned upload URLs

Authenticated users can now generate presigned URLs.

These URLs can be shared with other users, who can then upload to Storage without further authorization. For example, you can generate a presigned URL on your server (ahem, Edge Function).

Shoutout to community members @abbit and @MagnusHJensen who implemented this feature on the Storage server and @Rawnly for the client library bindings 🎉.


// create a signed upload url
const filePath = 'users.txt'
const { data } = await storage.from(newBucketName).createSignedUploadUrl(filePath)

// the returned token can then be used to upload to storage
await storage.from(newBucketName).uploadToSignedUrl(filePath, data.token, file)
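
To make the server side concrete, here is a minimal sketch of an Edge Function that hands out signed upload URLs. The 'uploads' bucket name, the request shape, and the pinned std version are assumptions for illustration; the service-role client means this function, not the caller, decides who gets a URL:

// a hypothetical Edge Function: POST { "path": "users.txt" } returns a signed upload URL
import { serve } from 'https://deno.land/std@0.177.0/http/server.ts'
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'

serve(async (req) => {
  // service-role client; these env vars are injected into Edge Functions
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  )

  const { path } = await req.json()
  const { data, error } = await supabase.storage.from('uploads').createSignedUploadUrl(path)
  if (error) return new Response(error.message, { status: 400 })

  // hand the token back; the client passes it to uploadToSignedUrl()
  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' },
  })
})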

Size and file type limits per bucket

You can now restrict the size and type of objects on a per-bucket basis. These features make it easy to upload to Storage from the client directly, without requiring validation from an intermediary server.

For example, you can restrict your users to 1 MB files of type image/* when they upload their profile images:

[Image: Bucket Restrictions]
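
Besides the dashboard, these limits can be set programmatically. A sketch matching the profile-image example above, assuming a hypothetical 'avatars' bucket and that supabase-js exposes the restrictions as createBucket options:

await supabase.storage.createBucket('avatars', {
  public: false,
  fileSizeLimit: '1MB', // anything larger is rejected by the Storage API
  allowedMimeTypes: ['image/*'], // images only
})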

Deep Dive into Resumable Uploads

Let’s get into the nuts-and-bolts of how we implemented Resumable Uploads.

First, why do we need Resumable Uploads when the HTTP protocol already has a standard method for uploading files, multipart/form-data? This approach works well for small files, since the file is streamed to the server in bytes over the network. For medium to large files it becomes problematic, especially on spotty connections like mobile networks, because an interrupted upload must be restarted from the beginning.
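
For contrast, here is roughly what the single-shot approach looks like with supabase-js (bucket and file names are hypothetical): one request that has to succeed end-to-end.

const { data, error } = await supabase.storage
  .from('videos')
  .upload('clips/intro.mp4', file, { contentType: 'video/mp4' })
// if the connection drops at 99%, the whole transfer starts from zero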

TUS - Resumable Protocol

We use S3 to store your files, and S3 implements a proprietary protocol for resumable uploads. At Supabase, we support existing open source communities where possible, so instead of exposing the S3 protocol to our users, we implemented TUS (historically an acronym for Transloadit Upload Server, later renamed to The Upload Server), an open protocol for resumable file uploads. By leveraging an open protocol, developers can use existing libraries with Supabase Storage.

TUS is a powerful protocol. It’s built on top of HTTP, making it easy to integrate into your browser and mobile applications. Because of its open nature, a variety of powerful, drop-in clients and open-source libraries have been built around it. For example, at Supabase, we love Uppy.js, a multi-file uploader that speaks TUS.

Using Uppy with Supabase Storage looks like this:


import { Uppy, Dashboard, Tus } from 'https://releases.transloadit.com/uppy/v3.6.1/uppy.min.mjs'

const token = 'anon-key'
const projectId = 'your-project-ref'
const bucketName = 'avatars'
const folderName = 'foldername'
const supabaseUploadURL = `https://${projectId}.supabase.co/storage/v1/upload/resumable`

const uppy = new Uppy()
  .use(Dashboard, {
    inline: true,
    target: '#drag-drop-area',
    showProgressDetails: true,
  })
  .use(Tus, {
    endpoint: supabaseUploadURL,
    headers: {
      authorization: `Bearer ${token}`,
    },
    chunkSize: 6 * 1024 * 1024,
    allowedMetaFields: ['bucketName', 'objectName', 'contentType', 'cacheControl'],
  })

uppy.on('file-added', (file) => {
  file.meta = {
    ...file.meta,
    bucketName: bucketName,
    objectName: folderName ? `${folderName}/${file.name}` : file.name,
    contentType: file.type,
  }
})

uppy.on('complete', (result) => {
  console.log("Upload complete! We've uploaded these files:", result.successful)
})

And there you have it: with a few lines of code, you can support parallel, resumable uploads of multiple files, with progress events!
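
Those progress events are plain Uppy events. For example, you could drive your own progress bar from upload-progress:

uppy.on('upload-progress', (file, progress) => {
  // bytesUploaded / bytesTotal reflect the chunks acknowledged by the server so far
  console.log(`${file.name}: ${progress.bytesUploaded} of ${progress.bytesTotal} bytes`)
})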

Implementing TUS inside Supabase Storage

There were a few technical challenges we faced while implementing TUS in Supabase Storage.

Storage is powered by our Storage-API service, a Node.js server that interfaces with different storage backends (like AWS S3). It is fully integrated with the Supabase ecosystem, making it easy to protect files with Postgres RLS policies.

To implement the TUS protocol, we use tus-node-server, which was recently ported to TypeScript. It was missing only a few features we needed:

  • Ability to limit the upload to files of a certain size
  • Ability to run multiple instances of TUS (more on this later)
  • Ability to expire upload URLs after a certain amount of time

We will be contributing these features back to TUS with discussions and PRs after Launch Week.

Scaling TUS

One of the biggest challenges we faced was scaling TUS by running multiple instances of the server behind a load balancer. The protocol divides the file into chunks, and each chunk can be routed to an arbitrary server, so different chunks of the same file may be processed by different servers. This can lead to corrupted files, with multiple servers trying to buffer the same file to S3 concurrently.

The TUS documentation suggests two workarounds:

  1. Use sticky sessions to direct the client to the same server where the upload was originally started.
  2. Implement some form of distributed locking to ensure exclusive access to the storage backend.

Option 1 would have skewed the even distribution of requests across servers, so we went with option 2: distributed locking. Storage uses Postgres as a database, a queue, and now a lock manager.

Enter Postgres Advisory Locks

Postgres advisory locks offer a way to define locking behaviour for resources that live outside the database. They are called advisory locks because Postgres does not enforce their use - it is up to the application to acquire and release them when accessing the shared resource. In our case, the shared resource is an object in S3, and advisory locks mediate concurrent operations on the same object.


const key = `/bucket-name/folder/bunny.jpg`
const hashedKey = hash(key)

await db.withTransaction(async () => {
  // try acquiring a transactional advisory lock
  // these locks are automatically released at the end of every transaction
  await db.run('SELECT pg_advisory_xact_lock(?)', hashedKey)

  // the current server can upload to s3 at the given key
  await uploadObject()

  if (isLastChunk) {
    // storage.objects stores the object metadata of all objects.
    // It doubles up as a way to enforce authorization:
    // if a user is able to insert into this table, they can upload.
    await db.run('insert into storage.objects(..) values(..)')
  }
})

// the advisory lock is automatically released at this point

With advisory locks, we’ve been able to utilize Postgres as a key part of the Supabase Stack to solve difficult concurrency problems.
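
One practical detail from the snippet above: pg_advisory_xact_lock is keyed by integers, not strings, which is why the object key is hashed first. Postgres can also derive the key itself; a sketch reusing the db.run helper from above, with hashtext(), a built-in Postgres string-hash function:

// inside the same transaction as the upload:
// hashtext() maps the text key to an integer lock key
await db.run('SELECT pg_advisory_xact_lock(hashtext(?))', key)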

Roll out

Because this is a major update, we’re rolling it out gradually over the next month. You will receive a notification in your dashboard when the feature is available for your project. Reach out to us if you want early access.

Coming up next

We’ve got an exciting roadmap for the next few Storage releases:

  • Presigned upload URLs for TUS
  • Increasing max file size limit to 500 GB
  • Transform images stored outside Supabase Storage
  • Smart CDN v2 with an even higher cache hit rate

Reach out on Twitter or Discord to share anything else you need to build amazing products.
