
Storage

Media uploads (images, videos, documents) use local file storage by default. For production, you should use S3-compatible object storage so files persist across deployments and serverless invocations.

When S3 activates

The S3 storage adapter loads automatically when all four required env vars are set:

Variable                Description
--------------------    ----------------------------
S3_BUCKET               Bucket name
S3_ACCESS_KEY_ID        Access key
S3_SECRET_ACCESS_KEY    Secret key
S3_REGION               AWS region (e.g. us-east-1)

If any of these are missing, the app falls back to local file storage.
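The activation rule above can be sketched as a small predicate. This is an illustrative sketch only, not the project's actual adapter-loading code; the function name is hypothetical:

```typescript
// The four env vars that must all be present for the S3 adapter to load.
const REQUIRED_VARS = [
  'S3_BUCKET',
  'S3_ACCESS_KEY_ID',
  'S3_SECRET_ACCESS_KEY',
  'S3_REGION',
] as const

// Hypothetical helper: S3 activates only when every required
// variable is set and non-empty; otherwise local storage is used.
function s3IsConfigured(env: Record<string, string | undefined>): boolean {
  return REQUIRED_VARS.every((name) => Boolean(env[name]))
}

// One missing (or empty) variable means fallback to local storage.
console.log(
  s3IsConfigured({
    S3_BUCKET: 'my-bucket',
    S3_ACCESS_KEY_ID: 'AKIA...',
    S3_SECRET_ACCESS_KEY: 'secret',
    S3_REGION: 'us-east-1',
  }),
) // true
console.log(s3IsConfigured({ S3_BUCKET: 'my-bucket' })) // false
```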

Optional

Variable                Default    Description
--------------------    -------    ----------------------------
S3_PREFIX_ROOTFOLDER    media      Path prefix inside the bucket. All uploaded files are stored under this prefix.

Why S3 in production

  • Railway — the filesystem is ephemeral. Files uploaded at runtime are lost when the service redeploys or restarts.
  • Vercel — serverless functions have no persistent filesystem. Files written during a request are not available in subsequent requests.
  • Any containerized deploy — container images don’t include runtime-uploaded files.

Without S3, media uploads will work during the current deployment but disappear after the next deploy or restart.

Behavior

  • Development — both local and S3 storage work. Files are stored locally and in S3 (if configured).
  • Production — when S3 is active and NODE_ENV=production, local storage is disabled. Files are stored only in S3.

Compatible providers

The S3 adapter uses the AWS SDK and works with any S3-compatible service:

Provider               Notes
-------------------    ----------------------------
AWS S3                 The original. Set S3_REGION to your bucket’s region.
Cloudflare R2          S3-compatible, no egress fees. Set S3_REGION to auto. You may need to set an endpoint URL via additional config.
DigitalOcean Spaces    S3-compatible. Region format: nyc3, ams3, etc.
MinIO                  Self-hosted, S3-compatible. Good for local dev if you want to test S3 without AWS.
Backblaze B2           S3-compatible API. Low cost.

For providers that use a custom endpoint (e.g. R2, Spaces), you may need to extend the S3 configuration in src/domains/cms/payload/plugins/storage-s3.ts.
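For custom-endpoint providers, the extension typically amounts to passing an endpoint (and often path-style addressing) through to the underlying AWS SDK client. A sketch of what that might look like, assuming the project uses the @payloadcms/storage-s3 plugin — the actual option names in storage-s3.ts may differ, and S3_ENDPOINT is a hypothetical extra env var not defined elsewhere in this document:

```ts
import { s3Storage } from '@payloadcms/storage-s3'

// Sketch only — adapt to the actual configuration in
// src/domains/cms/payload/plugins/storage-s3.ts.
export const storagePlugin = s3Storage({
  collections: { media: true },
  bucket: process.env.S3_BUCKET!,
  config: {
    region: process.env.S3_REGION, // 'auto' for Cloudflare R2
    endpoint: process.env.S3_ENDPOINT, // hypothetical: R2/Spaces endpoint URL
    forcePathStyle: true, // often required by MinIO and similar services
    credentials: {
      accessKeyId: process.env.S3_ACCESS_KEY_ID!,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
    },
  },
})
```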

Setup steps

1. Create a bucket

In your S3 provider’s dashboard, create a new bucket. Note the bucket name and region.

2. Create credentials

Create an IAM user (or service account) with permissions to read and write to the bucket. At minimum, the user needs:

  • s3:PutObject
  • s3:GetObject
  • s3:DeleteObject
  • s3:ListBucket
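On AWS, the permissions above correspond to an IAM policy along these lines (the bucket name is a placeholder; note that s3:ListBucket applies to the bucket ARN itself, while the object actions apply to objects under it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket-name"
    }
  ]
}
```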

Copy the access key ID and secret access key.

3. Set environment variables

S3_BUCKET=my-bucket-name
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=your-secret-key
S3_REGION=us-east-1
S3_PREFIX_ROOTFOLDER=media   # optional, defaults to "media"

4. Test

Upload a file via the Payload admin (Media collection). If S3 is configured correctly, the file will be stored in your bucket under the configured prefix.