Nacho Coll

# Drop-in S3 for IPFS: Pin to IPFS.NINJA with the AWS SDK You Already Use

Hi devs — I'm Nacho, part of the BWS (Blockchain Web Services) team. We just shipped IPFS.NINJA, a managed IPFS pinning service, and one of the features we're most excited about is the S3-compatible API: you can keep using the AWS SDK you already know and pin straight to IPFS, getting a permanent CID back as the ETag.

This post is a transparent walkthrough from the people who built it.

## Why an S3 surface for IPFS?

Most teams already have working code that talks to S3 (or to S3-compatible services like Filebase, R2, MinIO). When you decide to move some of that storage to IPFS for verifiability, decentralization, or NFT use cases, you usually have to learn a new API, change your upload flow, and rewrite tooling.

We wanted to remove that friction. With IPFS.NINJA's S3 endpoint, you swap the endpoint and credentials and your existing PutObject / GetObject / ListObjectsV2 / DeleteObject calls keep working — but every upload also gets pinned on IPFS and returns a content-addressed CID.

## The 30-second setup

Endpoint: https://s3.ipfs.ninja

Your IPFS.NINJA API key acts as both the access key and the secret key:

  • accessKeyId: first 12 chars of your key (e.g. bws_628bba35)
  • secretAccessKey: the full key (e.g. bws_628bba35e9e0079d9ff9c392b1b55a7b)
  • region: us-east-1 (always)
  • forcePathStyle: true
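Since the access key is just a prefix of the full key, you only ever need to store one secret. A minimal sketch of deriving both halves (`IPFS_NINJA_KEY` is a hypothetical env var name; the key shown is the docs' example value, not a live credential):

```javascript
// Derive both credential halves from the single API key:
// the access key is the first 12 characters, the secret is the full key.
const apiKey =
  process.env.IPFS_NINJA_KEY ?? "bws_628bba35e9e0079d9ff9c392b1b55a7b"; // example key

const credentials = {
  accessKeyId: apiKey.slice(0, 12),
  secretAccessKey: apiKey
};

console.log(credentials.accessKeyId); // → bws_628bba35 (for the example key)
```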
```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "https://s3.ipfs.ninja",
  credentials: {
    accessKeyId: "bws_628bba35",
    secretAccessKey: "bws_628bba35e9e0079d9ff9c392b1b55a7b"
  },
  region: "us-east-1",
  forcePathStyle: true
});

const put = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "hello.json",
  Body: JSON.stringify({ hello: "IPFS" }),
  ContentType: "application/json"
}));

console.log("CID:", put.Metadata?.cid);
// CID: QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy
```

That's the whole thing. The CID comes back in the response metadata (and in the ETag), and the file is immediately retrievable at https://ipfs.ninja/ipfs/<CID>, or from any public IPFS gateway, since pinned content is announced to the public IPFS network.
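Because retrieval is just `/ipfs/<CID>` appended to a gateway, building the URL is a one-liner. A quick sketch (`gatewayUrl` is my own helper name, not part of any SDK; ipfs.io stands in for "any public gateway"):

```javascript
// Build a gateway URL for a pinned CID. The same CID resolves on the
// service's own gateway or any public IPFS gateway, since it is
// content-addressed rather than tied to one host.
const gatewayUrl = (cid, gateway = "https://ipfs.ninja") => `${gateway}/ipfs/${cid}`;

const cid = "QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy"; // CID from the upload above

console.log(gatewayUrl(cid));                    // → https://ipfs.ninja/ipfs/QmXnny...
console.log(gatewayUrl(cid, "https://ipfs.io")); // → https://ipfs.io/ipfs/QmXnny...

// Fetching the content is then plain HTTP (requires network access):
// const body = await (await fetch(gatewayUrl(cid))).text();
```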

## Buckets = Folders

S3 buckets map 1:1 to your IPFS.NINJA folders, so the mental model is exactly the same:

| S3 Operation | IPFS.NINJA Equivalent |
| --- | --- |
| CreateBucket | Create a new folder |
| ListBuckets | List your folders |
| PutObject | Upload file into the folder |
| ListObjectsV2 | List files in the folder |
| DeleteBucket | Delete a folder and all files |

```javascript
import { CreateBucketCommand } from "@aws-sdk/client-s3";

await s3.send(new CreateBucketCommand({ Bucket: "nft-metadata" }));

await s3.send(new PutObjectCommand({
  Bucket: "nft-metadata",
  Key: "token-42.json",
  Body: JSON.stringify({ name: "My NFT #42" })
}));
```

The folders you create from the S3 API show up in your dashboard and can be managed from the REST API too — same underlying storage.

## Multipart uploads for big files

The standard AWS multipart flow works out of the box (up to 5 GB):

```javascript
import { Upload } from "@aws-sdk/lib-storage";
import fs from "fs";

const upload = new Upload({
  client: s3,
  params: {
    Bucket: "my-project",
    Key: "large-dataset.tar.gz",
    Body: fs.createReadStream("large-dataset.tar.gz"),
    ContentType: "application/gzip"
  },
  partSize: 10 * 1024 * 1024 // 10 MiB parts
});

upload.on("httpUploadProgress", (p) =>
  console.log(`Uploaded ${p.loaded} of ${p.total} bytes`)
);

const result = await upload.done();
console.log("CID:", result.ETag);
```
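For a sense of scale: with the 10 MiB part size above, even a file at the 5 GB ceiling is only a few hundred parts. A quick arithmetic sketch (`partCount` is a hypothetical helper, not an SDK function):

```javascript
// Number of parts a multipart upload needs: total size divided by
// part size, rounded up (the final part may be smaller than the rest).
const partCount = (size, partSize) => Math.ceil(size / partSize);

const partSize = 10 * 1024 * 1024; // 10 MiB, as in the Upload example
console.log(partCount(5 * 1024 ** 3, partSize)); // → 512 parts for a 5 GiB file
```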

## Python / Go also work

Because it's plain S3, anything that speaks S3 works. Here's boto3:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ipfs.ninja",
    aws_access_key_id="bws_628bba35",
    aws_secret_access_key="bws_628bba35e9e0079d9ff9c392b1b55a7b",
    region_name="us-east-1"
)

s3.put_object(
    Bucket="my-project",
    Key="data.json",
    Body=b'{"hello": "IPFS"}',
    ContentType="application/json"
)
```

Same for aws-sdk-go-v2, Rust S3 clients, s3cmd, rclone, etc.
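As one concrete example for the CLI crowd, here's a sketch of an rclone remote (the remote name `ipfsninja` is arbitrary; the keys are standard rclone S3 backend options, and the credentials are the docs' example values):

```ini
; ~/.config/rclone/rclone.conf — hypothetical remote named "ipfsninja"
[ipfsninja]
type = s3
provider = Other
endpoint = https://s3.ipfs.ninja
access_key_id = bws_628bba35
secret_access_key = bws_628bba35e9e0079d9ff9c392b1b55a7b
region = us-east-1
force_path_style = true
```

With that in place, something like `rclone copy ./site ipfsninja:my-project` should behave like any other S3 remote.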

## Honest differences from Amazon S3

We want to be upfront about the parts that don't translate. Because IPFS is content-addressed and immutable:

| Feature | Amazon S3 | IPFS.NINJA S3 |
| --- | --- | --- |
| Storage model | Mutable objects | Content-addressed (immutable CIDs) |
| Overwrite behavior | Replaces in place | Creates a new CID; old CID still works |
| Versioning | Supported | Use CIDs as version pointers |
| Presigned URLs | Supported | Use signed upload tokens instead |
| Max object size | 5 TB | 5 GB (multipart), 100 MB (single PUT) |
| Regions | Multi-region | us-east-1 only |
| ETag | MD5 | IPFS CID |

That last row is actually the magic: your ETag is a real IPFS CID you can hand to anyone, anywhere on the network.
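One practical wrinkle: many S3 clients return the ETag wrapped in double quotes. Assuming IPFS.NINJA follows that convention, a tiny helper (`cidFromETag` is my own name, not part of any SDK) normalizes it to a bare CID:

```javascript
// S3 responses conventionally quote the ETag ('"Qm..."'); strip any
// surrounding quotes so you always end up with a bare CID string.
const cidFromETag = (etag) => etag.replace(/^"+|"+$/g, "");

console.log(cidFromETag('"QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy"'));
// → QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy
console.log(cidFromETag("QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy"));
// → unchanged when the ETag is already bare
```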

## Migrating from Amazon S3 or Filebase

For Amazon S3, the diff is just the endpoint and credentials:

```diff
  const s3 = new S3Client({
+   endpoint: "https://s3.ipfs.ninja",
    credentials: {
-     accessKeyId: "AKIA...",
-     secretAccessKey: "wJalrX..."
+     accessKeyId: "bws_628bba35",
+     secretAccessKey: "bws_628bba35e9e0..."
    },
    region: "us-east-1",
+   forcePathStyle: true
  });
```

For Filebase, only the endpoint changes:

```diff
- endpoint: "https://s3.filebase.com",
+ endpoint: "https://s3.ipfs.ninja",
```

Your existing PutObject, GetObject, ListObjectsV2, DeleteObject calls work unchanged.

## Where to go next

If you're building NFT metadata pipelines, static site deploys to IPFS, or just want pinning that doesn't require running a node, this is the fastest way to wire it into existing infrastructure.

  • Free tier (Dharma): 1 GB storage, 5 GB bandwidth/month, all features included
  • Bodhi ($5/mo): 100 GB storage, 200 GB bandwidth, IPNS, dedicated gateways
  • Nirvana ($29/mo): 1 TB storage, 10 dedicated gateways, IP whitelist

Full S3 docs: ipfs.ninja/docs/api/s3-compatibility
Sign up free: ipfs.ninja

Happy to answer questions about the design, limits, or migration paths in the comments — we read everything.

— Nacho, BWS team
