Boris Mann for Fission

Originally published at blog.fission.codes

Fission API in ipfs-deploy

We've had our eye on Agent of User's great ipfs-deploy tool for a while now. It's a quick way to get a static website served up through a regular Web2 domain name, and it automates both IPFS uploading/pinning and DNS updates.

ipfs-shipyard / ipfs-deploy

Zero-Config CLI to Deploy Static Websites to IPFS

ipfs-deploy

Upload static website to IPFS pinning services and optionally update DNS.

The goal of ipfs-deploy is to make it as easy as possible to deploy a static website to IPFS.

Table of Contents

  1. Install
    1. No install
  2. Usage
    1. Supported Pinning Services
    2. Supported DNS Services
  3. API
  4. Security
  5. Contributing
    1. Contributors
    2. Add a Pinning Service
    3. Add a DNS Provider
  6. Users
  7. License

Install

npm install -g ipfs-deploy

Or

yarn global add ipfs-deploy

You can call it either as ipd or as ipfs-deploy:

ipd public/
ipfs-deploy public/

No install

You can run it directly with npx without needing to install anything:

npx ipfs-deploy _site

It will deploy to a public pinning service and give you a link to ipfs.io/ipfs/your-hash so you can check it out.

Usage

You can get started just by typing ipd and it will use smart defaults. By default, it deploys to Infura, which doesn't need signup, and you'll get…

Now that we've got our own web API that supports IPFS, we have a PR to add Fission support to ipfs-deploy. Thanks Daniel for taking this on!

"Adding support prompted us to add another feature to our web api: manipulation of InterPlanetary Linked Data (IPLD) nodes." – Daniel Holmgren

Right now we're using this ourselves for some website experiments, leaning on the Cloudflare integration in ipfs-deploy to automate DNS updates. DNS Automation is something that we're building into the Fission Suite directly, as part of our "batteries included" approach.
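
Under the hood, that DNS update is a DNSLink record: a TXT entry that maps a domain name to an IPFS path. As a rough illustration (the domain and CID here are placeholders):

_dnslink.example.com. 300 IN TXT "dnslink=/ipfs/QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn"

You can check the record from the command line with dig +short TXT _dnslink.example.com.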

It's also worth noting that ipfs-deploy works without a local IPFS node. The direction we're taking with the Fission tools assumes we can run a local IPFS node everywhere our CLI can be installed, so we'll be leaning into "native" IPFS protocol functionality. We'll be releasing an alpha of our Fission CLI shortly so you can check out our approach.
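
To give a sense of what "native" publishing looks like with a local node, here's a sketch using the plain go-ipfs CLI (not our Fission CLI; the directory and CID are illustrative):

ipfs add -r -Q public/
ipfs name publish /ipfs/<root-cid>

The first command adds the site and prints only the root CID; the second optionally publishes that CID under your node's IPNS key.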

Read Agent of User's Complete Beginner's Guide to Deploying Your First Static Website to IPFS for the full run down on using ipfs-deploy.

Skipping to the end, you can use your Fission credentials & Cloudflare API keys to run this one-liner:

ipd -p fission -d cloudflare
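
ipfs-deploy picks up credentials from environment variables (or a .env file). The variable names below are hypothetical, just to show the shape of the config; check the ipfs-deploy README for the exact names it expects:

IPFS_DEPLOY_FISSION__USERNAME=...
IPFS_DEPLOY_FISSION__PASSWORD=...
IPFS_DEPLOY_CLOUDFLARE__API_KEY=...
IPFS_DEPLOY_CLOUDFLARE__ZONE=example.com
IPFS_DEPLOY_CLOUDFLARE__RECORD=_dnslink.example.com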

Tell us about your IPFS-hosted website in the Talk forum >

Top comments (4)

agentofuser

Hey @bmann, glad you found ipfs-deploy useful :) I was just answering a question in the comments for the guide I wrote and saw your message from before about the "even free-er" unauthenticated/rate-limited option to help people get that first happy experience. Is this something you all are still considering? Cheers!

Boris Mann

Hi Helder! Not something we’re focusing on right now as we work on the Fission CLI tool. You can register right on the command line so the experience is quite smooth. Try it out and let me know what you think.

We haven’t settled on what the lowest tier is. Max 100MB individual file size and a couple of GBs of pinned files maybe? Any thoughts?

agentofuser

Hey Boris! Yeah, I saw the CLI signup a little afterwards. It's really fantastic and should help people get started without much friction. Kudos for that :)

As for the tiers, there is a recent comparison that the folks at Temporal did that might provide a good benchmark for you all:

medium.com/temporal-cloud/comparin...

Boris Mann

Yes, we saw that. We're not really thinking about this in terms of cost per GB, especially since, if the same CID is pinned, the file size could be shared across quotas.

We’re thinking large files should be a feature (since it drives bandwidth as well as backup), hence the thought about 100MB limit for a free tier.

First we need to build out some metrics and logging. Stay tuned!