How to Richly Preprocess your IPFS NFT Images and Metadata

See a live demo of my NFT project on the Goerli testnet here, and its Opensea listing here. You can also watch the video version on my YouTube channel.

Processing NFT Metadata

Introduction

The first step in building an NFT minting Dapp is preparing the artwork. Without the artwork, your NFT project cannot take shape.

An equally important reason for preprocessing your artwork is to generate metadata for each image. Without this metadata, your NFTs cannot be sold on any of the big secondary markets such as Opensea.

For example, the image below is an NFT in the Opensea marketplace bearing some metadata, which you can also see below.

Metadata Images

Metadata Details and Traits

The above information, including the artwork, its details, and its traits, can also be seen on IPFS, as shown in the image below.

IPFS Metadata

Subscribe to my YouTube channel to learn how to build a Web3 app from scratch. I also offer private and specialized classes for serious folks who want to learn one-on-one from a mentor. Book your Web3 classes here.

Now, let me show you how to represent the above image and metadata in your code.
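
Before writing any code, it helps to keep the end goal in mind. Below is a minimal sketch of the metadata object we will generate for every image later in this tutorial; the values here are placeholders.

// A minimal sketch of the per-image metadata this tutorial generates (placeholder values)
const exampleMetadata = {
  name: 'Adulam NFT #1',
  description: 'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  image: 'https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/1.webp',
  attributes: [
    { trait_type: 'Environment', value: 'Forest' },
    { trait_type: 'Weapon', value: 'Sword' },
    { trait_type: 'Rarity', value: 4, max_value: 10 },
  ],
}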

Setting up Project Dependencies

Open your terminal and navigate to your project directory, or create a project folder at a specific location. For example, running **mkdir preprocessor** in the terminal will create a folder called “preprocessor” at your chosen location.

Next, run the following commands on the terminal.

cd preprocessor
npm init -y
npm install sharp @faker-js/faker --save-dev

The above commands will install both the sharp and faker libraries in your project. The faker library will help us generate random information, while the sharp library will help us process each image to a specific dimension, quality, and format.
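
If you want a quick feel for what sharp does before we wire it into the engine, here is a minimal, standalone sketch; the file names are hypothetical.

const sharp = require('sharp')

// Resize a single image to 500 x 500 and convert it to webp
sharp('input.png')
  .resize(500, 500)
  .webp({ quality: 80 })
  .toFile('output.webp')
  .then((info) => console.log(info)) // format, width, height, size, etc.
  .catch((err) => console.error(err))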

Next, create a folder in your project called arts and another called outputs. Put all the images to be processed inside the “arts” folder.

With the above steps accomplished, open the project in VS Code. The project structure should look like the one below.

- preprocessor/
  - arts/
  - node_modules/
  - outputs/
  - package.json
  - package-lock.json

Great, let’s move on to coding the engine responsible for processing the images.

Prerequisites

You’ll need the following installed on your computer to complete this tutorial.

  • NodeJs
  • IPFS Desktop app
  • VS Code

Coding the Processing Engine

Create a file in the root of your project called **imagify.js** and add the code below to it.

The following steps will help you understand how this metadata processing engine works.

Importing Essential Libraries

const fs = require('fs')
const path = require('path')
const sharp = require('sharp')
const { faker } = require('@faker-js/faker')
const input = './arts'
const output = './outputs'

The fs module is NodeJs' built-in file system module. It is responsible for reading and writing files on your machine.

The path module is another built-in Node module that helps you work with directory paths. It will be instrumental in locating where our images are kept.

Sharp is the module we use for processing the images, such as resizing them and converting them to a different file type.

We'll use faker to generate random numbers.

Lastly, the input variable holds the location of the images to be processed, and the output variable points to the location where the processed images and metadata will be saved.
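
One small note before moving on: the engine writes into an images and a metadata sub-folder inside outputs, and Node will not create those for you. Either create the two sub-folders by hand, or add a small guard like the sketch below (my addition, not part of the original snippet) near the top of imagify.js.

// Make sure the output sub-folders exist before anything is written into them
for (const dir of [`${output}/images`, `${output}/metadata`]) {
  if (!fs.existsSync(dir)) fs.mkdirSync(dir, { recursive: true })
}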

Defining Essential Variables

let img_counter = 1
const imgSize = { width: 500, height: 500 }
const desired_ext = '.webp'
const base_url = 'https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/'
const attributes = {
  weapon: [
    'Stick',
    'Knife',
    'Blade',
    'Club',
    'Ax',
    'Sword',
    'Spear',
    'Gun',
    'Craft',
  ],
  environment: [
    'Space',
    'Sky',
    'Desert',
    'Forest',
    'Grassland',
    'Mountains',
    'Oceans',
    'Rainforest',
  ],
  rarity: Array.from(Array(10).keys()),
}

The above code defines important variables that we'll use while generating our metadata.

  • **img_counter** helps us number the images sequentially with each iteration.
  • **imgSize** defines the width and height of each image to be processed.
  • **desired_ext** is the file format you want your processed images to have.
  • **base_url** specifies the location where the images will be stored on IPFS; the short example after this list shows how it combines with the counter and extension.
  • **attributes** holds further information for each image's metadata.
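
Here is how those pieces combine into the image URL stored in each metadata file, before the placeholder CID is replaced:

// How base_url, the image counter, and the extension combine into an image URL
console.log(base_url + img_counter + desired_ext)
// -> https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/1.webp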

Looping Over the Images

fs.readdirSync(input).forEach((file) => {
  // Split each file name into its base name and extension
  const original_ext = path.extname(file)
  const original_file_name = path.basename(file, original_ext)

  if (['.jpg', '.jpeg', '.png', '.gif', '.webp'].includes(original_ext)) {
    // Image and metadata tasks are performed here for each file...
  }
})

In the block of code above, we use the file system module (fs) to loop through every file in the **input** location (arts). For each file, we extract its name and extension and make sure the engine only processes files with an approved image extension.
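
If the two helper variables above are unfamiliar, this is all the path module does with a file name (the name here is hypothetical):

// Splitting a file name into its base name and extension
path.extname('dragon.png') // '.png'
path.basename('dragon.png', '.png') // 'dragon'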

Performing Metadata Task

const id = img_counter
const metadata = {
  id,
  name: `Adulam NFT #${id}`,
  description:
    'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  price: 1,
  image: base_url + id + desired_ext,
  demand: faker.random.numeric({ min: 10, max: 100 }),
  attributes: [
    {
      trait_type: 'Environment',
      value: attributes.environment.sort(() => 0.5 - Math.random())[0],
    },
    {
      trait_type: 'Weapon',
      value: attributes.weapon.sort(() => 0.5 - Math.random())[0],
    },
    {
      trait_type: 'Rarity',
      value: attributes.rarity.sort(() => 0.5 - Math.random())[0],
      max_value: 10,
    },
    {
      display_type: 'date',
      trait_type: 'Created',
      value: Date.now(),
    },
    {
      display_type: 'number',
      trait_type: 'generation',
      value: 1,
    },
  ],
}

In the code block above, we supply a value for each metadata field. The environment, weapon, and rarity traits are picked randomly and dynamically on every iteration.
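
A quick note on the random picks: the sort(() => 0.5 - Math.random()) call shuffles each attribute list in place, and [0] grabs its first element, which amounts to a random pick. If you prefer something more explicit, a tiny helper like this sketch (not part of the original engine) does the same job:

// Pick a random element from a list without shuffling it
const pickRandom = (list) => list[Math.floor(Math.random() * list.length)]

pickRandom(attributes.weapon) // e.g. 'Sword'
pickRandom(attributes.environment) // e.g. 'Forest'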

Performing Image Transformation Task

if (fs.existsSync(`${input}/${original_file_name + original_ext}`)) {
  // Resize the image and save it under its new numeric name and extension
  sharp(`${input}/${original_file_name + original_ext}`)
    .resize(imgSize.width, imgSize.height)
    .toFile(`${output}/images/${id + desired_ext}`, (err, info) => {
      if (err) console.error(err)
    })

  // Write the matching JSON metadata file alongside the image
  fs.writeFileSync(`${output}/metadata/${id}.json`, JSON.stringify(metadata), {
    encoding: 'utf-8',
    flag: 'w',
  })
}

console.log(metadata)
img_counter++

In the snippet above, we use the file system module to confirm that each artwork exists, then resize it to our specified dimensions (500 x 500) with sharp. We also rename each image after the current iteration count and give it our desired extension (.webp).

Resizing and converting the images to webp optimizes the artwork dramatically. For example, in this video I ran this image preprocessing engine over 99 images totalling 111MB: the output came to 62MB with the .png extension and an astonishing 4.5MB with the .webp extension. That huge reduction translates into a big leap in the load time of a minting website built with these images.
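
If you want to verify the reduction on your own artwork, a few lines of Node will total a folder's size (this little script assumes the folders contain only files, as they do here):

const fs = require('fs')
const path = require('path')

// Total the size of every file in a folder, in megabytes
const folderSizeMB = (dir) => {
  const bytes = fs
    .readdirSync(dir)
    .reduce((total, file) => total + fs.statSync(path.join(dir, file)).size, 0)
  return bytes / (1024 * 1024)
}

console.log('arts:', folderSizeMB('./arts').toFixed(1), 'MB')
console.log('outputs/images:', folderSizeMB('./outputs/images').toFixed(1), 'MB')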

Lastly, the block of code above writes a JSON metadata file for each processed image, bearing the same numeric name and a URL pointing to the image's location. This metadata is what we'll deploy to IPFS after processing the images.

Now, run the command below to have your images transformed. Be sure you are in your project folder.

node imagify.js

At this point, we are done with our image engine, and the outputs folder should have the following file structure as the result.

- outputs/
  - images/
    - 1.webp
    - 2.webp
    - ......
  - metadata/
    - 1.json
    - 2.json
    - ......
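
Once the run finishes, you can spot-check any of the generated files straight from Node; the example below assumes an image with id 1 was processed.

const fs = require('fs')

// Read one generated metadata file back and inspect it
const sample = JSON.parse(fs.readFileSync('./outputs/metadata/1.json', 'utf-8'))
console.log(sample.name, sample.image)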

Uploading Images and Metadata to IPFS

Status screen of IPFS Desktop

IPFS stands for the InterPlanetary File System; it is a peer-to-peer, decentralized storage network. There isn't an easy way to remove data stored on IPFS, and as such, it is a near-perfect companion for blockchain applications that need to store media content.

For the easiest and least confusing route, head to the IPFS Desktop app installation page and follow the instructions specified there.

After the installation is successful, open up the IPFS app and upload the images folder FIRST, and I repeat, FIRST. The metadata will reference the images folder's CID, so the images must go up before anything else.

A unique CID (Content Identifier) string will be generated as part of the folder name; see the image below.

CID

Now, copy the images folder's CID, as seen in the image above, and use it to replace the placeholder in your **imagify.js** code. See the code below.

const base_url = "https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/" //old string
const base_url = "https://ipfs.io/ipfs/QmY1zrFibpdHQ7qcqZqq7THsqTremZYepXNWR5Au3MF1ST/" //new string

Now, run **node imagify.js** again to write the accurate location of each image into your JSON metadata. See an example of the generated JSON metadata before and after the replacement of the CID.

You can watch this video to understand how I used these images and metadata in a full NFT minting project.

NFT Minting Dapp

Before CID Replacement

{
  id: 97,
  name: 'Adulam NFT #97',
  description: 'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  price: 1,
  image: 'https://ipfs.io/ipfs/REPLACE_WITH_IPFS_CID/97.webp',
  demand: '4',
  attributes: [
    { trait_type: 'Environment', value: 'Forest' },
    { trait_type: 'Weapon', value: 'Craft' },
    { trait_type: 'Rarity', value: 4, max_value: 10 },
    {
      display_type: 'date',
      trait_type: 'Created',
      value: 1664478034024
    },
    { display_type: 'number', trait_type: 'generation', value: 1 }
  ]
}

After CID Replacement

{
  id: 97,
  name: 'Adulam NFT #97',
  description: 'A.I Arts NFTs Collection, Mint and collect the hottest NFTs around.',
  price: 1,
  image: 'https://ipfs.io/ipfs/QmY1zrFibpdHQ7qcqZqq7THsqTremZYepXNWR5Au3MF1ST/97.webp',
  demand: '7',
  attributes: [
    { trait_type: 'Environment', value: 'Mountains' },
    { trait_type: 'Weapon', value: 'Club' },
    { trait_type: 'Rarity', value: 2, max_value: 10 },
    {
      display_type: 'date',
      trait_type: 'Created',
      value: 1664478110287
    },
    { display_type: 'number', trait_type: 'generation', value: 1 }
  ]
}

Finally, as shown in the image below, upload the metadata folder to IPFS alongside the images folder.

Uploaded Folders

Fantastic, now let’s pin it on the web for the world to see. Currently, both folders are sitting on your local IPFS node (your computer); for them to be accessible worldwide, you need to use a pinning service such as Pinata.

Pinning your folders to IPFS

First, head to the Pinata pin manager and sign up if you haven’t done so before. Then click on the account icon and select API Keys. See the image below.

Pinata API Key

On the key creation page, click on create a new key and enter a name for the key. Observe the image below.

Creating a New Key

Now copy the JWT key to your clipboard. This is what we’ll use to link our IPFS Desktop with our Pinata account. See the image below.

Copying Pinata JWT token

Next, open up your IPFS Desktop application, head to the settings tab, add a new pinning service, select Pinata, and paste your JWT token into the space provided. Refer to the image below.

Setting up a Pinning Service

Fantastic, the last thing to do is to actually pin your folders to Pinata using the instructions below.

Head to the files tab, click on the three-dot menu beside your folder, and select set pinning. The dialog shown in the image below will pop up.

Select Pinata and apply; by doing so, your images folder will become accessible globally.

Confirming Global Image Accessibility

Head to this website, copy and paste your CID into the IPFS input field, and click on the cache button. This scans the publicly available IPFS gateways in search of your images. See the image below.

Public Gateway Cacher

The results in the image above show that many IPFS nodes now hold copies of the images folder, keeping it available and accessible globally even if you delete the original copy on your local node.

With the steps clearly outlined for you, also pin the metadata folder to make it publicly available online.

Now you can use any of the links in the image above as the base URL for your ERC721 tokens. See the image below.

Your Images on the IPFS
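
If your contract or frontend needs to turn that base URL into per-token metadata links, it is plain string concatenation, mirroring how the JSON files were named; the CID below is a placeholder, not the real one from this tutorial.

// Build a token's metadata URL from the pinned metadata folder's gateway link
const metadataBase = 'https://ipfs.io/ipfs/REPLACE_WITH_METADATA_CID/'
const tokenURI = (tokenId) => `${metadataBase}${tokenId}.json`

console.log(tokenURI(97))
// -> https://ipfs.io/ipfs/REPLACE_WITH_METADATA_CID/97.json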

And there you have it; that is how to prepare and upload your artwork to IPFS.

Conclusion

Sooner or later, you will need to preprocess artwork and upload it to IPFS at batch scale.

Once you understand how to process images and push them to IPFS, you can start using the technique in your own web3 projects.

The world of web3 development is vast and broad, and you might need a mentor. You can watch my free videos on my YouTube channel, or book your private web3 classes with me to speed up your learning process.

Till next time, keep crushing it!

About the Author

Gospel Darlington is a full-stack blockchain developer with 6+ years of experience in the software development industry.

By combining Software Development, writing, and teaching, he demonstrates how to build decentralized applications on EVM-compatible blockchain networks.

His stack includes JavaScript, React, Vue, Angular, Node, React Native, NextJs, Solidity, and more.

For more information about him, kindly visit and follow his page on Twitter, Github, LinkedIn, or on his website.
