Adam Nathaniel Davis
Loading Images With React/JavaScript

[NOTE: The live web app that encompasses this functionality can be found here: https://www.paintmap.studio. All of the underlying code for that site can be found here: https://github.com/bytebodger/color-map.]

In the first article in this series, I talked about the challenges involved with capturing digital color codes from real-world objects. (Or, as it pertains to this tutorial, real-world samples of paint.) But once I had a reasonable digital representation of every paint in my inventory, I then had to set about actually grabbing the images I wanted to manipulate.


Loading the image

I debated whether to write this article, because uploading files is a pretty basic task in a developer's toolbox. But most of the tutorials you see on processing files assume that you're actually uploading the image to some server.

But in this case, we're building a React app and we don't have a particular need to save the images on the server. We just need to grab them so they can be handled programmatically. So I'm gonna show how I do that here.

This is a vastly stripped-down version of the UI component that I use on Paint Map Studio:

/* UI.js */

import { createContext, useRef, useState } from 'react';
import { IndexContainer } from './IndexContainer';

export const UIState = createContext({});

export const UI = props => {
   const blob = useRef(null);
   const file = useRef(null);
   const [stats, setStats] = useState({});

   return <>
      <UIState.Provider value={{
         blob,
         file,
         setStats,
         stats,
      }}>
         <IndexContainer/>
         <div>
            <canvas id={'canvas'}></canvas>
         </div>
      </UIState.Provider>
   </>;
};

The entire app lives under this UI component. I'll only note a few points here:

  1. I'm using context to persist variables and share state. That's why there are refs for blob and file. stats will hold info about the processed image. I want those values to remain in memory so the user doesn't have to constantly reload the same file every time they want to tweak the settings.

  2. Notice that there's a <canvas> element embedded right here at the top of the app. That element will eventually hold and display our processed image.

Now let's look at the IndexContainer component:

/* IndexContainer.js */

import { createContext } from 'react';
import { Index } from './Index';

export const IndexState = createContext({});

export const IndexContainer = () => {
   return <>
      <IndexState.Provider value={{}}>
         <Index/>
      </IndexState.Provider>
   </>;
};

That looks like a pretty pointless component, right? Well, it's really just a placeholder for now. IndexContainer wraps Index because eventually I'll be using this to store all of the form variables that exist in the Index component. They'll be accessible through context. But for now, there's really nothing for this component to do, other than to call Index.

Now let's look at the Index component:

/* Index.js */

import { useRef } from 'react';
import { Button } from '@mui/material';
import { useFile } from './useFile';

// Stand-in for the input-type constants used in the full app
const inputType = { file: 'file' };

export const Index = () => {
   const selectImageInputRef = useRef(null);
   const file = useFile();

   const handleFile = (event = {}) => {
      const [source] = event.target.files;
      file.read(source);
   };

   const handleImageButton = () => {
      selectImageInputRef.current && selectImageInputRef.current.click();
   }

   return <>
      <input
         accept={'image/*'}
         className={'displayNone'}
         onChange={handleFile}
         ref={selectImageInputRef}
         type={inputType.file}
      />
      <Button
         onClick={handleImageButton}
         variant={'contained'}
      >
         Select Image
      </Button>
   </>;
};

Here we have a hidden <input> element that does the actual work of grabbing the user's selected image, and a Material UI <Button> that controls it. The selectImageInputRef ref is what allows the button's click handler to programmatically "click" the hidden <input>.

The handleFile() function is what actually starts the processing of the image file. Specifically, it calls the read() function in the useFile Hook.

So let's look at the useFile Hook:

/* useFile.js */

import { useContext } from 'react';
import { UIState } from './UI';
import { useImage } from './useImage';

export const useFile = () => {
   const image = useImage();
   const uiState = useContext(UIState);

   const read = (chosenFile = {}) => {
      const fileReader = new FileReader();
      fileReader.onloadend = event => {
         uiState.file.current = chosenFile;
         uiState.blob.current = event.target.result;
         image.create(uiState.blob.current);
      };
      try {
         fileReader.readAsDataURL(chosenFile);
      } catch (e) {
         // no file - do nothing
      }
   };

   return {
      read,
   };
};

There's also a reload() function in the finished version of the useFile Hook. But for now, we only need read(). Notice that read() saves the chosen File object and its data-URL contents into the refs that we created in the UI component.

I'm instantiating a new FileReader and then using readAsDataURL() to load the file's contents. Once loading completes, the onloadend handler fires, which in turn calls the create() function in the useImage Hook.
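
As a quick aside: if you'd rather await the result than handle it in the onloadend callback, the same readAsDataURL() logic can be wrapped in a Promise. This isn't the code from Paint Map Studio; it's just a sketch of the pattern:

// Illustrative only: wrap FileReader in a Promise so the data URL can be awaited
const readAsDataUrl = chosenFile => {
   return new Promise((resolve, reject) => {
      const fileReader = new FileReader();
      fileReader.onload = () => resolve(fileReader.result);
      fileReader.onerror = () => reject(fileReader.error);
      fileReader.readAsDataURL(chosenFile);
   });
};

// usage: const dataUrl = await readAsDataUrl(source);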

Here's a bare-bones version of the useImage Hook:

/* useImage.js */

import { useEffect, useRef } from 'react';

export const useImage = () => {
   const canvas = useRef(null);
   const context = useRef(null);
   const image = useRef(null);

   useEffect(() => {
      canvas.current = document.getElementById('canvas');
   }, []);

   const create = (src = '') => {
      const source = src === '' ? image.current.src : src;
      const newImage = new Image();
      newImage.src = source;
      newImage.onload = () => {
         image.current = newImage;
         canvas.current.width = newImage.width;
         canvas.current.height = newImage.height;
         context.current = canvas.current.getContext('2d', {alpha: false, willReadFrequently: true});
         context.current.drawImage(newImage, 0, 0);
      }
   };

   return {
      create,
   };
};

Here's where the rendering magic occurs. The source was passed in from the read() function in the useFile Hook. That source is then used to create a virtual image in memory with new Image().

Once the virtual image is loaded, the onload event fires. In that handler I use the canvas ref, which was wired up inside useEffect. Once I've created a new 2D context on that canvas, I can draw newImage into it with drawImage().

This will take the user's chosen image file and render it onscreen. At this point, all we've done is render the raw image. But this initial step is important, because now that the canvas is rendered (and resident in memory), we can start manipulating it to suit our needs.
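
To give a sense of where this is headed: once the image has been drawn into that 2D context, its raw pixel data is only a call away. The real pixelation logic comes in the next article, but as a rough sketch, something like this is now possible:

// Sketch only: read the raw RGBA values back out of the rendered canvas
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d', {alpha: false, willReadFrequently: true});

// getImageData() returns a flat array of values: [R, G, B, A, R, G, B, A, ...]
const {data} = context.getImageData(0, 0, canvas.width, canvas.height);

// e.g. the color of the top-left pixel
const [red, green, blue] = data;
console.log({red, green, blue});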


In the next installment...

Now that we've loaded the image, the next step will be to pixelate it. I'll show how to do that programmatically in the next article.
