Andrew Smith

How to build an image recognition app in React Native in 30 minutes

For a few months now I've been fascinated by React Native, and having already done some development in the Ionic framework I was excited by how well an app that renders through native components rather than a webview performs.

We'll be building a simple app that uses React Native's camera component (https://github.com/react-native-community/react-native-camera) to take a picture, then passes the Base64 representation of this image to the free Clarifai Predict image API (https://clarifai.com/) to get a description of what's in the image.

The Clarifai API is free, really simple to set up and use, and it will give us a description of what's in an image.

[Screenshots of the finished app]

This tutorial presumes you have NodeJS and React Native installed. If you don't then head over to https://facebook.github.io/react-native/docs/getting-started.html to get started. It also presumes you have a basic understanding of React and NodeJS.

You'll also need to grab your free Clarifai API key from https://clarifai.com/developer/account/signup

What we'll build

We'll be creating 2 React components ourselves:

  1. A camera preview component, which renders the React Native Camera component. This will handle the camera's preview and contain all the logic for identifying what's in the image.
  2. A capture button component which handles the user pressing the button to take the picture, as well as the disabled state of the button.

Let's begin

Firstly, you'll need to initialise a new React Native app.

react-native init imageRecogitionReactNativeDemo

Then cd into your new React Native project's directory, and run the following command to boot up the iOS simulator.

cd imageRecogitionReactNativeDemo
react-native run-ios

Next we'll want to install the React Native Camera component that we'll be using

npm install react-native-camera --save

Then we'll want to link our new library up

react-native link react-native-camera

You'll also want to install Clarifai, which is what we'll pass our images to in order to get the identification.

npm install clarifai

We'll also need to add an NSCameraUsageDescription to the Info.plist file, otherwise the app will crash. This is just a small description where you state how your app is going to use the camera. So add the following to your Info.plist file in the iOS folder for the project.

<key>NSCameraUsageDescription</key>
<string>This app requires the camera to take an image to be identified</string>
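
If you also want to run this on Android, react-native-camera will need the camera permission declared as well. I'm not covering Android setup in this post, but it would be something along these lines in android/app/src/main/AndroidManifest.xml (check the react-native-camera docs for the full list):

<uses-permission android:name="android.permission.CAMERA" />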

Now you're pretty much all set up, so you're ready to build the two components.

Firstly, we want to build our camera component which will hold everything else.

So create a folder called 'components' and inside this create a Camera.js file.

At the top of the page, we'll want to import React, as well as the Dimensions, Alert, StyleSheet and ActivityIndicator modules from React Native.

import React from 'react';
import { Dimensions, Alert, StyleSheet, ActivityIndicator } from 'react-native';

Then we'll want to actually import the React Native Camera module we've installed via NPM.

import { RNCamera } from 'react-native-camera';

We'll also import our Capture button component, but we'll come to that later.
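
For reference, once that file exists the import at the top of Camera.js will just be:

import CaptureButton from './CaptureButton.js';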

Set up the Camera class


export default class Camera extends React.Component {

}

Next we'll want to set up the state of our camera component, so create a constructor for the Camera class. We'll need two state variables:

  1. The text we want to show in an alert, containing what's been identified in the image (which I've called identifiedAs)
  2. A boolean value to determine whether the camera is in a loading state (for use with the activity indicator when we're identifying what's in the image).

So your constructor should look like this

constructor(props){
    super(props);

    this.state = { 
        identifiedAs: '',
        loading: false
    }
}

Inside the render function of the Camera class we'll want to add the following code, from the React Native Camera component docs. This just loads up the Camera component from react-native-camera.

<RNCamera ref={ref => {this.camera = ref;}} style={styles.preview}></RNCamera>

Now let's add the button to take the picture. For this we'll create a whole new component.

Go ahead and create a CaptureButton.js component inside your components folder.

Inside here we'll want to import the Button and TouchableHighlight components from React Native, as well as the StyleSheet module and React.

import React  from 'react';
import { StyleSheet, Button, TouchableHighlight } from 'react-native';

Then inside the render function for this class, we'll add a TouchableHighlight component (https://facebook.github.io/react-native/docs/touchablehighlight) with a Button component inside, to get the default iOS and Android styling. We'll add our own styles via the style prop, and we'll use the disabled prop, which takes a value passed down from the state of the parent Camera.js component.

<TouchableHighlight style={styles.captureButton} disabled={this.props.buttonDisabled}>
    <Button onPress={this.props.onClick} disabled={this.props.buttonDisabled} title="Capture" accessibilityLabel="Learn more about this button"/>
</TouchableHighlight>

We'll want to add a press event to this Button too, so that it knows what to do when the user presses it (i.e. take the picture and identify what's in it). For this we'll add an onPress event and give it the onClick prop passed down from the parent Camera.js component, which is a function inside Camera.js.

We'll also want to disable the button when it's been clicked, so for this again we'll use some props passed down from the Camera.js component, as it's ultimately the camera component that determines the state of whether a picture is being taken, rather than the button.
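
For reference, this is how the parent Camera.js component will end up passing those props down when we build out its render function later:

<CaptureButton buttonDisabled={this.state.loading} onClick={this.takePicture.bind(this)}/>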

Let's also add some styling for the button, just to push it up and give it a background and some rounded corners.

const styles = StyleSheet.create({
    captureButton: {
        marginBottom:30,
        width:160,
        borderRadius:10,
        backgroundColor: "white",
    }
});

Then simply add this style into the style prop of the TouchableHighlight component

style={styles.captureButton}

So overall, your CaptureButton.js should look like this

import React  from 'react';
import { StyleSheet, Button, TouchableHighlight } from 'react-native';

export default class CaptureButton extends React.Component {
    render() {
        return (
            <TouchableHighlight style={styles.captureButton} disabled={this.props.buttonDisabled}>
                <Button onPress={this.props.onClick} disabled={this.props.buttonDisabled} title="Capture" accessibilityLabel="Learn more about this button"/>
            </TouchableHighlight>
        );
    }
}

const styles = StyleSheet.create({
    captureButton: {
        marginBottom:30,
        width:160,
        borderRadius:10,
        backgroundColor: "white"
    }
});


Now heading back to your Camera.js component, your render function should be looking like this. We've added some styling for the preview area via the style prop, and we've added our own buttonDisabled prop, which sends the loading state of the camera down to the child button component. We've also added our onClick prop, bound to the takePicture() function.

render() {
    return (
        <RNCamera ref={ref => {this.camera = ref;}} style={styles.preview}>
            <CaptureButton buttonDisabled={this.state.loading} onClick={this.takePicture.bind(this)}/>
        </RNCamera>
    );
}

We'll want to add an Activity Indicator (https://facebook.github.io/react-native/docs/activityindicator) to show the user that the image is being identified.

So for this let's use React Native's Activity Indicator component, which we imported earlier.

<ActivityIndicator size="large" style={styles.loadingIndicator} color="#fff" animating={this.state.loading}/>

For this we'll want to use the default animating prop and set it to the loading state of the class, so that when the state is updated the ActivityIndicator will be shown/hidden accordingly.

So overall, having added our ActivityIndicator and our own Capture Button component, the render function of your Camera.js component should look like this

render() {
    return (
        <RNCamera ref={ref => {this.camera = ref;}} style={styles.preview}>
            <ActivityIndicator size="large" style={styles.loadingIndicator} color="#fff" animating={this.state.loading}/>
            <CaptureButton buttonDisabled={this.state.loading} onClick={this.takePicture.bind(this)}/>
        </RNCamera>
    );
}

We'll also add some styling via the StyleSheet module to position the camera's preview and the loading indicator. We'll use the Dimensions import to dynamically make the camera preview take up the entire width and height of the phone screen.

const styles = StyleSheet.create({
    preview: {
        flex: 1,
        justifyContent: 'flex-end',
        alignItems: 'center',
        height: Dimensions.get('window').height,
        width: Dimensions.get('window').width,
    },
    loadingIndicator: {
        flex: 1,
        alignItems: 'center',
        justifyContent: 'center',
    }
});


So now you should have the UI all sorted, and we want to add the functionality to take the picture. First we want to wire up the click event for the CaptureButton.js component we created. Most of this code has been taken from the React Native Camera component's docs, but I'll summarise it.

This needs to be an async function

takePicture = async function(){

}
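
All of the capture logic will live inside a check that the camera ref has actually been set (it's assigned by the ref callback we gave to RNCamera), so nothing runs if the camera isn't ready:

takePicture = async function(){

    if (this.camera) {
        // All of the capture and identification logic goes in here
    }
}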

Then, after checking the camera has been initialised, we want to pause the camera's preview so it freezes on the frame we're about to capture.

// Pause the camera's preview
this.camera.pausePreview();

Then after this we can simply update the state of the Camera to show that we're identifying the image.

// Update the state to indicate loading
this.setState((previousState, props) => ({
    loading: true
}));

We then want to actually take the picture, and get the Base64 representation of it.

//Set the options for the camera
const options = {
    base64: true
};

// Get the base64 version of the image
const data = await this.camera.takePictureAsync(options)

Then we'll call a new function (which we'll create shortly) that takes the Base64 representation of the image and passes it to the Clarifai API.

this.identifyImage(data.base64);

Again, using the Clarifai docs, we can initialise Clarifai with our API key and pass the Base64 to its Predict API. Then we'll pass the part of the JSON response that contains the top rated image tag to a new function.

identifyImage(imageData){

    // Initialise the Clarifai api
    const Clarifai = require('clarifai');

    const app = new Clarifai.App({
        apiKey: 'YOUR KEY HERE'
    });

    // Identify the image
    app.models.predict(Clarifai.GENERAL_MODEL, {base64: imageData})
        .then((response) => this.displayAnswer(response.outputs[0].data.concepts[0].name))
        .catch((err) => alert(err));
}
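
To make sense of that response.outputs[0].data.concepts[0].name chain: the Predict API returns a list of concepts ordered by confidence, so the first concept is the best match. Roughly speaking the response looks something like this (trimmed, with illustrative values rather than output from a real call):

{
    "outputs": [{
        "data": {
            "concepts": [
                { "name": "dog", "value": 0.98 },
                { "name": "pet", "value": 0.97 }
            ]
        }
    }]
}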

In the displayAnswer function we'll want to update the state of the application. This will set the alert message in state, dismiss the Activity Indicator, and re-enable the button.

// Dismiss the activity indicator
this.setState((prevState, props) => ({
    identifiedAs: identifiedImage,
    loading:false
}));

Now that we've got the answer we'll just show it on an alert to the user, using React Native's Alert module (https://facebook.github.io/react-native/docs/alert)

Alert.alert(this.state.identifiedAs,'',{ cancelable: false });

Then we'll resume the camera's preview, so we can take a new picture.

// Resume the camera's preview
this.camera.resumePreview();

Overall, your displayAnswer() function should look like this

displayAnswer(identifiedImage){

    // Dismiss the activity indicator
    this.setState((prevState, props) => ({
        identifiedAs: identifiedImage,
        loading:false
    }));

    // Show an alert with the answer on
    Alert.alert(this.state.identifiedAs,'',{ cancelable: false });

    // Resume the preview
    this.camera.resumePreview();
}

And your whole Camera.js component should look like this

import React from 'react';
import { Dimensions, Alert, StyleSheet, ActivityIndicator } from 'react-native';
import { RNCamera } from 'react-native-camera';
import CaptureButton from './CaptureButton.js'

export default class Camera extends React.Component {

    constructor(props){
        super(props);
        this.state = { 
            identifiedAs: '',
            loading: false
        }
    }

    takePicture = async function(){

        if (this.camera) {

            // Pause the camera's preview
            this.camera.pausePreview();

            // Set the activity indicator
            this.setState((previousState, props) => ({
                loading: true
            }));

            // Set options
            const options = {
                base64: true
            };

            // Get the base64 version of the image
            const data = await this.camera.takePictureAsync(options)

            // Get the identified image
            this.identifyImage(data.base64);
        }
    }

    identifyImage(imageData){

        // Initialise Clarifai api
        const Clarifai = require('clarifai');

        const app = new Clarifai.App({
            apiKey: 'YOUR KEY HERE'
        });

        // Identify the image
        app.models.predict(Clarifai.GENERAL_MODEL, {base64: imageData})
            .then((response) => this.displayAnswer(response.outputs[0].data.concepts[0].name))
            .catch((err) => alert(err));
    }

    displayAnswer(identifiedImage){

        // Dismiss the activity indicator
        this.setState((prevState, props) => ({
            identifiedAs: identifiedImage,
            loading: false
        }));

        // Show an alert with the answer on
        Alert.alert(
            this.state.identifiedAs,
            '',
            { cancelable: false }
        );

        // Resume the preview
        this.camera.resumePreview();
    }

    render() {
        return (
            <RNCamera ref={ref => {this.camera = ref;}} style={styles.preview}>
                <ActivityIndicator size="large" style={styles.loadingIndicator} color="#fff" animating={this.state.loading}/>
                <CaptureButton buttonDisabled={this.state.loading} onClick={this.takePicture.bind(this)}/>
            </RNCamera>
        );
    }
}

const styles = StyleSheet.create({
    preview: {
        flex: 1,
        justifyContent: 'flex-end',
        alignItems: 'center',
        height: Dimensions.get('window').height,
        width: Dimensions.get('window').width,
    },
    loadingIndicator: {
        flex: 1,
        alignItems: 'center',
        justifyContent: 'center',
    }
});


Now heading back to the top level component, App.js, import your fancy new Camera Component that you've just created.

import Camera from './components/Camera.js';

Then add it inside the React Native View component.

So your App.js should look like this

import React  from 'react';
import { StyleSheet, View } from 'react-native';
import Camera from './components/Camera.js';

export default class App extends React.Component {

    constructor(props){
        super(props);
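        // Workaround: the Clarifai client expects process.nextTick, which isn't available in React Native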
        process.nextTick = setImmediate;
    }

    render() {
        return (
            <View style={styles.container}>
                <Camera />
            </View>
        );
    }
}

const styles = StyleSheet.create({
    container: {
        flex: 1,
        backgroundColor: '#fff',
        alignItems: 'center',
        justifyContent: 'center',   
    }
});

So overall our simple application has been split up into 3 components: the app itself, our own camera component, and our button component. Then on top of this we're using the React Native Camera component.

We've also utilised a number of standard React Native components, such as Alerts, Activity Indicators, StyleSheets, TouchableHighlight and Buttons.

So simply connect up your phone and open the project in Xcode to get it onto your device and give it a test.
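
If you'd rather stay on the command line, the React Native CLI can also target a connected device directly (assuming code signing is already set up in Xcode, and swapping in your own device's name):

react-native run-ios --device "Andrew's iPhone"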

The source code for this app is available on GitHub at https://github.com/andrewsmith1996/Image-Recogition-React-Native, and is also showcased on my portfolio at https://andrewsmithdeveloper.com

I hope you enjoyed this post, and if you have any questions at all or feedback on my post, code or anything then let me know!

Top comments (13)

beobungbu

Long post, but it doesn't work as expected. The provided source code still contains a bug, and after fixing that bug you'll find the application can't use the camera because the author forgot to ask for permission.

Hwa Lim

After fixing the bug, I still don't get any prediction answer output. Is it an apiKey error? Could you help me with this?

DDAmine

It's not working for me, could you help me?

aditya1501

Is it working now? I am building a similar app. I want to use my own deep learning model to predict, but I am not able to store the image and pass it to the model.

beobungbu

Okay, give me your personal email or Skype/Telegram/Facebook to get in contact.

kris

This tutorial exemplifies the use of React Native Camera for image recognition using a machine learning model from Clarifai. It gives proper guidance on the setup and use of React Native Camera to capture an image, and the use of captured images for image recognition is amazing. This tutorial shows that image recognition with an AI model is possible, and moreover simple, in React Native just by using the camera package and the API from Clarifai.

Riddhi More

Hello! This is awesome. But is there any way I can use machine learning on real-time video? I don't want to capture images, but instead capture a video, predict it frame by frame at the same time, and show the prediction in a text field on screen.
Is there any way I can use an async function with my model.predict(), or at least automate the process of capturing images and minimise the delay?

Razorholt

Does it work with Expo?

curti.s_

Where is the camera variable declared initially?
However, nice piece of content. Greatly appreciated.

curti.s_

Never mind. I figured it was from the ref. Thanks.

priyanga2496

Hi, it's not working for me. After capturing the image, Clarifai isn't returning any output response.

samhkwest

Hello Andrew, what great work!
I am new to mobile development.
What Android emulator should I use to run your program in a Windows environment?