rpostulart for AWS Community Builders

The guide to add AI to your React Native app with AWS Amplify

AWS Amplify is evolving fast. It is a framework you can use to provision cloud back-end infrastructure in minutes, with the same ease as ordering a pizza. In fact, AWS Amplify will deliver your infrastructure even faster than your pizza will be delivered.

It offers a wide range of services such as authentication, storage, APIs, DataStore and much more. I have already written a few blogs about AWS Amplify, like GEO search and push notifications, but now it's time for AWS Amplify and AI/ML.

On the AI side, it is possible to build, train and deploy your own models, for example with SageMaker (which is not integrated with AWS Amplify). AWS Amplify also offers a number of predefined capabilities you can use out of the box, such as translating text from one language to another, converting text to speech, recognising text in an image, and more.

Use case: text recognition from image

In this guide I will show how you can build a React Native app in which authenticated users take a photo with the camera and match the text in that photo against text stored in a database. You can connect a phone number or email address to the text string in the database and, based on the match, trigger additional actions, such as sending an email.

Getting Started

I will use NPM, but of course you can also use Yarn.

Set up React Native

First, we'll create the React Native application we'll be working with.

$ npx expo init ScanApp

> Choose a template: **Tabs**

$ cd ScanApp

$ expo install expo-camera

$ npm install aws-amplify aws-amplify-react-native


Set up AWS Amplify

We first need to have the AWS Amplify CLI installed. The Amplify CLI is a command line tool that allows you to create & deploy various AWS services.

To install the CLI, we'll run the following command:

$ npm install -g @aws-amplify/cli


Next, we'll configure the CLI with a user from our AWS account:

$ amplify configure


For a video walkthrough of the process of configuring the CLI, see the AWS Amplify documentation.

Now we can initialise a new Amplify project from within the root of our React Native application:

$ amplify init

Here we'll be guided through a series of steps:

  • Enter a name for the project: amplifyAIapp (or your preferred project name)
  • Enter a name for the environment: dev (use this name, because we will reference it later)
  • Choose your default editor: Visual Studio Code (or your text editor)
  • Choose the type of app that you're building: javascript
  • What javascript framework are you using: react-native
  • Source Directory Path: /
  • Distribution Directory Path: build
  • Build Command: npm run-script build
  • Start Command: npm run-script start
  • Do you want to use an AWS profile? Y
  • Please choose the profile you want to use: YOUR_USER_PROFILE
Now our Amplify project has been created and we can move on to the next steps.

Add GraphQL to your project

Your React Native App is up and running and AWS Amplify is configured. Amplify comes with different services which you can use to enrich your app. Let’s add an API.

amplify add api

These steps will take place:

  • Select GraphQL
  • Enter a name for the API: aiAPI (or your preferred API name)
  • Select an authorisation type for the API: Amazon Cognito User Pool (because this app is used by authenticated users only, but you can choose other options)
  • Do you want to use the default authentication and security configuration? Default configuration
  • How do you want users to be able to sign in? Username (with this, the AWS Amplify Auth module will also be enabled)
  • Do you want to configure advanced settings? No, I am done.
  • Do you have an annotated GraphQL schema? n
  • Do you want a guided schema creation? n
  • Provide a custom type name: user

Your API and your schema definition have now been created. You can find them in your project directory:

amplify > backend > api > name of your api

Open schema.graphql and replace the code with this code:

type text @model {
    id: ID!
    text: String!
    email: String!
}


The @model directive will create a DynamoDB table for you. More directives are available; for the full set, have a look at the AWS Amplify docs.

Let's now push the configuration to the cloud so that your database is created as well:

amplify push
  • Are you sure you want to continue? Y
  • Do you want to generate code for your newly created GraphQL API? Y
  • Choose the code generation language target: javascript
  • Enter the file name pattern of graphql queries, mutations and subscriptions: enter (default)
  • Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions? Y
  • Enter maximum statement depth [increase from default if your schema is deeply nested]: enter (default 2)

Add predictions to your app

We will first add the text recognition capability to the app by following these steps:

amplify add predictions
  • Select Identify
  • Select Identify Text
  • Give your capability a name and press enter
  • Would you also like to identify documents? N
  • Who should have access? Auth users only
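Once this capability is in place, `Predictions.identify` resolves to an object whose `text` property holds the recognised text. As a minimal sketch of how the app will read it later (the sample object below is illustrative, not a real API response):

```javascript
// Pull the full recognised text out of an identify-text result.
// Predictions.identify({ text: ... }) resolves to { text: { fullText, lines, words, ... } }.
function extractFullText(identifyResult) {
  const { text: { fullText } } = identifyResult;
  return fullText;
}

// Illustrative sample, shaped like the real result:
const sampleResult = { text: { fullText: "HELLO WORLD", lines: ["HELLO WORLD"] } };
console.log(extractFullText(sampleResult)); // "HELLO WORLD"
```

We will use exactly this destructuring pattern in the app code further down.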

Add S3 storage to your app

We will save the photo in an S3 bucket, so let's create that bucket first.

amplify add storage
  • Select Content (Images, audio, video, etc.)
  • Please provide a friendly name for your resource that will be used to label this category in the project:
  • Please provide bucket name:
  • Who should have access: Auth users only
  • Select create/update, read and delete
  • Do you want to add a Lambda Trigger for your S3 Bucket? N

Add a function to your project

By adding a function we are going to create a Lambda. The goal of this Lambda is to receive the text from a photo and match it with the data from the database.

amplify add function

Follow these steps:

  • Provide a friendly name for your resource to be used as a label for this category in the project: matchFunction
  • Provide the AWS Lambda function name:
  • Choose the function template that you want to use: Hello world function
  • Do you want to access other resources created in this project from your Lambda function? Y
  • Select API
  • Select the operations you want to permit for scanapp? Read
  • Do you want to edit the local lambda function now? N

Your function has now been created and you can find it in your project directory:

amplify > backend > function > name of your function

Go to the src directory of the matchFunction and install this package:


$ npm install aws-sdk


Open the file matchFunction-cloudformation-template.json and add this code to the Statement array in the lambdaexecutionpolicy section. This code will give your Lambda function access to your DynamoDB resources:

{
  "Effect": "Allow",
  "Action": [
    "dynamodb:Get*",
    "dynamodb:Query",
    "dynamodb:Scan"
  ],
  "Resource": {
    "Fn::Sub": [
      "arn:aws:dynamodb:${region}:${account}:table/*",
      {
        "region": {
          "Ref": "AWS::Region"
        },
        "account": {
          "Ref": "AWS::AccountId"
        }
      }
    ]
  }
}
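The `Fn::Sub` intrinsic substitutes the `${region}` and `${account}` placeholders into the ARN at deploy time. As a rough illustration of what that substitution produces (plain JavaScript with made-up values; CloudFormation does this for you, not your code):

```javascript
// Illustration only: emulate what CloudFormation's Fn::Sub does with the
// ARN template above. At deploy time, AWS::Region and AWS::AccountId are
// resolved by CloudFormation itself.
function fnSub(template, vars) {
  return template.replace(/\$\{(\w+)\}/g, (_, name) => vars[name]);
}

const arn = fnSub("arn:aws:dynamodb:${region}:${account}:table/*", {
  region: "eu-west-1",
  account: "123456789012"
});
console.log(arn); // arn:aws:dynamodb:eu-west-1:123456789012:table/*
```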

Open the src/app.js file and paste in this code:

/* Amplify Params - DO NOT EDIT
You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var apiScanappGraphQLAPIIdOutput = process.env.API_SCANAPP_GRAPHQLAPIIDOUTPUT
var apiScanappGraphQLAPIEndpointOutput = process.env.API_SCANAPP_GRAPHQLAPIENDPOINTOUTPUT

Amplify Params - DO NOT EDIT */

const AWS = require("aws-sdk");
AWS.config.region = process.env.REGION;
const dynamodb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event, context, callback) => {
  try {
    let renderItems = await getItems();

    const text = event.arguments.input.text.toLowerCase();
    let foundItem = "";
    for (let i = 0, iMax = renderItems.length; i < iMax; i++) {
      if (text.includes(renderItems[i].text.toLowerCase())) {
        foundItem = renderItems[i];
        break;
      }
    }

    const response = {
      items: JSON.stringify(foundItem)
    };

    callback(null, response);
  } catch (error) {
    callback(error);
  }
};

function getItems() {
  let tableName = "text";
  if (process.env.ENV && process.env.ENV !== "NONE") {
    tableName =
      tableName +
      "-" +
      process.env.API_SCANAPP_GRAPHQLAPIIDOUTPUT +
      "-" +
      process.env.ENV;
  }

  let scanParams = {
    TableName: tableName
  };

  return new Promise((resolve, reject) => {
    dynamodb.scan(scanParams, (err, data) => {
      if (err) {
        console.log("err", err);
        reject(err);
      } else {
        console.log("Query succeeded.");
        resolve(data.Items);
      }
    });
  });
}

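Stripped of the AWS plumbing, the core of the handler is a case-insensitive substring match. A minimal, self-contained sketch of that logic (for clarity only, not part of the deployed function):

```javascript
// Case-insensitive substring match: return the first database item whose
// text appears anywhere in the scanned text, or "" when nothing matches
// (mirroring the handler's foundItem default above).
function findMatch(scannedText, items) {
  const text = scannedText.toLowerCase();
  for (const item of items) {
    if (text.includes(item.text.toLowerCase())) {
      return item;
    }
  }
  return "";
}

// Illustrative data, shaped like the rows in the text table:
const items = [
  { text: "Coffee", email: "coffee@example.com" },
  { text: "Tea", email: "tea@example.com" }
];
console.log(findMatch("FRESH COFFEE SOLD HERE", items)); // the Coffee item
```

Note that this returns the first match only; if several database entries appear in the photo, the order of the scan results decides which one wins.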

We are going to push everything to the cloud one more time and then we can build our app, but before we do, we also have to update our API schema. Go to:

amplify > backend > api > name of your api

Open schema.graphql and replace the code with this code:

type text @model {
  id: ID!
  text: String!
  email: String!
}

type Mutation {
  match(input: matchInput): matchResult @function(name: "matchFunction-${env}")
}

type matchResult {
  items: String
}

input matchInput {
  text: String!
}


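When the app later calls this mutation, the variables payload has to match the `matchInput` shape, i.e. `{ input: { text: String! } }`. A small sketch of a guard you could put in front of the API call (the helper name is hypothetical, not part of the generated code):

```javascript
// Build the variables object for the match mutation. The text field is
// non-nullable in the schema (text: String!), so reject empty input early
// instead of letting the GraphQL layer return an error.
function buildMatchVariables(fullText) {
  if (typeof fullText !== "string" || fullText.length === 0) {
    throw new Error("matchInput.text is required");
  }
  return { input: { text: fullText } };
}

console.log(JSON.stringify(buildMatchVariables("FRESH COFFEE")));
```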

Now push it to AWS:

amplify push

Add some data via Cognito and AppSync

In the console, go to AWS Cognito and click Manage User Pools > your user pool > Users and groups > Create user. Fill in the form and leave all the checkboxes set to verified. Click on the new user and make a note of the sub value (something like b14cc22-c73f-4775-afd7-b54f222q4758). Then go to App clients in the menu and make a note of the App client ID of the clientWeb client (the top one). Use these values in the next step.

Let's add some data which you can use in your app:

  • Go to the AppSync service in the console
  • Open your project
  • Click on Queries
  • Log in with a Cognito user by clicking the 'Login via Cognito User Pools' button (use the App client ID and the user you wrote down in the previous step)
  • Add the following code and run it (replace <TEXT> with text that appears in the picture you are going to photograph, and <EMAILADDRESS> with your email address):
mutation createText {
  createText( input: {
    text: "<TEXT>",  
    email: "<EMAILADDRESS>"
  }
  ){
    id
    text
    email
  }
}
  • Run this code a few times with some other values so you can verify that it only matches your text

Let's build the React Native App

I made an app where a user logs in and is then immediately taken to the camera. With this camera you can take a photo of the picture. This photo is uploaded to S3, the text is extracted from the photo and sent to the Lambda function, where the match takes place.

As you may know from my previous blogs, I don't spend much time on the UX design of the app; I just want to show how easily you can set it up. The UX is up to you :)

You can download the project from: https://github.com/rpostulart/Aiapp

-- or --

Go to the root of your project and open App.js and replace it with this code:

import * as React from "react";
import { Platform, StatusBar, StyleSheet, View } from "react-native";
import { SplashScreen } from "expo";
import * as Font from "expo-font";
import { Ionicons } from "@expo/vector-icons";
import { NavigationContainer } from "@react-navigation/native";
import { createStackNavigator } from "@react-navigation/stack";

import AppNavigator from "./navigation/AppNavigator";
import useLinking from "./navigation/useLinking";

const Stack = createStackNavigator();

export default function App(props) {
  const [isLoadingComplete, setLoadingComplete] = React.useState(false);
  const [initialNavigationState, setInitialNavigationState] = React.useState();
  const containerRef = React.useRef();
  const { getInitialState } = useLinking(containerRef);

  // Load any resources or data that we need prior to rendering the app
  React.useEffect(() => {
    async function loadResourcesAndDataAsync() {
      try {
        SplashScreen.preventAutoHide();

        // Load our initial navigation state
        setInitialNavigationState(await getInitialState());

        // Load fonts
        await Font.loadAsync({
          ...Ionicons.font,
          "space-mono": require("./assets/fonts/SpaceMono-Regular.ttf")
        });
      } catch (e) {
        // We might want to provide this error information to an error reporting service
        console.warn(e);
      } finally {
        setLoadingComplete(true);
        SplashScreen.hide();
      }
    }

    loadResourcesAndDataAsync();
  }, []);

  if (!isLoadingComplete && !props.skipLoadingScreen) {
    return null;
  } else {
    return (
      <View style={styles.container}>
        {Platform.OS === "ios" && <StatusBar barStyle="default" />}
        <NavigationContainer
          ref={containerRef}
          initialState={initialNavigationState}
        >
          <Stack.Navigator>
            <Stack.Screen name="Root" component={AppNavigator} />
          </Stack.Navigator>
        </NavigationContainer>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#fff"
  }
});


Go to the screens directory and add these files

HomeScreen.js

import React from "react";
import { Alert, StyleSheet, Text, View, TouchableOpacity } from "react-native";
import Constants from "expo-constants";
import { Camera } from "expo-camera";
import * as Permissions from "expo-permissions";

import Amplify, {
  API,
  Storage,
  Predictions,
  graphqlOperation
} from "aws-amplify";

import { AmazonAIPredictionsProvider } from "@aws-amplify/predictions";
import * as mutations from "../src/graphql/mutations";
import awsconfig from "../aws-exports";

Amplify.configure(awsconfig);
Amplify.addPluggable(new AmazonAIPredictionsProvider());

import { Ionicons } from "@expo/vector-icons";

export default class CameraScreen extends React.Component {
  state = {
    flash: "off",
    zoom: 0,
    autoFocus: "on",
    type: "back",
    whiteBalance: "auto",
    ratio: "16:9",
    newPhotos: false,
    permissionsGranted: false,
    pictureSize: "1280x720",
    pictureSizes: ["1280x720"],
    pictureSizeId: 0
  };

  async componentDidMount() {
    const { status } = await Permissions.askAsync(Permissions.CAMERA);
    this.setState({ permissionsGranted: status === "granted" });
  }

  toggleFocus = () =>
    this.setState({ autoFocus: this.state.autoFocus === "on" ? "off" : "on" });

  takePicture = () => {
    if (this.camera) {
      this.camera.takePictureAsync({ onPictureSaved: this.onPictureSaved });
    }
  };

  uploadToStorage = async pathToImageFile => {
    try {
      const response = await fetch(pathToImageFile);

      const blob = await response.blob();

      const s3photo = await Storage.put("file-" + Date.now() + ".jpeg", blob, {
        contentType: "image/jpeg"
      });

      await Predictions.identify({
        text: {
          source: {
            key: s3photo.key,
            level: "public" //optional, default is the configured on Storage category
          },
          format: "PLAIN" // Available options "PLAIN", "FORM", "TABLE", "ALL"
        }
      })
        .then(async ({ text: { fullText } }) => {
          const input = {
            text: fullText
          };

          await API.graphql(graphqlOperation(mutations.match, { input: input }))
            .then(result => {
              const item = JSON.parse(result.data.match.items);

              if (typeof item.text === "undefined") {
                Alert.alert(`There was no match!`);
              } else {
                Alert.alert(
                  `Whoohoo! There was a match with ${item.text}, the email has been sent!`
                );
              }
            })
            .catch(err => console.log(err));
        })
        .catch(err => console.log(err));

      //
    } catch (err) {
      console.log(err);
    }
  };

  handleMountError = ({ message }) => console.error(message);

  onPictureSaved = async photo => {
    this.uploadToStorage(photo.uri);
  };

  renderNoPermissions = () => (
    <View style={styles.noPermissions}>
      <Text style={{ color: "white" }}>
        Camera permissions not granted - cannot open camera preview.
      </Text>
    </View>
  );

  renderTopBar = () => (
    <View style={styles.topBar}>
      <TouchableOpacity style={styles.toggleButton} onPress={this.toggleFocus}>
        <Text
          style={[
            styles.autoFocusLabel,
            { color: this.state.autoFocus === "on" ? "white" : "#6b6b6b" }
          ]}
        >
          FOCUS
        </Text>
      </TouchableOpacity>
    </View>
  );

  renderBottomBar = () => (
    <View style={styles.bottomBar}>
      <View style={{ flex: 0.4 }}>
        <TouchableOpacity
          onPress={this.takePicture}
          style={{ alignSelf: "center" }}
        >
          <Ionicons name="ios-radio-button-on" size={70} color="white" />
        </TouchableOpacity>
      </View>
    </View>
  );

  renderCamera = () => (
    <View style={{ flex: 1 }}>
      <Camera
        ref={ref => {
          this.camera = ref;
        }}
        style={styles.camera}
        onCameraReady={this.collectPictureSizes}
        type={this.state.type}
        autoFocus={this.state.autoFocus}
        zoom={this.state.zoom}
        whiteBalance={this.state.whiteBalance}
        ratio={this.state.ratio}
        pictureSize={this.state.pictureSize}
        onMountError={this.handleMountError}
      >
        {this.renderTopBar()}
        {this.renderBottomBar()}
      </Camera>
    </View>
  );

  render() {
    const cameraScreenContent = this.state.permissionsGranted
      ? this.renderCamera()
      : this.renderNoPermissions();
    return <View style={styles.container}>{cameraScreenContent}</View>;
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#000"
  },
  camera: {
    flex: 1,
    justifyContent: "space-between"
  },
  topBar: {
    flex: 0.2,
    backgroundColor: "transparent",
    flexDirection: "row",
    justifyContent: "space-around",
    paddingTop: Constants.statusBarHeight / 2
  },
  bottomBar: {
    backgroundColor: "transparent",
    alignSelf: "flex-end",
    justifyContent: "space-between",
    flex: 0.12,
    flexDirection: "row"
  },
  noPermissions: {
    flex: 1,
    alignItems: "center",
    justifyContent: "center",
    padding: 10
  },

  toggleButton: {
    flex: 0.25,
    height: 40,
    marginHorizontal: 2,
    marginBottom: 10,
    marginTop: 20,
    padding: 5,
    alignItems: "center",
    justifyContent: "center"
  },
  autoFocusLabel: {
    fontSize: 20,
    fontWeight: "bold"
  }
});



The uploadToStorage function uploads your photo to an S3 bucket; Predictions then identifies the text, sends it to the Lambda function and receives the result of the match with DynamoDB.
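Since the Lambda returns the matched item as a JSON string (`items: JSON.stringify(foundItem)`), the client has to parse it and check whether a match was found. A minimal sketch of that client-side handling, isolated from the Amplify calls:

```javascript
// Parse the Lambda response and decide which alert to show. A no-match
// response is JSON.stringify("") (an empty string), on which item.text
// is undefined; a match response is a stringified item object.
function describeMatch(responseItems) {
  const item = JSON.parse(responseItems);
  if (typeof item.text === "undefined") {
    return "There was no match!";
  }
  return `There was a match with ${item.text}`;
}

console.log(describeMatch(JSON.stringify({ text: "Coffee", email: "a@b.c" })));
console.log(describeMatch(JSON.stringify("")));
```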

Go to the navigation directory and add or update these files:

AppNavigator.js

import * as React from "react";
import { createStackNavigator } from "@react-navigation/stack";
import { Auth } from "aws-amplify";
import HomeScreen from "../screens/HomeScreen";
import LoginScreen from "../screens/Login";

const Stack = createStackNavigator();

export default class Navigator extends React.Component {
  //export default async function AppNavigator({ navigation, route }) {
  // Set the header title on the parent stack navigator depending on the
  // currently active tab. Learn more in the documentation:
  // https://reactnavigation.org/docs/en/screen-options-resolution.html
  state = {
    user: "not authenticated"
  };

  async componentDidMount() {
    await Auth.currentAuthenticatedUser({
      bypassCache: true // Optional, By default is false. If set to true, this call will send a request to Cognito to get the latest user data
    })
      .then(async user => {
        this.setState({ user: user });
      })
      .catch(err => {
        // Is NOT logged in
        console.log(err);
      });
  }

  render() {
    const user = this.state.user;
    return (
      <Stack.Navigator>
        {user === "not authenticated" ? (
          // No token found, user isn't signed in
          <Stack.Screen
            name="SignIn"
            component={LoginScreen}
            options={{
              title: "Sign in"
              // When logging out, a pop animation feels intuitive
            }}
          />
        ) : (
          // User is signed in
          <Stack.Screen name="Home" component={HomeScreen} />
        )}
      </Stack.Navigator>
    );
  }
}



BottomTabNavigator.js

import * as React from "react";
import { createBottomTabNavigator } from "@react-navigation/bottom-tabs";
import TabBarIcon from "../components/TabBarIcon";
import HomeScreen from "../screens/HomeScreen";

const BottomTab = createBottomTabNavigator();
const INITIAL_ROUTE_NAME = "Home";

export default function BottomTabNavigator({ navigation, route }) {
  // Set the header title on the parent stack navigator depending on the
  // currently active tab. Learn more in the documentation:
  // https://reactnavigation.org/docs/en/screen-options-resolution.html

  return (
    <BottomTab.Navigator initialRouteName={INITIAL_ROUTE_NAME}>
      <BottomTab.Screen
        name="Home"
        component={HomeScreen}
        options={{
          title: "Photo",
          tabBarIcon: ({ focused }) => (
            <TabBarIcon focused={focused} name="md-code-working" />
          )
        }}
      />
    </BottomTab.Navigator>
  );
}



metro.config.js

To make sure Metro does not conflict with the Amplify backend files, you need to add this file to the root of your project.

module.exports = {
  resolver: {
    blacklistRE: /#current-cloud-backend\/.*/
  },
  transformer: {
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false
      }
    })
  }
};

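The `blacklistRE` pattern tells Metro to ignore anything under Amplify's `#current-cloud-backend` directory (a local copy of the deployed backend that should never be bundled into the app). You can quickly check what it excludes:

```javascript
// Same pattern as in metro.config.js above: paths containing a
// #current-cloud-backend directory are excluded from the bundle.
const blacklistRE = /#current-cloud-backend\/.*/;

console.log(blacklistRE.test("amplify/#current-cloud-backend/api/schema.graphql")); // true
console.log(blacklistRE.test("amplify/backend/api/schema.graphql")); // false
```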

Your app is ready and you can start it from your root project with:

$ expo start
> press "i" (this will load the iOS simulator)

Sign in with the user that you created via AWS Cognito, take a picture, wait a few seconds .... et voilà .... there is a match with your text. Now you can use the email address from the returned object to send an additional email; we are not diving into that functionality now.

Here you can see a video of the app in action:
https://twitter.com/i/status/1230221875248279553

Conclusion

AI is the next capability to enrich your app so you can bring more value to your customers. It took me literally two evenings to set up this app and back end, which is of course really great. Imagine how fast you can deliver the right functionality to your customers. Don't wait, start now! Also start exploring the other Predictions capabilities; the setup is quite similar.

I hope you liked this guide and I am looking forward to your feedback in the comments. Happy coding!

See GitHub for the actual code: https://github.com/rpostulart/Aiapp

Do you want to keep up to date with new blogs, or do you have questions? Follow me on Twitter.


The next blog I am going to write soon:
The guide to add payment functionalities to your app with React Native and AWS Amplify

Top comments (7)

msmittalas

Nice use of AWS Cognito Service. I have got an idea from this. How about reading news of a particular company from RSS feeds and giving it to a prediction service to predict whether the news is positive or negative? Based on the output we can get suggestions to buy or sell shares of that company. This is just an idea which I got after reading this article.

Good article

0xAirdropfarmer

Wow, this is the first time I have come across an article explaining the integration of AI/ML using AWS Amplify in a React Native app. The explanation of each step is proper and detailed, and easy to understand. The flow of the implementation is easy to grasp. However, I would prefer some screenshots to get a better understanding of how each process leads to a successful end result. This would help give a proper implementation example as well.

rpostulart

Hi Kris, thanks for that tip. I will take it into account next time! I am glad you liked the guide.

dwyanelin

Why do I always get the error: Resource is not in the state stackUpdateComplete?

rpostulart

Are you running the latest version, and are you building this from scratch or adding it to an existing project?

dwyanelin

I changed to us-east-2 in the AWS console and the push succeeded. Amazing... I didn't believe this approach until I tried it.
I learned it from here: github.com/aws-amplify/amplify-con...

rpostulart

Cool let me know if you need additional help