Edgar Moran

Image recognition using MuleSoft and Salesforce

MuleSoft and Salesforce are the right combination of technologies to deliver robust, complex projects in a short time. I would like to demonstrate how we can use both of them to recognize pictures taken on a mobile device, enriching each one with more information and interesting data in a close-to-real scenario.

So, how is this done? Well, here are the components I'm using for this project (I will go deep into each one):

  1. Salesforce developer Org
  2. Anypoint Platform (Sandbox) account
  3. MuleSoft mule-aws-recognition-system-api
  4. MuleSoft mule-aws-recognition-process-api

Salesforce developer Org

I got a developer account for Salesforce from their developer site (developerforce.com). In this case, Salesforce gives me:

  • Custom objects (tables)
  • Custom fields on each of those objects
  • A way to expose a mobile application (previously known as Salesforce1), available to install on iOS or Android devices
  • Visualforce pages, which let us customize what we want to show in a mobile app or browser
  • Apex classes: custom Apex code to handle data from a page, or to expose REST services from a custom Apex (Java-style) class definition

So here we have the design:

  • Standard objects (ContentVersion and ContentDocumentLink): store the actual binary file in Salesforce
  • Custom object (Hackathon Image): holds a record that links to the photo taken
  • Custom object (Image Label): stores the labels describing the image and how confident AWS is about each label

Here comes the fun part:

  • Visualforce page, showing a UI to take the picture:

(Screenshot: the Visualforce page for taking the picture)

  • Apex controller: gets all the information from the picture and creates the ContentVersion and ContentDocumentLink records related to the Hackathon Image.

  • Apex REST controller: exposes the mentioned endpoint, which triggers a push notification on the mobile device.

Where can you get this code? https://github.com/emoran/sfdc-mulesoft-hackathon-2020.git

Now a basic flow:

(Diagram: the basic flow)

MuleSoft mule-aws-recognition-system-api

Initially this system API was meant for AWS only, but given the time and resources, I also included here another piece I needed to complete this exercise.

As I mentioned, this system API processes a Base64 image and sends it to the Amazon Rekognition API; the result of this call is the set of labels Rekognition generated for the image.

The same application also contains the logic to pull a few tweets, using a parameter based on hashtags.

```raml
#%RAML 1.0
title: mule-aws-recognition-system-api

/image:
  post:
    body:
      application/json:

    responses:
      200:
        body:
          application/json:

/twitter:
  /tweets:
    get:
      queryParameters:
        q:
          description: "Parameters to filter by hashtag"
      responses:
        200:
          body:
            application/json:
```
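To give an idea of the contract, here is a hedged sketch of calling the image resource as a client. The base path /api, the port 8081, and the JSON field name "image" are all my assumptions, since the RAML above leaves the body schema open:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class SystemApiClientSketch {
    public static void main(String[] args) throws Exception {
        // Read a local picture and Base64-encode it (the path is a placeholder)
        byte[] raw = Files.readAllBytes(Paths.get("picture.jpg"));
        String base64Image = Base64.getEncoder().encodeToString(raw);

        // Assumed local endpoint of the system API (8081 is a typical Mule HTTP port)
        URL url = new URL("http://localhost:8081/api/image");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // The "image" field name is an assumption; the RAML does not pin down a schema
        String body = "{\"image\":\"" + base64Image + "\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode()); // labels come back as JSON
    }
}
```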

To process the image, I basically used the AWS Java SDK to call the API. My flow looks like this:

(Screenshot: the Mule flow that sends the image to Amazon Rekognition)
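Just for reference, here is a minimal sketch of what the Java logic behind this flow could look like. This is my own approximation, not the project's actual code: the class name, the sample Base64 string, and the maxLabels/minConfidence values are placeholder assumptions; only the AWS SDK for Java (v1) DetectLabels calls are the real API.

```java
import java.nio.ByteBuffer;
import java.util.Base64;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DetectLabelsRequest;
import com.amazonaws.services.rekognition.model.DetectLabelsResult;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.Label;

public class RekognitionSketch {
    public static void main(String[] args) {
        // Credentials are resolved from the default AWS provider chain
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        // Stand-in for the Base64 payload received on POST /image
        String base64Image = "iVBORw0KGgo=";
        ByteBuffer imageBytes = ByteBuffer.wrap(Base64.getDecoder().decode(base64Image));

        DetectLabelsRequest request = new DetectLabelsRequest()
                .withImage(new Image().withBytes(imageBytes))
                .withMaxLabels(10)        // placeholder limit
                .withMinConfidence(75F);  // placeholder confidence threshold

        // Rekognition returns the labels plus a confidence score for each one
        DetectLabelsResult result = rekognition.detectLabels(request);
        for (Label label : result.getLabels()) {
            System.out.println(label.getName() + ": " + label.getConfidence());
        }
    }
}
```

Those name/confidence pairs are exactly what later becomes the Image Label records in Salesforce.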

On the other hand, for the tweets we have a different endpoint, which receives only a GET request and returns all tweets matching the hashtags provided.

Here is how the flow looks:
(Screenshot: the Twitter flow)

As you can see, this is just a pretty simple HTTP request to the Twitter API. It's not included in the process API, since we are not using a connector to abstract the logic of this request; a rough plain-Java sketch of the equivalent call is shown below.

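In the app this is a Mule HTTP Request operation; the sketch below only illustrates the same call. The TWITTER_BEARER_TOKEN variable and the sample hashtag are placeholder assumptions; the v1.1 standard search endpoint and bearer-token header are Twitter's documented usage.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class TwitterSearchSketch {
    public static void main(String[] args) throws Exception {
        String bearerToken = System.getenv("TWITTER_BEARER_TOKEN"); // app-only auth token
        String hashtag = "#mulesoft"; // would come from the q query parameter

        // Standard search endpoint of the Twitter API v1.1
        URL url = new URL("https://api.twitter.com/1.1/search/tweets.json?q="
                + URLEncoder.encode(hashtag, "UTF-8"));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + bearerToken);

        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON containing the matching tweets
            }
        }
    }
}
```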

You can get the code for the system API here: https://github.com/emoran/mule-aws-recognition-system-api.git

MuleSoft mule-aws-recognition-process-api

At this point, in the process API, we are really doing more things and connecting the dots. I will try to explain step by step what happens.

The process API has this RAML:

```raml
#%RAML 1.0
title: mule-aws-recognition-process-api


/image:
  post:
    body:
      application/json:
    responses:
      200:
        body:
          application/json:
            example:

/sfdc:
  /images:
    get:
      responses:
        200:
          body:
            application/json:
  /contentVersion:
    get:
      queryParameters:
        id:
          description: imageId
          type: string
/tweets:
  get:
    responses:
      200:
        body:
          application/json:


```
  1. After the mobile application saves the picture we took with our device, Salesforce calls the /image endpoint we exposed in MuleSoft, passing three parameters: imageRecordId (the Hackathon Image record), contentVersionId (the ID of the actual file in Salesforce), and contentDocumentLinkId (the document link to the picture).

  2. MuleSoft takes the parameters; then, using the Salesforce connector, we query ContentVersion and download the file (the actual image in Base64). We then call the system API, passing the image, and wait for the set of labels that AWS recognized (see the query sketch after the screenshots).

(Screenshots: the Mule flow querying ContentVersion and calling the system API)
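As an approximation (the exact query lives in the repo), the connector query is something like `SELECT VersionData FROM ContentVersion WHERE Id = '<contentVersionId>'`; VersionData is the field that carries the Base64 content we forward to the system API.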

  3. Once AWS responds, we create the labels in Salesforce (Image Label records) for the uploaded image, and lastly we call the REST service we exposed in Salesforce to notify the person that the image has been processed and now has labels created.

(Screenshot: the flow creating the Image Label records)

It was really interesting to work out how to call that REST service from the connector. With older versions of the connector we could connect, get the session ID, and use the REST endpoint directly; in Mule 4 we are not able to do so, so in this case we use the connector's own capabilities to do it.

(Screenshot: invoking the Apex REST service through the Salesforce connector)
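For contrast, here is roughly what the old pattern looked like as plain HTTP: log in, take the session ID, and POST to the Apex REST mapping. This is a hypothetical reconstruction, not the project's code; the instance URL, the notifyImageProcessed mapping, and the JSON payload are made-up placeholders, while the /services/apexrest/ path and the Bearer session-ID header are standard Salesforce behavior.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ApexRestCallSketch {
    public static void main(String[] args) throws Exception {
        // The session ID would come from a prior login (SOAP login or OAuth)
        String sessionId = System.getenv("SFDC_SESSION_ID");
        String instanceUrl = "https://yourInstance.my.salesforce.com"; // placeholder

        // Apex REST classes are exposed under /services/apexrest/<urlMapping>
        URL url = new URL(instanceUrl + "/services/apexrest/notifyImageProcessed");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + sessionId);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String body = "{\"imageRecordId\":\"a001x0000000001\"}"; // hypothetical payload
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```

In Mule 4 the Salesforce connector handles the session for you, which is why the flow above leans on the connector instead.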

  4. Now, in the last part, as a user you can use your device to see the labels created per record. But I also created a feature in this process API: a page served from MuleSoft that shows the information we saved!

How did I do it? In the same process API I placed a new configuration file named "portal", with a flow that contains a "Load Static Resource" operation serving a page stored in a folder named "web" under src/main/resources.

(Screenshot: the "portal" configuration file)

The main page contains a script that uses jQuery to show the information about the images and tweets.

This is how the rendered page looks:

(Screenshot: the rendered page)

Basically, the page shows you the labels we got from AWS, the picture we took, and the tweets related to the generated labels.

You can get this code here: https://github.com/emoran/mule-aws-recognition-process-api.git

Watch the video:
http://www.youtube.com/watch?v=GWKP4U0o2Ng

Top comments (1)

Royston Lobo:
Thank you for your submission, Edgar!