<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Edgar Moran</title>
    <description>The latest articles on DEV Community by Edgar Moran (@yucelmoran).</description>
    <link>https://dev.to/yucelmoran</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F496774%2Fb1625465-481a-4dc9-9225-b08f88b6d9c9.jpg</url>
      <title>DEV Community: Edgar Moran</title>
      <link>https://dev.to/yucelmoran</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yucelmoran"/>
    <language>en</language>
    <item>
      <title>Faster Mule deployments using Gitlab cache</title>
      <dc:creator>Edgar Moran</dc:creator>
      <pubDate>Thu, 31 Aug 2023 22:21:17 +0000</pubDate>
      <link>https://dev.to/yucelmoran/faster-mule-deployments-using-gitlab-cache-3pek</link>
      <guid>https://dev.to/yucelmoran/faster-mule-deployments-using-gitlab-cache-3pek</guid>
      <description>&lt;p&gt;Today I was curious about how we can make our deployments faster using CI processes, we have multiple platforms to handle the CI deployments for example GitHub, GitLab, Bitbucket CircleCI, TravisCI etc.. In this case I’m using GitLab.&lt;/p&gt;

&lt;p&gt;I created one application in MuleSoft with one simple scheduler and a logger; really, I just want to test the deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1bov4xq18sy4uzzh2aw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1bov4xq18sy4uzzh2aw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only couple of important items to consider are adding the .gitlab-ci.yml file and setting up your build tag in your pom.xml file. Let’s see what our .gitlab-ci.yml looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: maven:3.6.1-jdk-8

variables: 
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .m2/repository

stages:
  - build 
  - test
  - deploy-staging
  - deploy-production

build:
  stage: build
  script:
    - mvn  -U -V -e -B clean -DskipTests package
  only:
    - merge_requests

test:
  stage: test
  script:
    - mvn -U clean test
  only:
    - merge_requests
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml  

deploy-staging:
  stage: deploy-staging
  script:
    - mvn -U -V -e -B clean -DskipTests deploy -DmuleDeploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "staging"'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, I’m specifying these lines; with this, I tell GitLab to cache the dependencies in the .m2 repository, and the key allows the dependencies to persist per branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables: 
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .m2/repository
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
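
&lt;p&gt;The cache key uses ${CI_COMMIT_REF_SLUG}, so each branch keeps its own copy of the .m2 repository. As a rough sketch (not GitLab's actual implementation, just an approximation of the documented slug rules), the slug is derived from the branch name like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import re

def ref_slug(ref):
    # Rough approximation of GitLab's CI_COMMIT_REF_SLUG:
    # lowercase, non-alphanumerics become "-", trimmed, max 63 chars
    slug = re.sub(r"[^a-z0-9]", "-", ref.lower())
    return slug[:63].strip("-")

# e.g. ref_slug("feature/My_Branch") gives "feature-my-branch"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;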



&lt;p&gt;In my pom.xml this is the setup I have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"&amp;gt;
 &amp;lt;modelVersion&amp;gt;4.0.0&amp;lt;/modelVersion&amp;gt;

 &amp;lt;groupId&amp;gt;com.mycompany&amp;lt;/groupId&amp;gt;
 &amp;lt;artifactId&amp;gt;gitlab-cache-deployment&amp;lt;/artifactId&amp;gt;
 &amp;lt;version&amp;gt;1.0.0&amp;lt;/version&amp;gt;
 &amp;lt;packaging&amp;gt;mule-application&amp;lt;/packaging&amp;gt;

 &amp;lt;name&amp;gt;gitlab-cache-deployment&amp;lt;/name&amp;gt;

 &amp;lt;properties&amp;gt;
  &amp;lt;project.build.sourceEncoding&amp;gt;UTF-8&amp;lt;/project.build.sourceEncoding&amp;gt;
  &amp;lt;project.reporting.outputEncoding&amp;gt;UTF-8&amp;lt;/project.reporting.outputEncoding&amp;gt;

  &amp;lt;app.runtime&amp;gt;4.4.0&amp;lt;/app.runtime&amp;gt;
  &amp;lt;mule.maven.plugin.version&amp;gt;3.8.2&amp;lt;/mule.maven.plugin.version&amp;gt;
 &amp;lt;/properties&amp;gt;

 &amp;lt;build&amp;gt; 
  &amp;lt;plugins&amp;gt;
   &amp;lt;plugin&amp;gt;
    &amp;lt;groupId&amp;gt;org.apache.maven.plugins&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;maven-clean-plugin&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;3.2.0&amp;lt;/version&amp;gt;
   &amp;lt;/plugin&amp;gt;
   &amp;lt;plugin&amp;gt;
    &amp;lt;groupId&amp;gt;org.mule.tools.maven&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;mule-maven-plugin&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;${mule.maven.plugin.version}&amp;lt;/version&amp;gt;
    &amp;lt;extensions&amp;gt;true&amp;lt;/extensions&amp;gt;
    &amp;lt;configuration&amp;gt;
     &amp;lt;classifier&amp;gt;mule-application&amp;lt;/classifier&amp;gt;
     &amp;lt;cloudHubDeployment&amp;gt;
      &amp;lt;uri&amp;gt;${CLOUDHUB_URI}&amp;lt;/uri&amp;gt;
      &amp;lt;muleVersion&amp;gt;4.4.0&amp;lt;/muleVersion&amp;gt;
      &amp;lt;connectedAppClientId&amp;gt;${CLIENT_ID}&amp;lt;/connectedAppClientId&amp;gt;
      &amp;lt;connectedAppClientSecret&amp;gt;${CLIENT_SECRET}&amp;lt;/connectedAppClientSecret&amp;gt;
      &amp;lt;connectedAppGrantType&amp;gt;client_credentials&amp;lt;/connectedAppGrantType&amp;gt;
      &amp;lt;environment&amp;gt;Sandbox&amp;lt;/environment&amp;gt;
      &amp;lt;applicationName&amp;gt;gitlab-cache-deployment&amp;lt;/applicationName&amp;gt;
      &amp;lt;workerType&amp;gt;Micro&amp;lt;/workerType&amp;gt;
      &amp;lt;objectStoreV2&amp;gt;true&amp;lt;/objectStoreV2&amp;gt;
     &amp;lt;/cloudHubDeployment&amp;gt;
    &amp;lt;/configuration&amp;gt;
   &amp;lt;/plugin&amp;gt;
  &amp;lt;/plugins&amp;gt;
 &amp;lt;/build&amp;gt;

 &amp;lt;dependencies&amp;gt;
  &amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;org.mule.connectors&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;mule-http-connector&amp;lt;/artifactId&amp;gt;
   &amp;lt;version&amp;gt;1.7.3&amp;lt;/version&amp;gt;
   &amp;lt;classifier&amp;gt;mule-plugin&amp;lt;/classifier&amp;gt;
  &amp;lt;/dependency&amp;gt;
  &amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;org.mule.connectors&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;mule-sockets-connector&amp;lt;/artifactId&amp;gt;
   &amp;lt;version&amp;gt;1.2.3&amp;lt;/version&amp;gt;
   &amp;lt;classifier&amp;gt;mule-plugin&amp;lt;/classifier&amp;gt;
  &amp;lt;/dependency&amp;gt;
 &amp;lt;/dependencies&amp;gt;

 &amp;lt;repositories&amp;gt;
  &amp;lt;repository&amp;gt;
   &amp;lt;id&amp;gt;anypoint-exchange-v3&amp;lt;/id&amp;gt;
   &amp;lt;name&amp;gt;Anypoint Exchange&amp;lt;/name&amp;gt;
   &amp;lt;url&amp;gt;https://maven.anypoint.mulesoft.com/api/v3/maven&amp;lt;/url&amp;gt;
   &amp;lt;layout&amp;gt;default&amp;lt;/layout&amp;gt;
  &amp;lt;/repository&amp;gt;
  &amp;lt;repository&amp;gt;
   &amp;lt;id&amp;gt;mulesoft-releases&amp;lt;/id&amp;gt;
   &amp;lt;name&amp;gt;MuleSoft Releases Repository&amp;lt;/name&amp;gt;
   &amp;lt;url&amp;gt;https://repository.mulesoft.org/releases/&amp;lt;/url&amp;gt;
   &amp;lt;layout&amp;gt;default&amp;lt;/layout&amp;gt;
  &amp;lt;/repository&amp;gt;
 &amp;lt;/repositories&amp;gt;

 &amp;lt;pluginRepositories&amp;gt;
  &amp;lt;pluginRepository&amp;gt;
   &amp;lt;id&amp;gt;mulesoft-releases&amp;lt;/id&amp;gt;
   &amp;lt;name&amp;gt;MuleSoft Releases Repository&amp;lt;/name&amp;gt;
   &amp;lt;layout&amp;gt;default&amp;lt;/layout&amp;gt;
   &amp;lt;url&amp;gt;https://repository.mulesoft.org/releases/&amp;lt;/url&amp;gt;
   &amp;lt;snapshots&amp;gt;
    &amp;lt;enabled&amp;gt;false&amp;lt;/enabled&amp;gt;
   &amp;lt;/snapshots&amp;gt;
  &amp;lt;/pluginRepository&amp;gt;
 &amp;lt;/pluginRepositories&amp;gt;

&amp;lt;/project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I created a couple of repositories, one for an application using the cache and a second one NOT using the cache in the CI YAML file; this way we can validate and compare performance between both apps. In both repos I created three branches (master, staging, mydevbranch). In order to verify performance, our pipeline has three stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;build&lt;/strong&gt;: only builds the project and verifies it is successful&lt;br&gt;
&lt;strong&gt;test&lt;/strong&gt;: runs the tests (MUnit) in the pipeline&lt;br&gt;
&lt;strong&gt;deploy&lt;/strong&gt;: after a PR is approved and merged from the dev branch to staging, deploys to Anypoint Platform&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The comparison&lt;/strong&gt;&lt;br&gt;
In the end, using the cache shaves minutes off a build, test or deploy run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No cache&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt;: took 1 minute, 37 seconds&lt;br&gt;
&lt;strong&gt;Test&lt;/strong&gt;: took 1 minute, 37 seconds&lt;br&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;: took 3 minutes, 42 seconds&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu1qdqtv0urbkgqj1vsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu1qdqtv0urbkgqj1vsa.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With cache&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt;: took 32 seconds&lt;br&gt;
&lt;strong&gt;Test&lt;/strong&gt;: took 38 seconds&lt;br&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;: took 3 minutes, 42 seconds&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfzeg73pr8d8sbk46j28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfzeg73pr8d8sbk46j28.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, there’s an improvement on test and build, while deployment seems to take the same time; in the end, a few minutes gained is still a win.&lt;/p&gt;
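
&lt;p&gt;Taking the build and test numbers above (1 minute 37 seconds uncached versus 32 and 38 seconds cached), the relative saving is easy to compute; a tiny Python snippet, just for the arithmetic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def pct_saved(before_s, after_s):
    # Percentage of wall-clock time saved by the cached run
    return round((before_s - after_s) / before_s * 100)

build_saved = pct_saved(97, 32)  # 67, i.e. the cached build is about 67% faster
test_saved = pct_saved(97, 38)   # 61
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;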

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxd69psykfmx2god5pzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxd69psykfmx2god5pzq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfy5qyvsqk7dorwwotp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfy5qyvsqk7dorwwotp0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will keep investigating whether there are better ways to reduce deployment time; sometimes the time is related to the network, and to availability on the platform as well.&lt;/p&gt;

&lt;p&gt;Hope this helps you in your deployments!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Basic Google Big Query Operations with a Salesforce sync demo in MULE 4</title>
      <dc:creator>Edgar Moran</dc:creator>
      <pubDate>Thu, 17 Feb 2022 20:21:56 +0000</pubDate>
      <link>https://dev.to/yucelmoran/basic-google-big-query-operations-with-a-salesforce-sync-demo-mule-4-52ak</link>
      <guid>https://dev.to/yucelmoran/basic-google-big-query-operations-with-a-salesforce-sync-demo-mule-4-52ak</guid>
      <description>&lt;p&gt;If we think about data storage the first think it comes to our mind is a regular database, this can be any of the most popular ones like Mysql, SQL server, Postgres, Vertica etc, but I noticed no too many have interacted to one of the services Google provides with the same purpose Google Big Query. And maybe it is because of the &lt;a href="https://cloud.google.com/bigquery/pricing?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=na-US-all-en-dr-bkws-all-all-trial-e-dr-1011347&amp;amp;utm_content=text-ad-none-any-DEV_c-CRE_573148306951-ADGP_Desk%20%7C%20BKWS%20-%20EXA%20%7C%20Txt%20~%20Data%20Analytics%20~%20BigQuery_Pricing%20Google%20Google-KWID_43700068582852990-kwd-166600832170&amp;amp;utm_term=KW_google%20bigquery%20pricing-ST_google%20bigquery%20pricing&amp;amp;gclsrc=aw.ds&amp;amp;gclid=Cj0KCQiAjJOQBhCkARIsAEKMtO3fCohKU2ihxQrQ21u9XixqWhXy9w7QNu8MqGaVp58zn59WTN0ekA4aAtAeEALw_wcB"&gt;pricing&lt;/a&gt;, but in the end many companies are moving to cloud services and this service seems to be a great fit for them.&lt;/p&gt;

&lt;p&gt;In this post I would like to demonstrate, in a few steps, how we can build a sync job that describes a Salesforce instance and uses a few objects to create a full schema of those objects (tables) in a Google BigQuery dataset. Then, with the schema created, we should be able to push some data into BigQuery from Salesforce and see it in our Google Cloud Console project.&lt;/p&gt;

&lt;p&gt;In order to connect to Salesforce and Google Big Query, there are a few prerequisites we need:&lt;/p&gt;

&lt;p&gt;Salesforce:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  If you don't have a Salesforce instance, you can create a developer one &lt;a href="https://developer.salesforce.com/signup"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  From the Salesforce side you will need a username, password and security token (you can follow &lt;a href="https://help.salesforce.com/s/articleView?id=sf.user_security_token.htm&amp;amp;type=5"&gt;this process&lt;/a&gt; to get it).&lt;/li&gt;
&lt;li&gt;  A developer instance contains a few records; loading some more data will give the sync process more information to move over.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GCP (Google Cloud Platform)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You can sign up &lt;a href="https://console.cloud.google.com/freetrial/signup/tos?_ga=2.68094097.1640278748.1644510554-1516430238.1644510554&amp;amp;_gac=1.218077796.1644510554.Cj0KCQiAjJOQBhCkARIsAEKMtO0NyXkbcz86jMGZOta5V7HYUNkiDHCDR_6OSc4ioZFtAHlp0tw8_JUaAnI7EALw_wcB"&gt;here&lt;/a&gt; for free. Google gives you $300 for 90 days to test the product (similar to Azure). Also if you already have a google account you can use it for this.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  CREATING A NEW PROJECT IN GCP AND SETTING UP OUR SERVICE ACCOUNT KEY.
&lt;/h1&gt;

&lt;p&gt;Once you sign up for your account on GCP, you should be able to click on the New Project option and enter a project name; in this example I chose mulesoft.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n9FinK3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AR9cDYzV2GidJIUb-" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n9FinK3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AR9cDYzV2GidJIUb-" alt="1" width="880" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fTcIOgh2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Ax1q_Mk2Ot1vKv2vd" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fTcIOgh2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Ax1q_Mk2Ot1vKv2vd" alt="2" width="880" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a project is created, we can go to the menu on the left and select IAM &amp;amp; Admin &amp;gt; Service Accounts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7qTlinnw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AuS-P6MIGO6oIw1wL" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7qTlinnw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AuS-P6MIGO6oIw1wL" alt="3" width="880" height="1326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we should be able to create our service account&lt;/p&gt;

&lt;p&gt;"A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs. Typically, service accounts are used in scenarios such as: Running workloads on virtual machines"&lt;/p&gt;

&lt;p&gt;At the top of the page you should see the option to create it; you just need to specify a name and click on Create and Continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--523B2Afq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Aem_KPwQsEYKRWxgl" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--523B2Afq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Aem_KPwQsEYKRWxgl" alt="4" width="880" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to set the permissions, so we need to select BigQuery Admin from the roles combo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i6Cx3CQD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2ATUGCNyhClV0vIBlk" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i6Cx3CQD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2ATUGCNyhClV0vIBlk" alt="5" width="880" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, we should be able to select the Manage Keys option from the three-dot menu on the right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7sdjvdSC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1244/0%2Atf1JBbFVxaEzKD6D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7sdjvdSC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1244/0%2Atf1JBbFVxaEzKD6D" alt="6" width="622" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we can create a new key; in this case, JSON should be enough. The key will be downloaded automatically to your computer (please keep this JSON key somewhere you can find it later).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YMdcQMda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1144/0%2AqV-jZHfhqDjjrEeh" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YMdcQMda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1144/0%2AqV-jZHfhqDjjrEeh" alt="7" width="572" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  DATASET IN BIG QUERY
&lt;/h1&gt;

&lt;p&gt;Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset, so you need to create at least one dataset before loading data into BigQuery.&lt;/p&gt;

&lt;p&gt;From the left menu we can search for BigQuery and click on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3CMOoU3V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1084/0%2Aco-42V2xq7tRUCQz" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3CMOoU3V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1084/0%2Aco-42V2xq7tRUCQz" alt="8" width="542" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That will take us to the BigQuery console; now we can click on the three-dot menu and select the Create dataset option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m9DzLWeV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AJ52tiRCJEcidraxt" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m9DzLWeV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AJ52tiRCJEcidraxt" alt="9" width="880" height="752"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we just need to set the name as salesforce and click on "Create Dataset".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FbKgqaW5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AUnK4bWtQKddU_f7J" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FbKgqaW5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AUnK4bWtQKddU_f7J" alt="10" width="880" height="1009"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  SETTING UP OUR MULE APPLICATION.
&lt;/h1&gt;

&lt;p&gt;Since this is a sync job, we don't need an API specification, but this approach can totally fit scenarios where another application needs to consume specific endpoints / operations.&lt;/p&gt;

&lt;p&gt;Let's then open our Anypoint Studio app (in my case I'm using a Mac) and use the default template. For this we are going to create six flows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Sync. This flow just triggers the process.&lt;/li&gt;
&lt;li&gt; DescribeInstance. This flow is in charge of calling the describe operation using the Salesforce connector and providing all object information from the Salesforce instance; it also has a loop that lets us process the job for the objects we are going to use.&lt;/li&gt;
&lt;li&gt; DescribeIndividualSalesforceObject. Describes a specific Salesforce object; it basically captures the fields and field types (STRING, EMAIL, ID, REFERENCE, etc.) and is in charge of creating a payload that BigQuery will recognize, so the table can be created in GBQ.&lt;/li&gt;
&lt;li&gt; BigQueryCreateTable. This flow is only in charge of creating the table in BigQuery based on the Salesforce object name and the fields.&lt;/li&gt;
&lt;li&gt; QuerySalesforceObject. This flow dynamically queries the Salesforce object and pulls the data (&lt;em&gt;for this we are limiting the output to 100 records, but at a bigger scale it should of course be done as a batch process&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt; InsertDataIntoBigQuery. This flow only pushes the data over into BigQuery.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let's grab the JSON key generated by Google and copy the file under the src/main/resources folder. The key will let us authenticate against our project and execute the operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t0RvNYjL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/872/0%2A7kE2a6bTDkCIQTIG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t0RvNYjL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/872/0%2A7kE2a6bTDkCIQTIG" alt="11" width="436" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  IMPORT THE GOOGLE BIG QUERY CONNECTOR.
&lt;/h1&gt;

&lt;p&gt;From Exchange we can search for "Big Query" and we should see the connector listed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_by9eB2X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A78uaKN-gFwvJ0sNH" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_by9eB2X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A78uaKN-gFwvJ0sNH" alt="12" width="880" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we can just use the "Add to project" option, and we should see the operations in the palette.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L0zxb-c9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1008/0%2AHGiDcGcpHF81nJZk" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L0zxb-c9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1008/0%2AHGiDcGcpHF81nJZk" alt="13" width="504" height="804"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  SYNC FLOW
&lt;/h1&gt;

&lt;p&gt;As I mentioned, this flow is only in charge of triggering the whole application, so we only need one scheduler component and a flow reference to the DescribeInstance flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5RjRMuX3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/984/0%2AORVkakau2T0QX3cn" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5RjRMuX3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/984/0%2AORVkakau2T0QX3cn" alt="14" width="492" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  DESCRIBEINSTANCE
&lt;/h1&gt;

&lt;p&gt;This flow will describe the whole Salesforce instance using the &lt;a href="https://developer.salesforce.com/docs/atlas.en-us.208.0.api_rest.meta/api_rest/resources_describeGlobal.htm"&gt;Describe Global&lt;/a&gt; operation. The next step is to use a DataWeave transform to filter down to only the objects we are interested in; in this case I'm only pulling three: Accounts, Contacts and a custom object called Project__c. I left a few more attributes in the transformation so that we only pull the objects we are able to query.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%dw 2.0
import try, fail from dw::Runtime
output application/java

fun isDate(value: Any): Boolean = try(() -&amp;gt; value as Date).success
fun getDate(value: Any): Date | Null | Any = ( if ( isDate(value) ) value as Date as String else value )
---
(payload map (item, index) -&amp;gt; {
    (item mapObject ((value, key, index) -&amp;gt; {
        (key): (getDate(value))
    }))
})
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/emoran/055a8b509044c899d9fdcddbfe66ff41/raw/6d80d9ca2fc6371ab575a45a41a0e8909cf6f656/mapSalesforceReocrds"&gt;view raw&lt;/a&gt;&lt;a href="https://gist.github.com/emoran/055a8b509044c899d9fdcddbfe66ff41#file-mapsalesforcereocrds"&gt;mapSalesforceReocrds&lt;/a&gt; hosted with ❤ by &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s3qzuoYi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AnGmyj4akCqu_S5xT" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s3qzuoYi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AnGmyj4akCqu_S5xT" alt="15" width="880" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we need to loop over these three objects; a flow reference in this sample calls the other flows so the process can continue.&lt;/p&gt;

&lt;h1&gt;
  
  
  DESCRIBEINDIVIDUALSALESFORCEOBJECT
&lt;/h1&gt;

&lt;p&gt;The flow basically takes the name of the Salesforce object and describes it; the connector only asks for the object name. Then we have a pretty interesting DataWeave transformation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%dw 2.0
input payload application/java
output application/java

fun validateField(field) =
    if ( (field == "REFERENCE") or (field == "ID") or (field == "PICKLIST") or (field == "TEXTAREA") or (field == "ADDRESS") or (field == "EMAIL") or (field == "PHONE") or (field == "URL") ) "STRING"
    else if ( (field == "DOUBLE") or (field == "CURRENCY") ) "FLOAT"
    else if ( (field == "INT") ) "INTEGER"
    else field
---
(payload.fields filter ($."type" != "LOCATION") map {
    fieldName: $.name,
    fieldType: validateField($."type")
})
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/emoran/170bde67c68a8dd5a18c9c3daee831e8/raw/07b6e97ba6cf1f9273fd31f16e40e83326f170a0/Salesforce%20to%20Bigquery%20Fields%20Schema"&gt;view raw&lt;/a&gt;&lt;a href="https://gist.github.com/emoran/170bde67c68a8dd5a18c9c3daee831e8#file-salesforce-to-bigquery-fields-schema"&gt;Salesforce to Bigquery Fields Schema&lt;/a&gt; hosted with ❤ by &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Salesforce data types are not 100% the same as BigQuery's, so we need a little trick to create the BigQuery schema seamlessly from Salesforce. In this case I've created a small function that converts some field types (ID, REFERENCE, TEXTAREA, PHONE, ADDRESS, PICKLIST, EMAIL) to STRING, since those references or values are really nothing more than text; for DOUBLE and CURRENCY I'm using FLOAT, and finally INT fields are changed to INTEGER.&lt;/p&gt;

&lt;p&gt;Finally, because location fields are a bit tricky and we can't do much with them through the API, I'm removing all LOCATION fields.&lt;/p&gt;

&lt;p&gt;The output of this is the actual schema we will use to create the table in Google BigQuery.&lt;/p&gt;
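
&lt;p&gt;To make the mapping concrete, here is the same type-conversion logic sketched in plain Python (illustrative only, not part of the Mule app; the field dictionaries are a simplified stand-in for the connector's describe output):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STRING_LIKE = {"REFERENCE", "ID", "PICKLIST", "TEXTAREA",
               "ADDRESS", "EMAIL", "PHONE", "URL"}

def validate_field(field_type):
    # Same conversion as the DataWeave validateField function
    if field_type in STRING_LIKE:
        return "STRING"
    if field_type in ("DOUBLE", "CURRENCY"):
        return "FLOAT"
    if field_type == "INT":
        return "INTEGER"
    return field_type

def to_schema(fields):
    # fields: list of dicts like {"name": ..., "type": ...}; LOCATION is dropped
    return [{"fieldName": f["name"], "fieldType": validate_field(f["type"])}
            for f in fields if f["type"] != "LOCATION"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;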

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N3j12KIX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Acb032ELY2g1anSgJ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N3j12KIX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Acb032ELY2g1anSgJ" alt="16" width="880" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  BIGQUERYCREATETABLE
&lt;/h1&gt;

&lt;p&gt;This flow allows us to create the table in BigQuery; we only need to specify Table, Dataset and Table Fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dE0t9zw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AxG29lQviGNNLWXYW" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dE0t9zw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AxG29lQviGNNLWXYW" alt="17" width="880" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BmqCccOR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/960/0%2AFrXheP5MxHkdQ3oJ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BmqCccOR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/960/0%2AFrXheP5MxHkdQ3oJ" alt="18" width="480" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  QUERYSALESFORCEOBJECT
&lt;/h1&gt;

&lt;p&gt;This flow queries the Object in Salesforce and then maps the data dynamically to prepare the payload for BigQuery.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m6K3UxAu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A9NH9k3AVkfWrGg-4" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m6K3UxAu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A9NH9k3AVkfWrGg-4" alt="19" width="880" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The query is built from a variable "salesforceFields", the same field list we collected when we described the Object, using this script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(payload.fields filter ($."type" != "LOCATION") map { fieldName : $.name }).fieldName joinBy ","
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/emoran/5d87cb4d2477e6b422687066b9150f2c/raw/10bc9ba926737f3c22538c1dd8a5505085ce1a1f/salesforceFields"&gt;view raw&lt;/a&gt;&lt;a href="https://gist.github.com/emoran/5d87cb4d2477e6b422687066b9150f2c#file-salesforcefields"&gt;salesforceFields&lt;/a&gt; hosted with ❤ by &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally I'm limiting the result to only 100 records.&lt;/p&gt;

&lt;p&gt;Next step is to map the Salesforce result data and map it dynamically using this script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%dw 2.0
import try, fail from dw::Runtime
output application/java

fun isDate(value: Any): Boolean = try(() -&gt; value as Date).success

fun getDate(value: Any): Date | Null | Any = (
    if ( isDate(value) ) value as Date as String else value
)
---
(payload map (item, index) -&gt; {
    (item mapObject ((value, key, index) -&gt; {
        (key): (getDate(value))
    }))
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/emoran/055a8b509044c899d9fdcddbfe66ff41/raw/6d80d9ca2fc6371ab575a45a41a0e8909cf6f656/mapSalesforceReocrds"&gt;view raw&lt;/a&gt;&lt;a href="https://gist.github.com/emoran/055a8b509044c899d9fdcddbfe66ff41#file-mapsalesforcereocrds"&gt;mapSalesforceReocrds&lt;/a&gt; hosted with ❤ by &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks so much to Alexandra Martinez for the insights on the utilities for DW 2.0! (&lt;a href="https://github.com/alexandramartinez/DataWeave-scripts/blob/main/utilities/utilities.dwl"&gt;https://github.com/alexandramartinez/DataWeave-scripts/blob/main/utilities/utilities.dwl&lt;/a&gt; )&lt;/p&gt;

&lt;p&gt;This last script maps each record, keeping the key as the field name, but replacing the value with a Date when the String is actually a date or datetime. I consider this the best script in this app.&lt;/p&gt;
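&lt;p&gt;A rough Python analogue of the isDate/getDate trick (hypothetical names; ISO-8601 parsing stands in for DataWeave's date coercion):&lt;/p&gt;

```python
from datetime import datetime

# Python sketch of the DataWeave isDate/getDate pattern above: any value
# that parses as an ISO date/datetime is normalized to a string, everything
# else passes through untouched. Function names are illustrative.
def is_date(value):
    try:
        datetime.fromisoformat(str(value))
        return True
    except ValueError:
        return False

def get_date(value):
    return str(value) if is_date(value) else value

def map_records(records):
    """Apply the date normalization to every key of every record."""
    return [{key: get_date(value) for key, value in record.items()}
            for record in records]
```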

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---UHkMz69--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A1SnsNjRyLH8VTDuM" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---UHkMz69--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A1SnsNjRyLH8VTDuM" alt="" width="880" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  INSERTDATAINTOBIGQUERY
&lt;/h1&gt;

&lt;p&gt;This flow just inserts the data we prepared; we only need to specify the table id, dataset id and the Row Data.&lt;/p&gt;
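&lt;p&gt;Under the hood, BigQuery streaming inserts take a JSON body like the one sketched below (a minimal illustration of the tabledata.insertAll request shape, not the connector's internals; the helper name is hypothetical):&lt;/p&gt;

```python
# Minimal sketch of the BigQuery tabledata.insertAll request body built
# from the prepared row data.
def build_insert_all_body(rows):
    return {
        "kind": "bigquery#tableDataInsertAllRequest",
        "rows": [{"json": row} for row in rows],
    }
```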

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Z4C_h1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A-GRExUX-MkjriLG4" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Z4C_h1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A-GRExUX-MkjriLG4" alt="20" width="880" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dyLGHko_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Ar0z9PyW37QG1Q0Bt" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dyLGHko_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Ar0z9PyW37QG1Q0Bt" alt="21" width="880" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  SETTING UP OUR MULE APPLICATION.
&lt;/h1&gt;

&lt;p&gt;Now we should be able to run our application and see the new tables and the data in Google BigQuery.&lt;/p&gt;

&lt;p&gt;On GCP I can see that the tables I selected were created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gXA1ffrS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1336/0%2AWlTHj49AZqCjVyzn" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gXA1ffrS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1336/0%2AWlTHj49AZqCjVyzn" alt="22" width="668" height="782"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if we open any of them, we can look at the schema to verify all fields are there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c1Xa9QJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A3Jl5IAwmGF0pvd-G" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c1Xa9QJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A3Jl5IAwmGF0pvd-G" alt="23" width="880" height="1582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we should be able to query the table in the console or click the Preview option to check that the data is there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RHEyiLst--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Aej9ZJ1DKkWkTSy4_" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RHEyiLst--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2Aej9ZJ1DKkWkTSy4_" alt="24" width="880" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I think this is a fairly common request in the integration space, and many tweaks can be added if we are planning big migrations, or jobs that will eventually require tables to be created automatically from Salesforce in GCP.&lt;/p&gt;

&lt;p&gt;If you'd like to try it, I created this &lt;a href="https://github.com/emoran/sfdc-to-bigquery"&gt;GitHub&lt;/a&gt; repository. I hope this was useful, and I'm open to hearing about any enhancement or scenario.&lt;/p&gt;

</description>
      <category>mule4</category>
      <category>google</category>
      <category>bigquery</category>
      <category>mulesoft</category>
    </item>
    <item>
      <title>Image recognition using Mulesoft and Salesforce</title>
      <dc:creator>Edgar Moran</dc:creator>
      <pubDate>Mon, 16 Nov 2020 22:01:48 +0000</pubDate>
      <link>https://dev.to/yucelmoran/image-recognition-using-mulesoft-and-salesforce-4fe7</link>
      <guid>https://dev.to/yucelmoran/image-recognition-using-mulesoft-and-salesforce-4fe7</guid>
<description>&lt;p&gt;MuleSoft and Salesforce seem to be the right combination of technologies for delivering robust, complex projects in a short time. I would like to demonstrate how we can use both of them to recognize images taken on a mobile device, bringing back richer information and interesting data for a near-real scenario.&lt;/p&gt;

&lt;p&gt;So, how is this done? Well, here are the components I'm using for this project (I will go deep on each one):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Salesforce developer Org&lt;/li&gt;
&lt;li&gt;Anypoint Platform (Sandbox) account&lt;/li&gt;
&lt;li&gt;Mulesoft mule-aws-recognition-system-api.&lt;/li&gt;
&lt;li&gt;Mulesoft mule-aws-recognition-process-api&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Salesforce developer Org
&lt;/h2&gt;

&lt;p&gt;I got a developer account for Salesforce from their developer site (developerforce.com). Salesforce in this case allows me to have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom Objects (tables)&lt;/li&gt;
&lt;li&gt;Custom Fields on each of those objects created&lt;/li&gt;
&lt;li&gt;A way to expose a mobile application (previously known as Salesforce 1) available to install on iOS or Android devices&lt;/li&gt;
&lt;li&gt;Visualforce pages, allowing us to customize what we want to show in a mobile app or browser&lt;/li&gt;
&lt;li&gt;Apex classes, custom Apex code to handle data from a page or to expose REST services from a custom Apex (Java-style) definition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So here we have the design:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard Objects (ContentVersion and ContentDocumentLink). These store the actual binary file in Salesforce&lt;/li&gt;
&lt;li&gt;Custom Object (Hackathon Image). Gives us a record to link the photo taken&lt;/li&gt;
&lt;li&gt;Custom Object (Image Label). Stores the image's labels from AWS and how accurately each label matches the image.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here comes the fun part...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visualforce page, showing a UI to take the picture:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F17v83buqsp4tb1isij7v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F17v83buqsp4tb1isij7v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Apex controller. Gets all the information from the picture and creates the ContentVersion and ContentDocumentLink records related to the Hackathon Image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apex REST controller. Exposes the mentioned endpoint, allowing us to trigger a push notification on the mobile device.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Where can I get this code? &lt;a href="https://github.com/emoran/sfdc-mulesoft-hackathon-2020.git" rel="noopener noreferrer"&gt;https://github.com/emoran/sfdc-mulesoft-hackathon-2020.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now a basic flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvffdari364hl066tdh3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvffdari364hl066tdh3r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Mulesoft mule-aws-recognition-system-api.
&lt;/h2&gt;

&lt;p&gt;Initially this system API was meant for AWS only, but because of time and resources I also included here one of the other pieces I needed to complete this exercise.&lt;/p&gt;

&lt;p&gt;As I mentioned, this system API takes a Base64 image and sends it to the Amazon Rekognition API; the result of this call is the list of labels Rekognition generated. &lt;/p&gt;

&lt;p&gt;This same application contains the logic to pull a few tweets using a parameter based on hashtags.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#%RAML 1.0
title: mule-aws-recognition-system-api

/image:
  post:
    body:
      application/json:

    responses:
      200:
        body:
          application/json:

/twitter:
  /tweets:
    get:
      queryParameters:
        q:
          description: "Parameters to filter by hashtag"
      responses:
        200:
          body:
            application/json:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To process the image I used the AWS Java SDK to call the API. My flow looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwqo8o4yx314mklcwoan0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwqo8o4yx314mklcwoan0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, for the tweets we have a different endpoint which receives a GET request and returns all tweets matching the hashtags provided.&lt;/p&gt;

&lt;p&gt;Here's how the flow looks:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk0et7hkipk2y1fk4jo8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fk0et7hkipk2y1fk4jo8a.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, this is just a pretty simple HTTP request to the Twitter API. It's not included in the process API, as we are not using a connector that would abstract the logic of this request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fusrydm9n42zpit7ihdtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fusrydm9n42zpit7ihdtx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvmlzxwjcj6wbbb0muv6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvmlzxwjcj6wbbb0muv6s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can get the code of the system API from here: &lt;a href="https://github.com/emoran/mule-aws-recognition-system-api.git" rel="noopener noreferrer"&gt;https://github.com/emoran/mule-aws-recognition-system-api.git&lt;/a&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Mulesoft mule-aws-recognition-process-api
&lt;/h2&gt;

&lt;p&gt;At this point, in the process API, we are doing more things and connecting the dots. I will try to explain step by step what happens.&lt;/p&gt;

&lt;p&gt;The process API has this RAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#%RAML 1.0
title: mule-aws-recognition-process-api


/image:
  post:
    body:
      application/json:
    responses:
      200:
        body:
          application/json:
            example:

/sfdc:
  /images:
    get:
      responses:
        200:
          body:
            application/json:
  /contentVersion:
    get:
      queryParameters:
        id:
          description: imageId
          type: string
/tweets:
  get:
    responses:
      200:
        body:
          application/json:


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After the mobile application saves the picture we took with our device, Salesforce calls the /images endpoint we exposed in MuleSoft, passing three params: imageRecordId (Hackathon Image), contentVersionId (id of the actual file in Salesforce) and contentDocumentLinkId (link document to the picture).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MuleSoft gets the parameters; then, using the Salesforce connector, we query ContentVersion and download the file (the actual image in Base64). We then call the system API, passing the image, and wait for the bunch of labels that AWS recognized. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
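&lt;p&gt;The two steps above can be outlined in Python as follows (all helper names here are hypothetical stubs standing in for the Salesforce connector and the HTTP call, not real APIs):&lt;/p&gt;

```python
# Hypothetical outline of the process API's orchestration. The stubs return
# canned data purely so the sketch is self-contained.
def query_content_version(content_version_id):
    """Stub for the Salesforce connector query that downloads the file body."""
    return "aGVsbG8="  # a Base64 body would really come from Salesforce

def call_system_api(path, body):
    """Stub for the HTTP request to the system API."""
    return {"labels": []}  # Rekognition's labels would really come back here

def process_image(image_record_id, content_version_id, content_document_link_id):
    # 1. Query ContentVersion and download the image in Base64.
    image_b64 = query_content_version(content_version_id)
    # 2. Send it to the system API and wait for the labels AWS recognized.
    return call_system_api("/image", {"image": image_b64})
```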

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpj22olnpa5hbypfuikrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpj22olnpa5hbypfuikrv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj98g5ifeyts44g0z363k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj98g5ifeyts44g0z363k.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once AWS responds, we create the labels (Image Label records) in Salesforce for the uploaded image, and lastly we call the REST service we exposed in Salesforce to notify the person that the image has been processed and now has labels.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2moeoualvk3isknffn9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2moeoualvk3isknffn9t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was really interesting to figure out how to call that REST service from the connector: on older versions of the connector we could grab the session ID and hit the REST endpoint directly. In Mule 4 we can no longer do that, so in this case we use the connector's capabilities to do it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgru79gfgrh7ck43wlgvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgru79gfgrh7ck43wlgvv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now, for the last part: as a user you can use your device to see the labels created per record, but I also created a feature in this process API: a page served by MuleSoft that shows the information we saved!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How did I do it? In the same process API I added a new configuration file named "portal", with a flow that contains a "Load Static Resource" component serving a page stored in a folder named "web" under src/main/resources. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F06q1vlq2rcex1g5vclen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F06q1vlq2rcex1g5vclen.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main page contains a script that uses jQuery to show the information about images and tweets.&lt;/p&gt;

&lt;p&gt;This is how the rendered page looks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffp23s1divqsurqxtdiv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffp23s1divqsurqxtdiv3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Basically the page shows you the labels we got from AWS, the picture we took, and the tweets related to the generated labels.&lt;/p&gt;

&lt;p&gt;You can get this code here: &lt;a href="https://github.com/emoran/mule-aws-recognition-process-api.git" rel="noopener noreferrer"&gt;https://github.com/emoran/mule-aws-recognition-process-api.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Watch the video:&lt;br&gt;
&lt;a href="http://www.youtube.com/watch?v=GWKP4U0o2Ng" rel="noopener noreferrer"&gt;http://www.youtube.com/watch?v=GWKP4U0o2Ng&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.youtube.com/watch?v=GWKP4U0o2Ng" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fimg.youtube.com%2Fvi%2FGWKP4U0o2Ng%2F0.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mulesofthackathon</category>
      <category>salesforce</category>
      <category>aws</category>
      <category>twitter</category>
    </item>
  </channel>
</rss>
