<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Upkar Lidder</title>
    <description>The latest articles on DEV Community by Upkar Lidder (@upkarlidder).</description>
    <link>https://dev.to/upkarlidder</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F122556%2F5c5d8cbe-b4c5-48ef-9c5d-51095ba9bec9.jpg</url>
      <title>DEV Community: Upkar Lidder</title>
      <link>https://dev.to/upkarlidder</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/upkarlidder"/>
    <language>en</language>
    <item>
      <title>Contribute to Call for Code projects as part of Hacktoberfest</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Thu, 14 Oct 2021 20:50:02 +0000</pubDate>
      <link>https://dev.to/ibmdeveloper/contribute-to-call-for-code-projects-as-part-of-hacktoberfest-30nc</link>
      <guid>https://dev.to/ibmdeveloper/contribute-to-call-for-code-projects-as-part-of-hacktoberfest-30nc</guid>
      <description>&lt;h2&gt;
  
  
  Give back through Call for Code open source solutions and hack for racial justice
&lt;/h2&gt;

&lt;p&gt;We're happy to announce that Call for Code is participating in Hacktoberfest! Now in its eighth year, &lt;a href="https://hacktoberfest.digitalocean.com" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt; is a global online festival meant to drive contribution to and involvement in open source projects. &lt;/p&gt;

&lt;p&gt;Hacktoberfest is a great way for you to contribute to our tech for good open source solutions that address various social and humanitarian issues including racial justice (Call for Code for Racial Justice), climate change, natural disasters and COVID-19. Your contributions to Call for Code projects directly benefit organizations doing grassroots work and local communities that need these solutions. There are many ways to contribute, no matter your area of expertise. &lt;/p&gt;

&lt;p&gt;We have many open issues still available for your participation during Hacktoberfest and year-round. Here are a few ways you can contribute to our projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;updating documentation&lt;/li&gt;
&lt;li&gt;flagging bugs in code&lt;/li&gt;
&lt;li&gt;creating GitHub Actions for testing &amp;amp; greeting bots&lt;/li&gt;
&lt;li&gt;evaluating language translations of our websites&lt;/li&gt;
&lt;li&gt;creating templates for issues and pull requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start contributing to Call for Code projects as part of Hacktoberfest:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Watch our &lt;a href="https://www.crowdcast.io/e/hacktoberfest-kickoff" rel="noopener noreferrer"&gt;Call for Code for Racial Justice Hacktoberfest Kickoff replay&lt;/a&gt; for great tips on contributing to issues during Hacktoberfest.&lt;/li&gt;
&lt;li&gt;Check out our new &lt;a href="https://call-for-code-for-racial-justice.github.io/Hacktoberfest/#/" rel="noopener noreferrer"&gt;Call for Code for Racial Justice Hacktoberfest Handbook&lt;/a&gt; and &lt;a href="https://call-for-code.github.io/Hacktoberfest/#/" rel="noopener noreferrer"&gt;Call for Code Hacktoberfest Handbook&lt;/a&gt; to learn all about how to contribute. &lt;/li&gt;
&lt;li&gt;Get up to speed on &lt;a href="https://hacktoberfest.digitalocean.com/" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt; and follow the instructions for general participation.&lt;/li&gt;
&lt;li&gt;Join the &lt;a href="http://callforcode.org/slack" rel="noopener noreferrer"&gt;Call for Code Community&lt;/a&gt; on Slack and drop into our office hours.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="ltag__user ltag__user__id__122556"&gt;
    &lt;a href="/upkarlidder" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F122556%2F5c5d8cbe-b4c5-48ef-9c5d-51095ba9bec9.jpg" alt="upkarlidder image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/upkarlidder"&gt;Upkar Lidder&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/upkarlidder"&gt;Upkar Lidder is a Full Stack Developer and Data Wrangler with a decade of development experience in a variety of roles. Educated in Canada and currently residing in the USA. &amp;lt;3 Python and JS!&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="ltag__user ltag__user__id__724387"&gt;
    &lt;a href="/demiajayi" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F724387%2F97229fd0-37be-4874-aee2-0f2a520ae1fa.png" alt="demiajayi image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/demiajayi"&gt;demilolu&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/demiajayi"&gt;Open Source Community Manager at IBM&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>hacktoberfest</category>
      <category>callforcode</category>
      <category>callforcodeforracialjustice</category>
    </item>
    <item>
      <title>Fedora Diaries - I</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Tue, 21 Sep 2021 00:32:13 +0000</pubDate>
      <link>https://dev.to/upkarlidder/fedora-diaries-i-4ahl</link>
      <guid>https://dev.to/upkarlidder/fedora-diaries-i-4ahl</guid>
      <description>&lt;p&gt;I moved by work development environment to Fedora 34 recently. I want to capture some tools that I found useful on this journey. I use &lt;a href="https://www.gnome.org/" rel="noopener noreferrer"&gt;GNOME 40&lt;/a&gt; flavor. I had used GNOME a few years ago and remember not being very impressed. If you felt the same, try GNOME 40! Least to say that I am very happy with the experience and only use my macbook pro for video/audio production and live streaming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://apps.gnome.org/en-GB/app/org.gnome.Nautilus/" rel="noopener noreferrer"&gt;Nautilus&lt;/a&gt; is the default &lt;strong&gt;file explorer&lt;/strong&gt; or &lt;strong&gt;finder&lt;/strong&gt; of Gnome. It is also called &lt;strong&gt;Files&lt;/strong&gt;. It provides a simple and integrated way of managing your files and browsing your file system. This is what the default UI looks like: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4h7ae1gordu2o64cynt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4h7ae1gordu2o64cynt.png" alt="default GNOME UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have &lt;a href="https://ohmyz.sh/" rel="noopener noreferrer"&gt;ohmyzsh&lt;/a&gt; installed in my shell and find the Git information it shows, like the current branch, very helpful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrloyuxupqsyr0juk80w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrloyuxupqsyr0juk80w.png" alt="Git information in terminal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was looking for a similar experience in Nautilus. Yes, real developers use a UI sometimes :). I use the CLI heavily for my Git tasks, but it would be really nice to see some Git information in Files when I do have it open for whatever reason, like uploading files. I found the following two extensions for this purpose:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/rabbitvcs/rabbitvcs" rel="noopener noreferrer"&gt;RabbitVCS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bilelmoussaoui/nautilus-git" rel="noopener noreferrer"&gt;nautilus-git&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  RabbitVCS
&lt;/h3&gt;

&lt;p&gt;This extension adds a menu to Nautilus and provides multiple UI tools for interacting with Git:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdm4qkgd6l9q56tef205.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdm4qkgd6l9q56tef205.png" alt="RabbitCVS in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  nautilus-git
&lt;/h3&gt;

&lt;p&gt;This extension is even simpler. It provides a way to switch branches from Nautilus itself. I mostly use it to see which branch I am on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrtony5crwhsu84tkmnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrtony5crwhsu84tkmnt.png" alt="nautilus-git in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What are some of your favorite GNOME tools?&lt;/p&gt;

</description>
      <category>fedora</category>
      <category>linux</category>
      <category>gnome</category>
      <category>developer</category>
    </item>
    <item>
      <title>My favorite development environment: VS Code on Mac + Ubuntu SSH + Podman</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Thu, 31 Dec 2020 01:35:01 +0000</pubDate>
      <link>https://dev.to/upkarlidder/my-favorite-development-environment-vs-code-on-mac-ubuntu-ssh-podman-426i</link>
      <guid>https://dev.to/upkarlidder/my-favorite-development-environment-vs-code-on-mac-ubuntu-ssh-podman-426i</guid>
      <description>&lt;p&gt;I work on Open Source projects that use containers and Kubernetes heavily. I used to run docker locally on my Mac and use &lt;a href="https://skaffold.dev/" rel="noopener noreferrer"&gt;Skaffold&lt;/a&gt; for testing/deploying on a cluster. It all worked great, but every now and then Docker would rev up my Mac ✈️. &lt;/p&gt;

&lt;p&gt;I had heard of &lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman&lt;/a&gt;, a daemonless container engine on Linux, and wanted to give it a try. I installed &lt;a href="https://ubuntu.com/download/desktop" rel="noopener noreferrer"&gt;Ubuntu Desktop 20.04&lt;/a&gt; on my 2013 Mac Mini.  Podman is amazing to use and meets all my needs, without making a fuss (literally). &lt;/p&gt;

&lt;p&gt;I even got it working on my 1 GB Raspberry Pi without any problems. Now, I can remote into the Ubuntu box from VS Code itself as you can see here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fitgqn3kvh2v7krtk0ymg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fitgqn3kvh2v7krtk0ymg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You give it your username@IP and you are good to go!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fb6co1zuymijavuiuqkwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fb6co1zuymijavuiuqkwq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I copied my SSH key over to the server so it does not ask for a password anymore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F242hlsewjfq2us2woirj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F242hlsewjfq2us2woirj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can even open remote folders, and it feels like I am working directly on my Mac:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgl7cgbaamlskmbdkj6hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgl7cgbaamlskmbdkj6hx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can happily use Podman from my Mac and it runs very quietly. I love it 💕&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpi1s0rrn9w65mjx1w1gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpi1s0rrn9w65mjx1w1gm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>It’s now even easier to get started with Serverless using the new standalone Apache OpenWhisk!</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Sat, 16 Nov 2019 00:01:23 +0000</pubDate>
      <link>https://dev.to/lidderupk/it-s-now-even-easier-to-get-started-with-serverless-using-the-new-standalone-apache-openwhisk-1da2</link>
      <guid>https://dev.to/lidderupk/it-s-now-even-easier-to-get-started-with-serverless-using-the-new-standalone-apache-openwhisk-1da2</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AdgYWPCF8TBKG9om93tF5Lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AdgYWPCF8TBKG9om93tF5Lg.png"&gt;&lt;/a&gt;Subset of OpenWhisk runtimes&lt;/p&gt;

&lt;p&gt;One of the cool things about OpenWhisk was the ability to start a local copy and get coding very quickly. &lt;a href="https://medium.com/u/604ac4ebfc5f" rel="noopener noreferrer"&gt;James Thomas&lt;/a&gt; has an awesome post on this — &lt;a href="http://jamesthom.as/blog/2018/01/19/starting-openwhisk-in-sixty-seconds/" rel="noopener noreferrer"&gt;Starting OpenWhisk in Sixty Seconds&lt;/a&gt;. This also enabled developers to create and test their Serverless solutions on their machines. The amazing OpenWhisk community has taken this one step further. &lt;a href="https://twitter.com/chetanmeh" rel="noopener noreferrer"&gt;Chetan Mehrotra&lt;/a&gt; recently added code to enable running OpenWhisk as a standalone jar! How cool is that!&lt;/p&gt;

&lt;p&gt;At a high level, they have taken out CouchDB and Kafka and replaced them with an in-memory persistence layer and queueing system. The controller and invoker have also been slimmed down. There is obviously more to it, which is well captured in the following links.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/apache/openwhisk/pull/4516#4216" rel="noopener noreferrer"&gt;https://github.com/apache/openwhisk/pull/4516&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/apache/openwhisk/pull/4516#4216" rel="noopener noreferrer"&gt;https://github.com/apache/openwhisk/pull/4216&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lists.apache.org/thread.html/7425131f1fc11a9fd21e9c049be702837841c47004da03b7f215a0d6@%3Cdev.openwhisk.apache.org%3E" rel="noopener noreferrer"&gt;https://lists.apache.org/thread.html/7425131f1fc11a9fd21e9c049be702837841c47004da03b7f215a0d6@%3Cdev.openwhisk.apache.org%3E&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So how do you get started?&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Build the jar file
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;You can build it yourself by following the steps in the &lt;a href="https://github.com/apache/openwhisk/tree/master/core/standalone" rel="noopener noreferrer"&gt;official repo&lt;/a&gt;. The final jar will be available in the /bin folder. This is the preferred approach, as you always get the latest features.&lt;/li&gt;
&lt;li&gt;Alternatively, if you must insist, you can download the &lt;a href="https://github.com/chetanmeh/incubator-openwhisk/releases/tag/0.14" rel="noopener noreferrer"&gt;pre-built jar file from here&lt;/a&gt;. I am not sure whether this will be kept up to date, so use it at your own risk!&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 2: Run the jar file
&lt;/h4&gt;

&lt;p&gt;Once you have the jar file, you can run it as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -jar bin/openwhisk-standalone.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are numerous other options available, but this will suffice for now. That’s it! You have a Serverless platform running on your local machine. You can run all your favorite wsk commands just as you would with a production-level distributed OpenWhisk installation!&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Create and deploy a simple action
&lt;/h4&gt;

&lt;p&gt;If you have never deployed an action on Apache OpenWhisk or IBM Cloud, you can follow these steps:&lt;/p&gt;

&lt;p&gt;3.1 Download the &lt;a href="https://github.com/apache/openwhisk-cli#where-to-download-the-binary-of-openwhisk-cli" rel="noopener noreferrer"&gt;wsk cli&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;3.2 Create your function&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function main(args) {
 if (args &amp;amp;&amp;amp; args.name) {
 console.log(`hello ${args.name}`);
 return { msg: `hello ${args.name}` };
 } else {
 console.log(`hello world`);
 return { msg: `hello world` };
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
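&lt;p&gt;The handler is simple enough to sanity-check outside OpenWhisk before deploying. Here is a rough Python transliteration of the same logic, purely for local testing; the action you deploy is still the JavaScript version above:&lt;/p&gt;

```python
# Local sanity-check of the hello action's logic (a Python stand-in,
# not the deployed JavaScript action).
def main(args):
    # OpenWhisk passes invocation parameters as a single dict-like object.
    if args and args.get("name"):
        return {"msg": "hello " + args["name"]}
    return {"msg": "hello world"}

print(main({"name": "upkar"}))  # {'msg': 'hello upkar'}
print(main({}))                 # {'msg': 'hello world'}
```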



&lt;p&gt;3.3 Before you deploy your function/action, you need to set the auth property using the wsk cli. This command was provided to you when you started the jar file. Simply copy and paste it into your terminal!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A8S4Pxu1F8r0uZo15IYdSZg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A8S4Pxu1F8r0uZo15IYdSZg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.4 That’s it! Let’s deploy the index.js file as an action:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wsk action create hello index.js
ok: created action hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now invoke this action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wsk action invoke hello -r
{
 "msg": "hello world"
}

# with params
$ wsk action invoke hello -r -p name upkar
{
 "msg": "hello upkar"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you were paying close attention, you would have noticed that when you ran the jar file, it opened a browser with the new OpenWhisk Function Playground! This makes it even easier to write and test your functions if you are new to OpenWhisk. Go ahead and try it out!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AgBsfxZvJqNOPeh9B9ZNleg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AgBsfxZvJqNOPeh9B9ZNleg.png"&gt;&lt;/a&gt;OpenWhisk Function Playground&lt;/p&gt;

&lt;p&gt;How cool was that! If you like this, &lt;a href="https://github.com/apache/openwhisk" rel="noopener noreferrer"&gt;please give the repo some love&lt;/a&gt;! More to come on how this blazingly fast jar can help developers in their Serverless journeys.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>openwhisk</category>
      <category>apacheopenwhisk</category>
      <category>ibm</category>
    </item>
    <item>
      <title>A Gentle Intro to Apache Spark for Developers</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Fri, 23 Aug 2019 16:28:30 +0000</pubDate>
      <link>https://dev.to/upkarlidder/a-gentle-intro-to-apache-spark-for-developers-em5</link>
      <guid>https://dev.to/upkarlidder/a-gentle-intro-to-apache-spark-for-developers-em5</guid>
      <description>&lt;p&gt;I recently hosted an &lt;a href="https://www.crowdcast.io/e/8u3d6q3c" rel="noopener noreferrer"&gt;online meetup on Apache Spark&lt;/a&gt; with &lt;a href="https://medium.com/u/262975298e3a" rel="noopener noreferrer"&gt;IBM Developer&lt;/a&gt;. Spark has been around for a few years, but the &lt;strong&gt;&lt;em&gt;interest in still growing to my surprise&lt;/em&gt;&lt;/strong&gt;. Apache Spark was developed at the University of California, Berkeley’s AMPLab. The Spark codebase was open sourced and donated to the Apache Software Foundation in 2010.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AtP-dw4Oj_42BYbkdtYbjMA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AtP-dw4Oj_42BYbkdtYbjMA.png"&gt;&lt;/a&gt;Apache Spark&lt;/p&gt;

&lt;p&gt;The background of the attendees was quite diverse:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Developer (25%)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Architect (12.5%)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Scientist (41.7%)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Other (12.8%)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We looked at the &lt;strong&gt;WHAT&lt;/strong&gt; and &lt;strong&gt;WHY&lt;/strong&gt; of Spark and then dove into the three data structures that you might encounter when working with Spark …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F970%2F1%2AKQYeTwCltFJBca0UZtD9-A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F970%2F1%2AKQYeTwCltFJBca0UZtD9-A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also looked at some Transformations, Actions, built-in functions and UDFs (user defined functions). For example, the following function creates a new column called &lt;strong&gt;GENDER&lt;/strong&gt; based on the contents of the column &lt;strong&gt;GenderCode.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ------------------------------
# Derive gender from salutation
# ------------------------------
def deriveGender(col):
    """ input: pyspark.sql.types.Column
        output: "male", "female" or "unknown"
    """    
    if col in ['Mr.', 'Master.']:
        return 'male'
    elif col in ['Mrs.', 'Miss.']:
        return 'female'
    else:
        return 'unknown';

deriveGenderUDF = func.udf(lambda c: deriveGender(c), types.StringType())
customer_df = customer_df.withColumn("GENDER", deriveGenderUDF(customer_df["GenderCode"]))
customer_df.cache()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;withColumn&lt;/strong&gt; creates a new column in the customer_df dataframe with the values from &lt;strong&gt;deriveGenderUDF&lt;/strong&gt; (our user defined function). The deriveGenderUDF is essentially the &lt;strong&gt;deriveGender&lt;/strong&gt; function. If this does not make sense, watch the webinar as we go into a lot more detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F981%2F1%2AFTBmSEX9V89ZrG2ya77uYA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F981%2F1%2AFTBmSEX9V89ZrG2ya77uYA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we created a spark cluster environment on IBM Cloud, and used a Jupyter notebook to explore customer data with the following columns …&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"CUST_ID", 
"CUSTNAME", 
"ADDRESS1", 
"ADDRESS2", 
"CITY", 
"POSTAL_CODE", 
"POSTAL_CODE_PLUS4", 
"STATE", 
"COUNTRY_CODE", 
"EMAIL_ADDRESS", 
"PHONE_NUMBER",
"AGE",
"GenderCode",
"GENERATION",
"NATIONALITY", 
"NATIONAL_ID", 
"DRIVER_LICENSE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After cleaning the data using built-in and user-defined methods, we used PixieDust to visualize the data. The cool thing about PixieDust is that you don’t need to set it up or configure it. You just pass it a Spark DataFrame or a Pandas DataFrame and you are good to go! You can find the complete &lt;a href="https://github.com/IBM/analyze-customer-data-spark-pixiedust" rel="noopener noreferrer"&gt;notebook here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F960%2F1%2AVbfDDknhZAA_mqzSzTb0zw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F960%2F1%2AVbfDDknhZAA_mqzSzTb0zw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you &lt;a href="https://medium.com/u/262975298e3a" rel="noopener noreferrer"&gt;IBM Developer&lt;/a&gt; and &lt;a href="https://medium.com/u/bf01a11701fe" rel="noopener noreferrer"&gt;Max Katz&lt;/a&gt; for the opportunity to present and special thanks to Lisa Jung for being a patient co-presenter!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Polyglot with Open Source and Serverless</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Sat, 03 Aug 2019 18:38:19 +0000</pubDate>
      <link>https://dev.to/upkarlidder/polyglot-with-open-source-and-serverless-8fp</link>
      <guid>https://dev.to/upkarlidder/polyglot-with-open-source-and-serverless-8fp</guid>
      <description>&lt;p&gt;I often mentor at hackathons as part of my job as a developer advocate at IBM and one of the common problems teams face is which language to pick for their solution. Generally the person who originally thought of the idea will decide on a language. That is better than analysis paralysis, however there is a better way! If you can divide your application into multiple services and/or functions, you can use a polyglot framework such as openwhisk.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The OpenWhisk platform supports a programming model in which developers write functional logic (called &lt;a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/actions.md#openwhisk-actions"&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/a&gt;), in any supported programming language, that can be dynamically scheduled and run in response to associated events (via &lt;a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/triggers_rules.md#creating-triggers-and-rules"&gt;&lt;strong&gt;Triggers&lt;/strong&gt;&lt;/a&gt;) from external sources ( &lt;a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/feeds.md#implementing-feeds"&gt;&lt;strong&gt;Feeds&lt;/strong&gt;&lt;/a&gt;) or from HTTP requests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Using such a framework enables you to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write multiple functions in parallel, increasing the effectiveness of the team.&lt;/li&gt;
&lt;li&gt;Use multiple languages, ensuring everybody is able to participate to the best of their ability.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are new to the OpenWhisk architecture, let’s do a quick crash course 💥!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Action → your code that runs on the serverless platform&lt;/li&gt;
&lt;li&gt;Trigger → fired when an external event occurs&lt;/li&gt;
&lt;li&gt;Rule → connects a trigger to an action&lt;/li&gt;
&lt;li&gt;Package → bundle or modularize your actions, triggers and rules and share them with others&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s recreate this simple example provided by &lt;a href="https://medium.com/u/604ac4ebfc5f"&gt;James Thomas&lt;/a&gt; in his fabulous &lt;a href="https://github.com/IBM-Cloud/openwhisk-workshops/tree/master/bootcamp?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-OInfluencer-Medium-USL-openwhisk-workshop-sequence&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;tutorial/bootcamp on OpenWhisk&lt;/a&gt;. &lt;a href="https://github.com/IBM-Cloud/openwhisk-workshops/tree/master/bootcamp?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-OInfluencer-Medium-USL-openwhisk-workshop-sequence&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;Exercise 1.2&lt;/a&gt; provides an example of an action sequence, which simply is an action calling another action, which in turn calls a third action. The results of one action are passed on to the next one. If there is an error, the whole sequence errors out. Instead of implementing the whole sequence in a single language, we will use Javascript, Python and Swift! How cool is that! Here are our requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Split: should take in a string and return an array of words split by space. We will write this function in &lt;strong&gt;Javascript&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Reverse: should take in an array and return a new array that is the reverse of the input array. We will write this function in &lt;strong&gt;Swift&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Join: takes in an array and returns a string that is the comma-separated join of the elements. We will write this function in &lt;strong&gt;Python&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, if I pass in “hello world” to this sequence, I will get back “world, hello”. What do these functions look like?&lt;/p&gt;
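&lt;p&gt;Before deploying anything, here is a quick local sketch of how the sequence behaves, with all three steps written as plain Python functions (a simulation only; in OpenWhisk each step is a separate action, possibly in a different language):&lt;/p&gt;

```python
# Local simulation of the split -> reverse -> join sequence.
# In OpenWhisk each step would be a separate action (even in a
# different language); here all three are plain Python functions.

def split(params):
    return {"words": params.get("text", "").split(" ")}

def reverse(params):
    return {"message": list(reversed(params.get("words", [])))}

def join(params):
    return {"message": ", ".join(params.get("message", []))}

def sequence(params, steps=(split, reverse, join)):
    # Each action's result becomes the next action's input,
    # exactly like an OpenWhisk action sequence.
    for step in steps:
        params = step(params)
    return params

print(sequence({"text": "hello world"}))  # {'message': 'world, hello'}
```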

&lt;h4&gt;
  
  
  Create Actions
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;splitjs JavaScript action&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function main(params) {
    var text = params.text || "";
    console.log(`incoming text: ${text}`)
    var words = text.split(' ');
    return {words: words}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;reverseswift Swift action&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct Input: Codable {
    var words: [String]?
}
struct Output: Codable {
    let message: [String]
}
func main(param: Input, completion: (Output?, Error?) -&amp;gt; Void) -&amp;gt; Void {

    var result = Output(message: [] )

    if let words = param.words{
        result = Output(message: words.reversed() )
    }
    completion(result, nil)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;joinpy Python action&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys

import json

def main (dict):

  print ('dict ' + json.dumps(dict))

  if "message" in dict:

     return {'message': ', '.join(dict['message'])}

  else:

     return { 'message': [] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! The following three lines &lt;strong&gt;&lt;em&gt;deploy&lt;/em&gt;&lt;/strong&gt; these code snippets as actions on &lt;em&gt;OpenWhisk&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsk action create splitjs splitjs.js
wsk action create reverseswift reverseswift.swift
wsk action create joinpy joinpy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s test these out now.&lt;/p&gt;

&lt;h4&gt;
  
  
  Testing our actions individually
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Test the splitjs JavaScript action&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wsk action invoke splitjs -r -p text "hello world"


{  
 "words": [  
 "hello",  
 "world"  
 ]  
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Test the reverseswift Swift action&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wsk action invoke reverseswift -r -p words '["hello","world"]'


{  
 "message": [  
 "world",  
 "hello"  
 ]  
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Test the joinpy Python action&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wsk action invoke joinpy -r -p message '["world", "hello"]'


{  
 "message": "world, hello"  
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Creating a sequence
&lt;/h4&gt;

&lt;p&gt;We can now create a sequence:&lt;/p&gt;

&lt;p&gt;splitjs → reverseswift → joinpy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsk action create split-reverse-join --sequence splitjs,reverseswift,joinpy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let’s test it out!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wsk action invoke split-reverse-join -r -p text "Hello World"


{  
 "message": "World, Hello"  
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How cool is that! Let’s all celebrate with a Chandler dance!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4M57HT44--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AQCks4cIzllXVqUltOeecpw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4M57HT44--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AQCks4cIzllXVqUltOeecpw.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are around the bay area, join me for a &lt;a href="https://www.meetup.com/IBM-Developer-SF-Bay-Area-Meetup/events/263172151/"&gt;hands-on workshop on OpenWhisk&lt;/a&gt; on 08/08/19. You &lt;a href="https://www.meetup.com/IBM-Developer-SF-Bay-Area-Meetup/events/263172151/"&gt;can register here&lt;/a&gt;. &lt;a href="https://medium.com/u/262975298e3a"&gt;IBM Developer&lt;/a&gt; and the API wizard &lt;a href="https://medium.com/u/bf01a11701fe"&gt;Max Katz&lt;/a&gt; will be around.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>openwhisk</category>
      <category>ibm</category>
      <category>serverless</category>
    </item>
    <item>
      <title>My 3 cents for junior developers/data scientists looking for a job</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Mon, 15 Jul 2019 00:51:52 +0000</pubDate>
      <link>https://dev.to/lidderupk/my-3-cents-for-junior-developers-data-scientists-looking-for-a-job-cbj</link>
      <guid>https://dev.to/lidderupk/my-3-cents-for-junior-developers-data-scientists-looking-for-a-job-cbj</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F667%2F1%2AJ40PyNdDSXEjQIzeOfK1PA%402x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F667%2F1%2AJ40PyNdDSXEjQIzeOfK1PA%402x.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We get some wonderful interns at IBM every year. These smart and talented young minds range from high school to college. My advice to them is threefold …&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;try new things and learn as much as you can&lt;/strong&gt;. Not to say the learning ever stops. You are always learning in this career, whether you are a software developer or a data scientist, whether you are a year in or ten years in. However, now is the time to learn fearlessly, without hesitation. You have more time than you will ever have again. Make mistakes early! And you cannot make mistakes without trying things out! Oh, and while you are at it, HAVE FUN!!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;build a portfolio&lt;/strong&gt;. Turn everything you learn into a GitHub project (while being respectful of company IP). You are learning anyway, so use it to your advantage. You will be surprised which projects you rely on years after you have moved on to other things. Companies are less interested in what degrees and certificates you have and more interested in what you have built so far. Do you have an app on the app store? Do you have a Glitch site you can show? What about a freeCodeCamp portfolio?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ask for help and make friends&lt;/strong&gt;. It is hard to find mentors, but if you are lucky enough to find someone, make use of them. I mentor two people at IBM and often I learn more from them than they from me! So trust me, you are doing your mentors a favor! You know what else you are doing? You are building a network. I cannot emphasize enough how important this network is going to be for you.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alright, so you finished your internship. You went back to school and passed with flying colors! You are now looking for a job. If you ever asked me for advice and actually followed it, you have some projects, you have learned lots of things from your mentors, and you have an amazing network you can ask for help. What next? How do you get help? The one mistake I see a lot of recent grads make is to write an email or a message along the lines of …&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hello dear mentor! I just graduated. Here is my Github profile. I am looking for a job. Do you have anything?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As a mentor, I am going to put this at the bottom of my TODOs. It seems like a lot of work to look for open opportunities for you, even though I think you are a brilliant developer. It really does not motivate your mentor to help, unless your mentor is family. Family almost always helps! The trick here is to help your network help you. Make it EASY for them. Here is what I propose …&lt;/p&gt;

&lt;h4&gt;
  
  
  I. Meet in person or over the phone, but be very respectful of their time
&lt;/h4&gt;

&lt;p&gt;I might be old school to think that meeting in person over coffee or speaking over the phone is more effective than communicating via email. An email gets lost or goes to the bottom of the pile. A face and a conversation stay in memory. I know it’s not always possible to meet face to face, but even a FaceTime or a Zoom meeting is worthwhile!&lt;/p&gt;

&lt;p&gt;Ask for only the time you need. I start with a 15-minute meeting, and it is amazing what you can get done when there is a time constraint. Be respectful of their time! I have used the approach below before and it has worked great for me.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hello, I got your contact information from xyz. I am looking for an opportunity with your team. I have attached my resume with this email. I would appreciate your time for a small chat. I have sent you an invite for 15 minutes next Monday. I am happy to reschedule as per your convenience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you have a meeting arranged, have an agenda. Yes, even a 15-minute meeting needs to be planned and thought out in advance. Finally, make sure your ask is clear at the end of the meeting. An ask could be to get some clarification on a role, to get an interview with a team lead, or to have them make some connections for you and introduce you to the right person.&lt;/p&gt;

&lt;h4&gt;
  
  
  II. Go to them with a job posting
&lt;/h4&gt;

&lt;p&gt;Better yet, if you see an open job posting in your network, approach them with the job posting in hand. Remember, the idea is to make it as easy as possible for your contact to get you an interview at their company/team/department.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hello, I got your contact information from xyz. I am attaching an open position (id-xyz) that I found on your career site and I believe myself to be a good fit for this role. I have attached my resume with this email. I would appreciate your time for a small chat. I have sent you an invite for 15 minutes next Monday. I am happy to reschedule as per your convenience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  III. Do your homework
&lt;/h4&gt;

&lt;p&gt;This is a big one as well. Talk with people in the position you want to join. Ask for organization charts. Ask about pain points and strategy. Get to know your future role as intimately as possible. Better yet, create a few notes or slides on why you are a good fit for this role and potentially start off your 15-minute meeting with a presentation. I say potentially, because every meeting is different. You can also go through your notes without actually showing them as slides. It does not have to be an official presentation to get your point across. This will do two things for you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It will impress your contact and/or mentor that you did your homework.&lt;/li&gt;
&lt;li&gt;They are not handholding you anymore. You are again making it easier for them to help you. You are empowering them to fight for you.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Finally, keep in touch with your network and mentors even when not looking for a job!&lt;/p&gt;

&lt;p&gt;What is some of your advice for developers and designers new to IT or data science and looking for a job? Calling out some senior devs and mentors who I look up to at IBM: &lt;a href="https://medium.com/u/52ec2bf60ec1" rel="noopener noreferrer"&gt;John Walicki&lt;/a&gt;, &lt;a href="https://medium.com/u/bf01a11701fe" rel="noopener noreferrer"&gt;Max Katz&lt;/a&gt;, &lt;a href="https://medium.com/u/a66cd0d81fb9" rel="noopener noreferrer"&gt;Gabriela de Queiroz&lt;/a&gt;, &lt;a href="https://medium.com/u/f19d3611f01e" rel="noopener noreferrer"&gt;Kim Newman&lt;/a&gt;, &lt;a href="https://medium.com/u/25bb50ccfd0c" rel="noopener noreferrer"&gt;Spencer Krum&lt;/a&gt;, &lt;a href="https://medium.com/u/17d50d1e1c99" rel="noopener noreferrer"&gt;Ryan Anderson&lt;/a&gt;, &lt;a href="https://medium.com/u/5fc0831eb57b" rel="noopener noreferrer"&gt;Vijay Bommireddipalli&lt;/a&gt; and many more who I could not find on Medium and dev.to :(&lt;/p&gt;

</description>
      <category>lifelessons</category>
      <category>career</category>
    </item>
    <item>
      <title>Postman is a delight to demo APIs and services! My top 3 features   ✨</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Fri, 12 Jul 2019 14:11:02 +0000</pubDate>
      <link>https://dev.to/upkarlidder/postman-is-a-delight-to-demo-apis-and-services-my-top-3-features-5k6</link>
      <guid>https://dev.to/upkarlidder/postman-is-a-delight-to-demo-apis-and-services-my-top-3-features-5k6</guid>
      <description>&lt;p&gt;I use Postman extensively for most of my workshops and demos. If you haven’t heard of &lt;a href="https://www.getpostman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;, it is an API development and management tool. I however use it as a &lt;strong&gt;client to demo external APIs including IBM Cognitive APIs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can see me using it to &lt;a href="https://www.crowdcast.io/e/introduction-to-apache" rel="noopener noreferrer"&gt;demonstrate different features of CouchDB&lt;/a&gt; in this &lt;a href="https://www.crowdcast.io/e/introduction-to-apache" rel="noopener noreferrer"&gt;webinar&lt;/a&gt;. In fact, most of the workshop was built on top of Postman. I really care about making the workshop straightforward with two goals in mind …&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is everybody able to finish the workshop within the designated time?&lt;/li&gt;
&lt;li&gt;If they are not, can I provide the end solution that they can import and learn something from?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F925%2F1%2AsbJGngPWbsAb507GkL13XA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F925%2F1%2AsbJGngPWbsAb507GkL13XA.png"&gt;&lt;/a&gt;Introduction to CouchDB on IBM Cloud Webinar&lt;/p&gt;

&lt;p&gt;I use the following features of Postman heavily to make my life easier …&lt;/p&gt;

&lt;h4&gt;
  
  
  Environment variables
&lt;/h4&gt;

&lt;p&gt;Environment variables help in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I set them up on the folder level so that all the requests in that folder can reuse the variables.&lt;/li&gt;
&lt;li&gt;The participants generally have to use their own API keys, and this makes it easier to set them up in one place.&lt;/li&gt;
&lt;/ol&gt;
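&lt;p&gt;For example, a request in the collection can reference variables instead of hard-coded values (the names below are hypothetical; Postman substitutes anything wrapped in double curly braces from the active environment):&lt;/p&gt;

```text
GET {{base_url}}/mydb/_all_docs?limit=10
Authorization: Basic {{api_key}}
```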

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AAbJF5OPPzQW2VACP-DaU2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AAbJF5OPPzQW2VACP-DaU2g.png"&gt;&lt;/a&gt;Environment variables in POSTMAN&lt;/p&gt;

&lt;h4&gt;
  
  
  Repeat REST calls
&lt;/h4&gt;

&lt;p&gt;There is a cool feature to run REST calls multiple times. You can even put a Delay between the calls. I am running the document-create call 10 times with a delay of 2ms between calls in the example below. This will create 10 documents in CouchDB. The other way to do this would be to write a script with the 10 POST calls, or use the CouchDB SDK in the language of your choice. All good options, but this seems the simplest for a webinar/tutorial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AE5Fgm60jNG4tyk2CQWjJzQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AE5Fgm60jNG4tyk2CQWjJzQ.png"&gt;&lt;/a&gt;Collection Runner&lt;/p&gt;

&lt;h4&gt;
  
  
  Collections/Folders and Export/Import
&lt;/h4&gt;

&lt;p&gt;This is the coolest feature. Postman has a concept of &lt;strong&gt;Collections&lt;/strong&gt; and &lt;strong&gt;Folders&lt;/strong&gt; within those collections. You can then package REST calls in different folders.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F473%2F1%2AZZORs75nnmjStjcXDcSHeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F473%2F1%2AZZORs75nnmjStjcXDcSHeg.png"&gt;&lt;/a&gt;Collections and Folders in POSTMAN&lt;/p&gt;

&lt;p&gt;I also use the export/import feature to split my workshop into multiple phases, so as to cater to all levels of participants. It also gives a good and natural break point in a longer workshop. I have seen some folks use GitHub branches and tags to do the same thing.&lt;/p&gt;

&lt;p&gt;So what are some of the things you use Postman for? Do you have other tools you use for tutorials and workshops?&lt;/p&gt;

&lt;p&gt;Thank you &lt;a href="https://medium.com/u/bf01a11701fe" rel="noopener noreferrer"&gt;Max Katz&lt;/a&gt; and &lt;a href="https://medium.com/u/262975298e3a" rel="noopener noreferrer"&gt;IBM Developer&lt;/a&gt; for hosting the webinar on CouchDB.&lt;/p&gt;

</description>
      <category>postman</category>
      <category>api</category>
    </item>
    <item>
      <title>NLP, Visual Recognition and Serverless example — a chatbot moderator</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Thu, 11 Jul 2019 00:25:42 +0000</pubDate>
      <link>https://dev.to/lidderupk/nlp-visual-recognition-and-serverless-example-a-chatbot-moderator-4mbd</link>
      <guid>https://dev.to/lidderupk/nlp-visual-recognition-and-serverless-example-a-chatbot-moderator-4mbd</guid>
      <description>&lt;h3&gt;
  
  
  NLP, Visual Recognition and Serverless example — a chatbot moderator
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--exRKTGpw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/822/1%2ANuawfho5sJsKpxiCoDuoKg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--exRKTGpw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/822/1%2ANuawfho5sJsKpxiCoDuoKg.png" alt=""&gt;&lt;/a&gt;IBM team presenting a webinar&lt;/p&gt;

&lt;p&gt;Lisa Jung and I presented a webinar on creating a chatbot moderator that tells people to be polite when they say something rude on Slack or post an inappropriate picture. For the image bit, we changed the code to treat any dog picture as an inappropriate picture instead of looking for explicit pictures.&lt;/p&gt;

&lt;p&gt;We used a couple of different IBM Services in this demo.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We used &lt;a href="https://ibm.biz/BdzmSV"&gt;&lt;strong&gt;IBM Natural Language Understanding&lt;/strong&gt;&lt;/a&gt; to analyze text for &lt;strong&gt;rudeness&lt;/strong&gt;. You can find more information &lt;a href="https://ibm.biz/BdzmSV"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qaQ9nJnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A5imqEiN5v4MgNGcBh4xqyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qaQ9nJnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A5imqEiN5v4MgNGcBh4xqyg.png" alt=""&gt;&lt;/a&gt;Analyzing a speech at the Federal Reserve Bank using IBM Natural Language Understanding service&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;We used &lt;a href="https://ibm.biz/BdzmSJ"&gt;&lt;strong&gt;IBM Visual Recognition service&lt;/strong&gt;&lt;/a&gt; to detect explicit pictures (pictures of a dog in this case). Read more &lt;a href="https://ibm.biz/BdzmSJ"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sy0Pcn9o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AmvjfD-qKuFOMak_Vu4Hy9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sy0Pcn9o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AmvjfD-qKuFOMak_Vu4Hy9w.png" alt=""&gt;&lt;/a&gt;Analyzing an image with IBM Watson Visual Recognition Service&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Finally, we used &lt;a href="https://ibm.biz/BdzmSA"&gt;&lt;strong&gt;IBM Cloud Functions&lt;/strong&gt;&lt;/a&gt; as the serverless platform to glue everything together. Learn more &lt;a href="https://ibm.biz/BdzmSA"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Cv_ZOuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2At1yS6L1qO-R836jW2DnWIw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Cv_ZOuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2At1yS6L1qO-R836jW2DnWIw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the flow of the use case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user enters text or an image in a Slack channel.&lt;/li&gt;
&lt;li&gt;Slack pushes this content to the IBM Cloud Function action.&lt;/li&gt;
&lt;li&gt;If the input was text, the action invokes Watson Natural Language Understanding service to determine the rudeness.&lt;/li&gt;
&lt;li&gt;If the input was an image, the action invokes Watson Visual Recognition service to determine if it is not safe for the channel.&lt;/li&gt;
&lt;li&gt;If the input was “bad”, the action removes the image and tells the user to be more polite. It does so by posting directly in the Slack channel.&lt;/li&gt;
&lt;/ol&gt;
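&lt;p&gt;The flow above can be sketched as a single function (a rough sketch only; the scoring stubs and threshold below are made up, standing in for the real Watson Natural Language Understanding and Visual Recognition calls):&lt;/p&gt;

```python
# Sketch of the moderator's decision flow. The scoring functions are
# stubs; the real action calls Watson Natural Language Understanding
# and Watson Visual Recognition. The threshold is hypothetical.

RUDENESS_THRESHOLD = 0.5

def rudeness_score(text):
    # Stub for the Watson NLU sentiment analysis call.
    return 0.9 if "rude" in text.lower() else 0.1

def dog_score(image_name):
    # Stub for the Watson Visual Recognition dog classifier call.
    return 0.8 if "dog" in image_name.lower() else 0.0

def moderate(event):
    """Return the bot's reply for a Slack event, or None if it is fine."""
    if "text" in event and rudeness_score(event["text"]) > RUDENESS_THRESHOLD:
        return "Please be more polite!"
    if "image" in event and dog_score(event["image"]) > RUDENESS_THRESHOLD:
        return "That picture is not appropriate for this channel."
    return None

print(moderate({"text": "that was rude!"}))  # Please be more polite!
```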

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9J-WpFpt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A15O6_eBtdHoxyhq81UGq8A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9J-WpFpt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A15O6_eBtdHoxyhq81UGq8A.png" alt=""&gt;&lt;/a&gt;a chatbot moderator&lt;/p&gt;

&lt;p&gt;You can view the &lt;a href="https://www.crowdcast.io/e/a-moderator-using-nlp-vr"&gt;full recording on crowdcast.io&lt;/a&gt; [&lt;a href="https://www.crowdcast.io/e/a-moderator-using-nlp-vr"&gt;https://www.crowdcast.io/e/a-moderator-using-nlp-vr&lt;/a&gt;]. You can get the complete code and try it out for yourself in &lt;a href="https://ibm.biz/BdzmSu"&gt;&lt;strong&gt;this github repository&lt;/strong&gt;&lt;/a&gt;. Hope that was fun! Join us for future webinars, in person meetings and workshops!&lt;/p&gt;

&lt;p&gt;Thanks &lt;a href="https://medium.com/u/bf01a11701fe"&gt;Max Katz&lt;/a&gt; and &lt;a href="https://medium.com/u/262975298e3a"&gt;IBM Developer&lt;/a&gt; for hosting!&lt;/p&gt;

</description>
      <category>visualrecognition</category>
      <category>ibm</category>
      <category>ibmwatson</category>
      <category>ai</category>
    </item>
    <item>
      <title>Overview of NLP services on IBM Cloud</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Tue, 25 Jun 2019 23:11:47 +0000</pubDate>
      <link>https://dev.to/lidderupk/overview-of-nlp-services-on-ibm-cloud-m83</link>
      <guid>https://dev.to/lidderupk/overview-of-nlp-services-on-ibm-cloud-m83</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AkozFmCaoSBC6gCunvSK2fQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AkozFmCaoSBC6gCunvSK2fQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had so much fun presenting NLP on IBM Cloud to a meetup at Hacker Dojo in Santa Clara. We went through creating a custom Natural Language Classifier on IBM Cloud that differentiates between HAM and SPAM emails. It is a classic problem that has been solved numerous times, but it gives a good understanding of the space. Why is the first sentence more hammy than the second?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AEtrub4NL2l9s8Fh97-rqNA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AEtrub4NL2l9s8Fh97-rqNA.png"&gt;&lt;/a&gt;Ham or Spam?&lt;/p&gt;

&lt;p&gt;You can find the slides and workshop here: &lt;a href="https://github.com/lidderupk/hackerdojo-nlp" rel="noopener noreferrer"&gt;https://github.com/lidderupk/hackerdojo-nlp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some of the NLP services on IBM Cloud …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ay5SPDnufzCVEVp3H.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ay5SPDnufzCVEVp3H.png"&gt;&lt;/a&gt;NLP on IBM Cloud&lt;/p&gt;

&lt;p&gt;Here is a brief overview of the different NLP services …&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ibm.biz/BdzM8v" rel="noopener noreferrer"&gt;Watson Assistant&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Watson Assistant lets you build conversational interfaces into any application, device, or channel. It combines machine learning, natural language understanding, and integrated dialog tools to create conversation flows between your apps and your users.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8m" rel="noopener noreferrer"&gt;Watson Assistant API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8a" rel="noopener noreferrer"&gt;Watson Assistant Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8G" rel="noopener noreferrer"&gt;Code Patterns&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ibm.biz/BdzM88" rel="noopener noreferrer"&gt;Natural Language Classifier&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Lets you quickly build custom classifiers for textual data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8n" rel="noopener noreferrer"&gt;NLC API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8e" rel="noopener noreferrer"&gt;NLC Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8b" rel="noopener noreferrer"&gt;NLC Code Patterns&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8p" rel="noopener noreferrer"&gt;NLC Demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ibm.biz/BdzM8g" rel="noopener noreferrer"&gt;Natural Language Understanding&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Helps you extract entities, concepts, and the relationships between them in text data. The service also performs sentiment analysis, giving back sentiment and emotions. Additionally, you can create a custom model for some APIs to get specific results that are tailored to your domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQY5q1hBbxSqaTjkBDitWCA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AQY5q1hBbxSqaTjkBDitWCA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8h" rel="noopener noreferrer"&gt;NLU API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8V" rel="noopener noreferrer"&gt;NLU Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8J" rel="noopener noreferrer"&gt;NLU Code Patterns&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8A" rel="noopener noreferrer"&gt;NLU Demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ibm.biz/BdzM8L" rel="noopener noreferrer"&gt;Text to Speech(tts)&lt;/a&gt; and &lt;a href="https://ibm.biz/BdzM83" rel="noopener noreferrer"&gt;Speech to Text(stt)&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;As the names suggest, these services let you convert text to speech and vice versa. Additionally, they offer a way to train these models on your domain data. That is really powerful. Stay tuned for a post with a demo.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8w" rel="noopener noreferrer"&gt;STT API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8k" rel="noopener noreferrer"&gt;STT Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8T" rel="noopener noreferrer"&gt;STT Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8C" rel="noopener noreferrer"&gt;TTS API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM8Q" rel="noopener noreferrer"&gt;TTS Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ibm.biz/BdzM89" rel="noopener noreferrer"&gt;TTS Demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are some of the black box NLP services on IBM Cloud. They can however be extended and trained on your custom domain data. Additionally, if you have coded a model from scratch using an ML framework like scikit-learn or a DL framework like Keras, you can deploy it on IBM Cloud using Watson Machine Learning (WML).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AFpSPw9nv2HlmDmmhvVGu7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AFpSPw9nv2HlmDmmhvVGu7g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So … until next time internet peeps!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thanks &lt;a href="https://medium.com/u/bf01a11701fe" rel="noopener noreferrer"&gt;Max Katz&lt;/a&gt; and &lt;a href="https://medium.com/u/262975298e3a" rel="noopener noreferrer"&gt;IBM Developer&lt;/a&gt; for providing space and delicious pizza at the meetup.&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Introducing MAX— Model Asset Exchange</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Mon, 17 Jun 2019 17:12:04 +0000</pubDate>
      <link>https://dev.to/lidderupk/introducing-max-model-asset-exchange-4m7k</link>
      <guid>https://dev.to/lidderupk/introducing-max-model-asset-exchange-4m7k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TM7TkaIV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AE99n0jgH4eL9rpfyFELN5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TM7TkaIV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AE99n0jgH4eL9rpfyFELN5Q.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Data Science is about learning&lt;/em&gt;&lt;/strong&gt;. I don’t mean supervised learning, unsupervised learning, reinforcement learning or transfer learning ! I mean &lt;strong&gt;&lt;em&gt;LEARNING&lt;/em&gt;&lt;/strong&gt; ! From books. From code. From your colleagues and friends. From Stack Overflow ! You are constantly learning and improving your skill set. New models and breakthroughs are being announced faster than ever, and as a data scientist you need to be aware of what is going on in the field and stay on top of the latest developments. IBM has a few open source projects that will help you keep up to date, and I want to write about them. We will look at MAX, the Model Asset Exchange, in this post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i4Iv6hyk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/490/1%2AytBdCoJZcGBZ-igTzjvdsw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i4Iv6hyk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/490/1%2AytBdCoJZcGBZ-igTzjvdsw.gif" alt=""&gt;&lt;/a&gt;Learning&lt;/p&gt;

&lt;p&gt;MAX is a repository of open source Machine Learning and Deep Learning models developed by my colleagues and friends at IBM. They cover topics ranging from object recognition and pose detection to audio analysis and age estimation. The team puts out a new model every now and then. So what’s the big deal ?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can clone, fork, improve upon, provide feedback on, and learn from these models ! You LEARN ! You learn how to make one yourself and make it usable ! You learn why it works well. You learn why it does not work well in some cases and, more importantly, why it fails with certain inputs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My dear friend and colleague &lt;a href="https://medium.com/u/df9e907db0b1"&gt;Patrick Titzler&lt;/a&gt; gave a talk in San Francisco recently on MAX and showed us how to use Node-RED to quickly test some of these models. I met &lt;a href="https://www.instagram.com/throughshutter/"&gt;Ayush&lt;/a&gt; at the same meetup. Ayush is using two models from MAX to facilitate his everyday hobby — photography. I will save that for another post later. Let’s look at my favorite MAX models …&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://developer.ibm.com/exchanges/models/all/max-object-detector?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-ibmdev-OInfluencer-Medium-USL-ibm-max-object-detector&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;Object Detector&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This model recognizes the objects present in an image from the 80 high-level classes of objects in the COCO Dataset. This problem has been solved for the most part, and if the framework/platform you are using does not do well with the default results, try creating a custom classifier with Watson Visual Recognition. As an example, I gave the leading visual recognition APIs an image of a Raspberry Pi and they mostly came back with “computer chip”, “mother board” or “electronic”. Those are all decent results, but I wanted more. I was able to train a custom classifier using Watson to tell me with high confidence whether the picture was a “Raspberry Pi Model A” or a “Raspberry Pi Model B” 😳. Pretty cool stuff ! You can read about the custom classifier in this &lt;a href="https://dev.to/lidderupk/watson-visual-recognition--custom-models-15go"&gt;medium post&lt;/a&gt;. That being said, this open source model still makes for a great learning resource. You can also try these patterns in Node-RED, with more on this option coming in the next post ! Pretty cool that it got the bicycle just from what was visible in the picture ! The bicycle is hanging upside down in my room 🆒 !&lt;/p&gt;
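&lt;p&gt;MAX models run as Docker containers that expose a REST prediction endpoint, so consuming one mostly means parsing JSON. A minimal sketch of filtering predictions by confidence; the sample payload below is made up for illustration, not a real model response:&lt;/p&gt;

```python
# Hypothetical parsing of a MAX model's prediction response.
# The field names ("predictions", "label", "probability") mirror the
# general MAX response style but are illustrative sample data here.
def confident_labels(response, threshold=0.7):
    """Keep only labels whose probability clears the threshold."""
    return [p["label"] for p in response.get("predictions", [])
            if p["probability"] >= threshold]

sample = {
    "status": "ok",
    "predictions": [
        {"label": "bicycle", "probability": 0.96},
        {"label": "person", "probability": 0.42},
    ],
}
labels = confident_labels(sample)  # only "bicycle" clears 0.7
```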

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZvI9lTcz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AVIdcctKY4glS0IEWGMQJ4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZvI9lTcz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AVIdcctKY4glS0IEWGMQJ4g.png" alt=""&gt;&lt;/a&gt;Node-RED for MAX&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://developer.ibm.com/exchanges/models/all/max-facial-age-estimator?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-ibmdev-OInfluencer-Medium-USL-ibm-max-age-estimator&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;Facial Age Estimator&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This model detects faces in an image, extracts facial features for each face detected and finally predicts the age of each face.&lt;/p&gt;

&lt;p&gt;This is my favorite by far because it guesses me younger every time :). At first, I was quite annoyed that IBM would put out a model that did not do as well as I expected it to. Sometimes I shave and other times I have a stubble. It gives me a different result in daylight vs inside a dimly lit conference hall. But then I realized that these are all learning opportunities for me: questions that we need to ask of each model. I started looking at the training data and quickly realized that unless I am in Hollywood and have five lbs of makeup on, it won't be able to guess my age. The training data is the IMDB image set that you can &lt;a href="https://developer.ibm.com/exchanges/models/all/max-facial-age-estimator?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-ibmdev-OInfluencer-Medium-USL-ibm-max-age-estimator&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;find here&lt;/a&gt;. We will look at this dataset closely in an upcoming blog post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/"&gt;IMDB-WIKI - 500k+ face images with age and gender labels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We now have an &lt;a href="https://developer.ibm.com/patterns/estimate-ages-for-detected-human-faces?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-ibmdev-OInfluencer-Medium-USL-max-estimate-age&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;IBM code pattern&lt;/a&gt; that will help you create a web application that uses this model ! These world leaders look awfully young !&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vSoHqZDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Anos_L2nOT37mr1snpxLh7Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vSoHqZDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Anos_L2nOT37mr1snpxLh7Q.png" alt=""&gt;&lt;/a&gt;G-7 leaders&lt;/p&gt;

&lt;p&gt;It does fairly well with my image, only off by two … I will let you guess in which direction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bkxmpjxi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A8u0DswIGrLub0HC_wQEqJQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bkxmpjxi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A8u0DswIGrLub0HC_wQEqJQ.png" alt=""&gt;&lt;/a&gt;Upkar Lidder&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://developer.ibm.com/exchanges/models/all/max-image-caption-generator?cm_mmc=OSocial_Blog-_-Developer_IBM+Developer-_-WW_WW-_-ibmdev-OInfluencer-Medium-USL-ibm-max-image-caption&amp;amp;cm_mmca1=000037FD&amp;amp;cm_mmca2=10010797"&gt;Image Caption Generator&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This is the model Ayush was using to generate captions for his Instagram account. By the way, you can follow him here: &lt;a href="https://www.instagram.com/throughshutter/"&gt;https://www.instagram.com/throughshutter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This model generates captions from a fixed vocabulary that describe the contents of images in the COCO Dataset. Even though the training dataset is limited (80 object categories), it does pretty well. Here is an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xyqgSzJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/950/1%2ASqfWTFfphEaOn2SjU4Y4AA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xyqgSzJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/950/1%2ASqfWTFfphEaOn2SjU4Y4AA.png" alt=""&gt;&lt;/a&gt;&lt;strong&gt;&lt;em&gt;a man in a suit talking on a cell phone&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;A man in a suit talking on a cell phone&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The caption says &lt;strong&gt;&lt;em&gt;a man in a suit talking on a cell phone.&lt;/em&gt;&lt;/strong&gt; That is not bad at all ! There are plenty of other models you can try on MAX. But let’s look at how you can use Node-RED to quickly test a few of these. The MAX team has created some assets to help us out. If you have never used Node-RED before, get ready to fall in love 💕.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ibm.biz/BdzPh9"&gt;Audio Classifier&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This one is so much fun. &lt;em&gt;This model recognizes a signed 16-bit PCM wav file as an input, generates embeddings, applies&lt;/em&gt; &lt;a href="https://github.com/tensorflow/models/tree/master/research/audioset#output-embeddings"&gt;&lt;em&gt;PCA transformation/quantization&lt;/em&gt;&lt;/a&gt;&lt;em&gt;, uses the embeddings as an input to a multi-attention classifier and outputs top 5 class predictions and probabilities as output.&lt;/em&gt; WHAT 🤯 !! Basically, it identifies sounds and noise. The data is coming from &lt;a href="https://research.google.com/audioset/ontology/index.html"&gt;AudioSet&lt;/a&gt;. I am building some cool demos and games to use this model. Stay tuned !&lt;/p&gt;
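&lt;p&gt;If you want a quick input file to poke the model with, the Python standard library can generate a signed 16-bit PCM wav on its own. A minimal sketch (the 440 Hz tone is just an easy test signal, not something from AudioSet):&lt;/p&gt;

```python
# Generate a mono, signed 16-bit PCM wav file of the kind the model expects.
# Pure standard library; assumes a little-endian host, as wav data is
# little-endian and array.array uses the machine's native byte order.
import array
import math
import wave

SAMPLE_RATE = 16000  # Hz

def write_tone(path, freq=440.0, seconds=1.0):
    n = int(SAMPLE_RATE * seconds)
    # 16-bit signed samples at roughly half amplitude to avoid clipping
    samples = array.array("h", (
        int(16000 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
        for i in range(n)
    ))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(2)           # 2 bytes per sample = 16-bit PCM
        w.setframerate(SAMPLE_RATE)
        w.writeframes(samples.tobytes())

write_tone("tone.wav")
```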

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Xiw-oNQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Am7St2oLWP1Gm2HuZoOPtQg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Xiw-oNQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Am7St2oLWP1Gm2HuZoOPtQg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ibm.biz/BdzPhC"&gt;Human Pose Estimator&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;I had a lot of fun with this one at a recent data science workshop ! This model detects humans and their poses in a given image. The model first detects the humans in the input image and then identifies the body parts, including nose, neck, eyes, shoulders, elbows, wrists, hips, knees, and ankles. Next, each pair of associated body parts is connected by a pose line.&lt;/p&gt;
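&lt;p&gt;The pose-line idea itself is easy to sketch in a few lines of Python. The keypoint names, coordinates and pairs below are made-up sample data for illustration, not the model’s actual output schema:&lt;/p&gt;

```python
# Illustrative sketch: connect detected body parts in associated pairs.
# A pair only produces a line segment when both parts were detected.
PAIRS = [("nose", "neck"), ("neck", "right shoulder"), ("neck", "left shoulder")]

def pose_lines(keypoints, pairs=PAIRS):
    """Return (x1, y1, x2, y2) segments for every pair with both parts present."""
    lines = []
    for a, b in pairs:
        if a in keypoints and b in keypoints:
            lines.append(keypoints[a] + keypoints[b])  # tuple concatenation
    return lines

detected = {"nose": (120, 40), "neck": (120, 80), "left shoulder": (90, 85)}
lines = pose_lines(detected)  # right shoulder missing, so only two segments
```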

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VZ1jxN7F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ax69Jto9UfnwN9MFk_fnCzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VZ1jxN7F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ax69Jto9UfnwN9MFk_fnCzw.png" alt=""&gt;&lt;/a&gt;Human Pose Estimator&lt;/p&gt;

&lt;p&gt;Those are some of my favorite models. There are lots more in the Model Asset Exchange. &lt;a href="https://ibm.biz/BdzPhV"&gt;Give them a try and let me know which ones you like&lt;/a&gt; ! I have some more posts coming up on how to use/deploy &lt;a href="https://ibm.biz/BdzPhV"&gt;some of these models&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/u/bf01a11701fe"&gt;Max Katz&lt;/a&gt;, sorry for the confusion in advance :). I will write a separate post to introduce you ! &lt;a href="https://medium.com/u/df9e907db0b1"&gt;Patrick Titzler&lt;/a&gt;, thank you for the wonderful meetup ! &lt;a href="https://medium.com/u/a66cd0d81fb9"&gt;Gabriela de Queiroz&lt;/a&gt; and Simon, thank you for demoing MAX at the &lt;a href="https://aiml-developer-summit.splashthat.com/"&gt;IBM AI-ML Summit&lt;/a&gt; in San Francisco.&lt;/p&gt;

</description>
      <category>ibm</category>
      <category>machinelearning</category>
      <category>modelassetexchange</category>
    </item>
    <item>
      <title>Another simplest serverless function 🤷‍♀️ ‍— but seriously 💖</title>
      <dc:creator>Upkar Lidder</dc:creator>
      <pubDate>Thu, 13 Jun 2019 00:54:19 +0000</pubDate>
      <link>https://dev.to/upkarlidder/another-simplest-serverless-function-but-seriously-4c7b</link>
      <guid>https://dev.to/upkarlidder/another-simplest-serverless-function-but-seriously-4c7b</guid>
      <description>&lt;p&gt;I have come across a couple of posts with the title &lt;strong&gt;&lt;em&gt;simplest serverless functions&lt;/em&gt;&lt;/strong&gt; and my initial thoughts are …&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It is not that simple if you have to write up a config file to post a &lt;strong&gt;&lt;em&gt;hello world&lt;/em&gt;&lt;/strong&gt; function 🤦‍♂ !&lt;/li&gt;
&lt;li&gt;How easy is it to then extend this function into something more usable ?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So here we go again … this example is based on &lt;strong&gt;&lt;em&gt;Apache OpenWhisk&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;an open source distributed serverless computing platform&lt;/em&gt;&lt;/strong&gt;. However, running a serverless system on your desktop/laptop is no different from spinning up a Node.js Express app or a Flask app: &lt;strong&gt;&lt;em&gt;you are missing out on the scale, performance and distributed nature of OpenWhisk&lt;/em&gt;&lt;/strong&gt;. To make it even simpler, we will use IBM Cloud Functions, a managed version of Apache OpenWhisk.&lt;/p&gt;

&lt;h4&gt;
  
  
  Check this out …
&lt;/h4&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/jpUr2DRQvPU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In the next few posts, we will learn to extend this function to do something more useful. But wasn’t that easy !&lt;/p&gt;
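&lt;p&gt;For reference, here is the whole shape of an action in its Python flavor: a &lt;code&gt;main&lt;/code&gt; function that takes a dict of parameters and returns a JSON-serializable dict. This is a minimal sketch, not the exact code from the video (which uses the web console):&lt;/p&gt;

```python
# A minimal OpenWhisk / IBM Cloud Functions action in Python.
# The platform calls main() with the invocation parameters as a dict
# and serializes the returned dict as the JSON response.
def main(params):
    name = params.get("name", "world")
    return {"greeting": "Hello " + name + "!"}

# Calling it locally mirrors invoking the action with a payload:
result = main({"name": "Upkar"})
```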

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gyI54B9h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AOJvehEDxKyjCMzvxsdiaMg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gyI54B9h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AOJvehEDxKyjCMzvxsdiaMg.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See you next time ! In the meantime, read up on some &lt;a href="https://ibm.biz/BdzPiS"&gt;serverless code patterns&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/u/bf01a11701fe"&gt;Max Katz&lt;/a&gt;, thanks for this post idea !&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>ibmcloudfunctions</category>
      <category>ibm</category>
    </item>
  </channel>
</rss>
