<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Clarifai Team</title>
    <description>The latest articles on DEV Community by Clarifai Team (@clarifai_team).</description>
    <link>https://dev.to/clarifai_team</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F30056%2F9a89bdf0-ec72-4378-b4b4-c9957046a160.png</url>
      <title>DEV Community: Clarifai Team</title>
      <link>https://dev.to/clarifai_team</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/clarifai_team"/>
    <language>en</language>
    <item>
      <title>#12DaysOfHacks – WIN OODLES OF SWAG AND A CHANCE TO WIN A FORCE1 DRONE</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Wed, 06 Dec 2017 20:29:15 +0000</pubDate>
      <link>https://dev.to/clarifai/12daysofhacks--win-oodles-of-swag-and-a-chance-to-win-a-force1-drone-927</link>
      <guid>https://dev.to/clarifai/12daysofhacks--win-oodles-of-swag-and-a-chance-to-win-a-force1-drone-927</guid>
      <description>&lt;p&gt;&lt;em&gt;’Twas December, the holidays and a time to give back, so this year get ready for #12DaysofHacks! Every day from December 11th to 20th, Clarifai will be giving you the chance to win big.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5JpNGlgZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.clarifai.com/wp-content/uploads/2017/12/Clarifai_12daysofHacks_1200x628_NoCTA-720x405.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5JpNGlgZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.clarifai.com/wp-content/uploads/2017/12/Clarifai_12daysofHacks_1200x628_NoCTA-720x405.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the month of December, Clarifai gave to thee! 12 Days of Hacking, chances for swagging aaaand one new Force1 &lt;a href="https://www.amazon.com/Force1-Camera-Return-Brushless-Quadcopter/dp/B073RRK5LL/ref=sr_1_1_sspa?s=toys-and-games&amp;amp;ie=UTF8&amp;amp;qid=1511977538&amp;amp;sr=1-1-spons&amp;amp;keywords=drone+with+camera&amp;amp;refinements=p_72%3A1248963011&amp;amp;psc=1"&gt;droooone&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Every day from December 11th to December 20th we will be showing off an application built on our powerful visual recognition technology. Come check out how people are tangibly using AI in the world today!&lt;/p&gt;

&lt;p&gt;We’ll also be giving away a &lt;em&gt;sack&lt;/em&gt; load of Clarifai swag each day to the first three people to repost our hacks. Follow @clarifai and retweet, repost or share our featured hack as soon as you see it published on Twitter, Instagram, and Facebook. The first three people to share our hacks will win! Make sure you tag @clarifai and #12daysofHacks.&lt;/p&gt;

&lt;p&gt;Finally, take a break from decking the halls and use our API to build your own app. Submit a &lt;a href="https://blog.clarifai.com/bounties/"&gt;bounty&lt;/a&gt; through our bounties page, and one lucky hacker will win a &lt;a href="https://www.amazon.com/Force1-Camera-Return-Brushless-Quadcopter/dp/B073RRK5LL/ref=sr_1_1_sspa?s=toys-and-games&amp;amp;ie=UTF8&amp;amp;qid=1511977538&amp;amp;sr=1-1-spons&amp;amp;keywords=drone+with+camera&amp;amp;refinements=p_72%3A1248963011&amp;amp;psc=1"&gt;drone&lt;/a&gt;! Contest ends at 11:59 pm ET on 12/20/17, so make like the elves and get to building! The winner will be contacted after 1/2/18.&lt;/p&gt;

&lt;p&gt;For full contest info and details from the legal eagles, &lt;a href="https://blog.clarifai.com/clarifai-holiday-giveaway-contest-rules/"&gt;click here&lt;/a&gt;. Good luck and have a fun and festive holiday season!&lt;/p&gt;

&lt;p&gt;Ready for the #12DAYSOFHACKS? Go to Twitter and tweet about it now!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/clarifai"&gt;Tweet us now!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.clarifai.com/12daysofhacks-win-oodles-of-swag-and-a-chance-to-win-a-force1-drone/"&gt;#12DaysOfHacks – WIN OODLES OF SWAG AND A CHANCE TO WIN A FORCE1 DRONE&lt;/a&gt; appeared first on &lt;a href="https://blog.clarifai.com"&gt;Clarifai Blog | Artificial Intelligence in Action&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>giveaway</category>
      <category>hacks</category>
      <category>ai</category>
      <category>drone</category>
    </item>
    <item>
      <title>Introducing Landscape Quality, Portrait Quality, and Textures &amp; Patterns Visual Recognition Models</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Tue, 14 Nov 2017 17:22:26 +0000</pubDate>
      <link>https://dev.to/clarifai/introducing-landscape-quality-portrait-quality-and-textures--patterns-visual-recognition-models-6cp</link>
      <guid>https://dev.to/clarifai/introducing-landscape-quality-portrait-quality-and-textures--patterns-visual-recognition-models-6cp</guid>
<description>&lt;p&gt;Being in the business of computer vision, we deal a lot with photos – good and bad. But what makes a photo “good” vs. “bad” – the composition? The lighting? The way it makes you feel? We decided to try to distill those elements into an algorithm to help photographers and media managers sort through large volumes of content to find the highest quality images and video.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://clarifai.com/models/" rel="noopener noreferrer"&gt;Show me the models&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being in the business of computer vision, we deal a lot with photos. These photos can range from selfies taken with a cell phone camera, to computer-generated images created by designers, to professional photographs taken with high-end DSLR cameras. Our broad range of &lt;a href="https://clarifai.com/models/" rel="noopener noreferrer"&gt;models&lt;/a&gt; helps computers understand “what’s in an image” and “where the object is located in an image”. For the first time, we’re releasing models that help computers understand image quality, or “is this image good or bad?” We are happy to release the &lt;strong&gt;Landscape Quality&lt;/strong&gt; and &lt;strong&gt;Portrait Quality&lt;/strong&gt; models into beta; each assesses the quality of an image and responds with a confidence level for whether the image is “high quality” or “low quality”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good quality photo attributes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Good lighting&lt;/li&gt;
&lt;li&gt;Sharp and in focus&lt;/li&gt;
&lt;li&gt;If retouching is present, it is not obvious (no completely airbrushed skin)&lt;/li&gt;
&lt;li&gt;Not too much grain/noise (unless it’s the artist’s intention)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Poor quality photo attributes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Severe chromatic aberration&lt;/li&gt;
&lt;li&gt;Red eyes&lt;/li&gt;
&lt;li&gt;Extremely backlit&lt;/li&gt;
&lt;li&gt;Unnatural vignetting, often digitally added&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  “With our computer vision capabilities, we want photographers to focus on what they do best: capture amazing moments.”
&lt;/h3&gt;
&lt;/blockquote&gt;

&lt;p&gt;Professional photographers and even photography enthusiasts can take thousands (if not tens of thousands) of photos on a daily basis. They then go through each photo and decide whether it is worth post-processing. Assuming a photographer takes 5,000 photos in a day and spends 10 seconds deciding whether each photo should be post-processed, this filtration process could take over 13 hours for one day’s worth of photos. With our computer vision capabilities, we want photographers to focus on what they do best: capture amazing moments.&lt;/p&gt;
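
&lt;p&gt;As a quick sanity check on that estimate (a back-of-the-envelope calculation, nothing more):&lt;/p&gt;

```javascript
// 5,000 photos reviewed at 10 seconds each, expressed in hours
const photos = 5000
const secondsPerPhoto = 10
const hours = (photos * secondsPerPhoto) / 3600

console.log(hours.toFixed(1)) // "13.9" – just under 14 hours of pure triage
```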

&lt;p&gt;Speaking from personal experience, I was overwhelmed by the number of photos I had on my camera after my wildlife photography trip to Nairobi a few years ago. To this day, I still haven’t had the chance to go through every single image to separate the high quality photos from the low quality ones. Some professional fashion photographers tell us they spend hours manually going through each image they’ve captured during a runway show or a photoshoot. “Having computers make the initial pass at filtering would save me tens of hours on a weekly basis”, said Kimal Lloyd-Phillip, a photographer for Toronto Women’s and Men’s Fashion Week.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  “Having computers make the initial pass at filtering would save me tens of hours on a weekly basis.” – Kimal Lloyd-Phillip, photographer for Toronto Women’s and Men’s Fashion Week
&lt;/h3&gt;
&lt;/blockquote&gt;

&lt;p&gt;Our Developer Evangelism team hacked away at these models and created a tool that groups the photos within a folder into two separate folders: good and bad.&lt;/p&gt;
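
&lt;p&gt;To sketch what such a tool might do with the models’ output (this is an illustrative snippet, not the team’s actual code; the prediction shape – a list of concepts with confidence values per image – is an assumption here):&lt;/p&gt;

```javascript
// Sketch: bucket images into "good" and "bad" folders based on a
// quality model's predictions. The prediction shape (an array of
// concepts with name/value pairs) is assumed for illustration.
const bucketByQuality = (predictions, threshold = 0.5) => {
  const buckets = { good: [], bad: [] }
  for (const p of predictions) {
    // Find the confidence that the image is "high quality"
    const high = p.concepts.find((c) => c.name === 'high quality')
    const score = high ? high.value : 0
    if (score > threshold) {
      buckets.good.push(p.file)
    } else {
      buckets.bad.push(p.file)
    }
  }
  return buckets
}

// Example run with made-up predictions
const result = bucketByQuality([
  { file: 'sunset.jpg', concepts: [{ name: 'high quality', value: 0.92 }] },
  { file: 'blurry.jpg', concepts: [{ name: 'high quality', value: 0.12 }] }
])
console.log(result.good) // [ 'sunset.jpg' ]
```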

&lt;p&gt;In addition to the Landscape and Portrait Quality Models, we are also introducing a &lt;strong&gt;Textures &amp;amp; Patterns Model&lt;/strong&gt; that helps photographers and designers identify common textures (feathers, woodgrain), unique/fresh texture concepts (petrified wood, glacial ice), and overarching descriptive texture concepts (veined, metallic).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2Ftexturesandpatternsai.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2Ftexturesandpatternsai.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have partnered with a global consumer apparel manufacturer to integrate the Textures &amp;amp; Patterns Model into their design workflow. They are using the model to inspire creativity among their designers and to further develop their design ideas. They indexed their design database (internal and external images) using our model; then they input any new, raw design ideas into our platform and ran our &lt;a href="https://clarifai.com/visual-search" rel="noopener noreferrer"&gt;Visual Search&lt;/a&gt; tool to explore and discover the various ways a design could evolve.&lt;/p&gt;

&lt;p&gt;We’re excited to apply artificial intelligence to the arts and provide tools that empower creators to be more effective at their work. We hope our broader customer base enjoys using the new set of models as much as our initial testers did. If you have any feedback or additional requests, feel free to shoot us a message at &lt;a href="mailto:feedback@clarifai.com"&gt;feedback@clarifai.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://clarifai.com/models/" rel="noopener noreferrer"&gt;Show me the models!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.clarifai.com/introducing-landscape-quality-portrait-quality-textures-patterns-visual-recognition-models/" rel="noopener noreferrer"&gt;Introducing Landscape Quality, Portrait Quality, and Textures &amp;amp; Patterns Visual Recognition Models&lt;/a&gt; appeared first on &lt;a href="https://blog.clarifai.com" rel="noopener noreferrer"&gt;Clarifai Blog | Artificial Intelligence in Action&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>companynews</category>
      <category>productannouncement</category>
      <category>imagerecognition</category>
      <category>ai</category>
    </item>
    <item>
      <title>How Clarifai builds accurate and unbiased AI technology</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Thu, 09 Nov 2017 23:08:59 +0000</pubDate>
      <link>https://dev.to/clarifai/how-clarifai-builds-accurate-and-unbiased-ai-technology-9hg</link>
      <guid>https://dev.to/clarifai/how-clarifai-builds-accurate-and-unbiased-ai-technology-9hg</guid>
      <description>&lt;p&gt;&lt;em&gt;At Clarifai, we have a team called the Data Strategy Team, a group of conscientious, diverse people who train our visual recognition models. They ensure we’re building accurate, unbiased AI – learn about what they do and how you can apply their best practices to your own model building!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Back in the early ages of computing, companies used to rely on teams of people doing complex calculations by hand. These operations now take a fraction of a second for a computer to do. Today, it seems like every company wants to build or incorporate artificial intelligence that can make tasks faster and more accurate than when they were done by humans. However, there are things about the human mind that we are not able to fully replicate with a computer alone (yet!).&lt;/p&gt;

&lt;p&gt;We have a fantastic team of minds at Clarifai we call the Data Strategy Team that helps us curate and assess the quality of data for creating robust AI models. Along with our research, engineering, and client success teams, the team distills all the feedback it receives on custom models and works to constantly improve the API in a way that best reflects the big, beautiful world. The team’s diverse backgrounds allow them to see things that others may not. When building an AI model, the team has to ask, “What are we supposed to see? Is what we are asking to find visible and distinguishable? If we aren’t able to answer these questions ourselves, does it make sense for us to ask a computer to do this?” Here are some of our Data Strategy Team’s tips to consider when building out a model!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Break down the visual components&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AI models receive inputs so we need to make sure that our inputs have the correct elements for the model to understand. What if we were to make a model and we wanted it to recognize a leaf on a plant? When we give it several images of various species, we are educating the model on different shapes, colors and textures leaves can take on. We have these visually tangible aspects for it to recognize. What if we wanted to train our model to identify an emotion like anger? Anger is expressed differently by different cultures and people.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2Femotion.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2Femotion.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When trying to teach a more metaphysical concept, you need to be sure that your input represents those variations as well. Determine the things that represent your concept and make sure examples of them are incorporated into the training set. This will achieve higher accuracy for what you want your model to focus on, and you’ll be able to refine the accuracy further after you evaluate your input.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Incorporate relevant training data&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;One of the biggest misconceptions about AI models is that they recognize everything correctly every time. A model is only as good as the data used to train it, and it can fail to make accurate predictions when its training data doesn’t look like what it will be tested on. Imagine you wanted to build a model that could detect different items for recycling, yet all of your training data is stock photography of objects sitting on tables or being held by people drinking or eating. Is that how the model will actually be used? Would it detect recyclables in photos of people’s trash bins out in the world? Probably not.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2FScreen-Shot-2017-11-09-at-12.21.22-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2FScreen-Shot-2017-11-09-at-12.21.22-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not only should you make sure the data you incorporate is relevant, but you should also make sure that your training data has the same visual characteristics as the intended test data. Will your test data be inverted or blurry? Will it be in grayscale versus color? These factors can impact a model’s precision and accuracy too. A model is simply a block of clay, and it is your job to shape it as effectively as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Remove biases at all costs&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Just like human beings, the artificial intelligence model is susceptible to what it is taught. To the model, the inputs are a source of truth that describe its world and it can only understand the world from its teachings. We have seen this to be true in the extreme cases of &lt;a href="https://www.theverge.com/2015/7/1/8880363/google-apologizes-photos-app-tags-two-black-people-gorillas" rel="noopener noreferrer"&gt;Google’s misprediction of tags for photos&lt;/a&gt; or &lt;a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist" rel="noopener noreferrer"&gt;Microsoft’s chatbot Tay&lt;/a&gt;. When we are shaping the models, we want to make sure that we aren’t introducing any of our human biases.&lt;/p&gt;

&lt;p&gt;When you are choosing concepts that describe a profession, you may want to represent all the demographics involved rather than merely the most prominent one. Even well-established datasets can be biased toward the culture they were gathered in. Look at &lt;a href="http://vintage.winklerbros.net/facescrub.html" rel="noopener noreferrer"&gt;FaceScrub&lt;/a&gt;, a popular dataset for celebrity face detection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2Ffacescrub.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2Ffacescrub.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This dataset contains mostly white celebrities. We could increase its effectiveness by incorporating more celebrities from other parts of the world. If we don’t acknowledge our biases when we gather a set of data, we only build for what we know rather than looking beyond it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Where to go from here?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Machine learning models are often trained on data blindly scraped from the Internet. After all, it’s easy to run a search term, pull thousands of images, and upload them as training data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2FScreen-Shot-2017-11-09-at-12.32.49-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F11%2FScreen-Shot-2017-11-09-at-12.32.49-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, this doesn’t reflect how diverse our world is. With these tips, you are equipped to recognize these nuances and build models that give meaningful results. Know that at Clarifai, we are aware of these possible influences, and our Data Strategy Team carefully improves our neural net models with them in mind. The team collaborates with our enterprise customers to make sure their needs are addressed and to iterate on building models that can enhance a platform’s experience. If you have any questions or want to learn more about building effective models, reach out to us at &lt;a href="mailto:hackers@clarifai.com"&gt;hackers@clarifai.com&lt;/a&gt;!&lt;/p&gt;



&lt;p&gt;The post &lt;a href="https://blog.clarifai.com/data-strategy-team-builds-accurate-unbiased-ai-technology/" rel="noopener noreferrer"&gt;How Clarifai builds accurate and unbiased AI technology&lt;/a&gt; appeared first on &lt;a href="https://blog.clarifai.com" rel="noopener noreferrer"&gt;Clarifai Blog | Artificial Intelligence in Action&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>datastrategy</category>
      <category>bias</category>
      <category>visualrecognition</category>
    </item>
    <item>
      <title>How Visual Similarity and Custom Metadata can Enhance Your Search</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Tue, 07 Nov 2017 17:00:05 +0000</pubDate>
      <link>https://dev.to/clarifai/how-visual-similarity-and-custom-metadata-can-enhance-your-search-bno</link>
      <guid>https://dev.to/clarifai/how-visual-similarity-and-custom-metadata-can-enhance-your-search-bno</guid>
      <description>

&lt;p&gt;&lt;em&gt;Data can be thought of simply as a thing we want to remember for future use. Metadata helps describe the data we’re trying to remember. So when we are putting it in context of visual similarity, we are adding metadata to describe inputs (data) that may not be visually distinguishable. Here’s what that means for searching images!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When we say metadata at Clarifai, we mean an attribute relating to a particular image that may or may not be visibly tangible. Think of it as the place where we store all the information we want an image to carry that is valuable to our project or business. Here are a few examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We could have users searching through a catalog, and we want to record which product they viewed. We would be able to reference the product’s ID from metadata without having to make an extra call to our database.&lt;/li&gt;
&lt;li&gt;We could add metadata to our inputs so that when a user comes along and wants to find an item nearby, we filter based on a zip code or region. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can craft Custom Metadata to suit any need.&lt;/p&gt;
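
&lt;p&gt;For example, an input carrying that kind of metadata might look like the object below. The field names (&lt;code&gt;productId&lt;/code&gt;, &lt;code&gt;zip&lt;/code&gt;) are hypothetical – metadata is freeform, so you pick whatever fields suit your project:&lt;/p&gt;

```javascript
// Sketch of an input with freeform custom metadata.
// The field names below are hypothetical examples.
const input = {
  url: 'https://example.com/products/sneaker.jpg',
  metadata: {
    productId: 'SKU-12345', // reference back to your own database
    zip: '10001'            // lets you filter items by region later
  }
}

console.log(input.metadata.productId) // "SKU-12345"
```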

&lt;p&gt;Let’s go through an example in-depth. We’re going to look at the case of a shoe store and how we can search using an image and our metadata. We will be able to find items that are visually similar to what we want and also filter items based upon them being on sale. If you want to see all of the code already written up, you can &lt;a href="https://github.com/maxcell/custom-metadata"&gt;check out this GitHub repo&lt;/a&gt; and its README.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Requirements&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you haven’t already, make sure to &lt;a href="https://clarifai.com/developer/account/signup"&gt;sign up for a free account on Clarifai&lt;/a&gt; and &lt;a href="https://clarifai.com/developer/account/applications"&gt;create an application&lt;/a&gt;. We will also need to be sure to have &lt;a href="https://nodejs.org/en/download"&gt;NodeJS&lt;/a&gt; installed.&lt;/p&gt;

&lt;p&gt;We have a rather short list of data for our little footwear shop. It is a CSV with several columns for each of the data points we want to represent: Product ID, Type, Color, Brand, Price, On Sale, In Stock, Image Source. Let’s use the prebuilt data &lt;a href="https://raw.githubusercontent.com/maxcell/custom-metadata/master/shoe-data.csv"&gt;here&lt;/a&gt; for our example and save it to our project folder as &lt;code&gt;shoe-data.csv&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Convert CSV and upload to Clarifai
&lt;/h3&gt;

&lt;p&gt;The team over at &lt;a href="http://www.adaltas.com/en/home/"&gt;Adaltas&lt;/a&gt; decided to share their code to &lt;a href="http://csv.adaltas.com/"&gt;help people parse data from spreadsheets&lt;/a&gt; in Node. It allows for flexibility in how we handle our data, so give them some kudos. We will install it with &lt;code&gt;npm install --save csv-parse&lt;/code&gt; in Terminal. Before this data is usable, we need to convert each row into a JSON object. Open up a new file, &lt;code&gt;upload.js&lt;/code&gt;, and we will write all of our actions here:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* upload.js */
const parse = require('csv-parse')
const fs = require('fs')

// ~Hidden magic we will come back to~

fs.readFile(__dirname+'/shoe-data.csv', 'utf8', (err, data) =&amp;gt; {
  if(err) { return console.log(err) }
  parse(data, { columns: true }, (err, output) =&amp;gt; { 
    if(err) { return err; }
    const shoeData = output.map((shoe) =&amp;gt; { return convertData(shoe) })
    uploadInputs(shoeData);
  });
})
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We still have the hidden magic to fill in that explains our &lt;code&gt;convertData()&lt;/code&gt; and &lt;code&gt;uploadInputs()&lt;/code&gt;! We have to start by adding the code that lets us use Clarifai. To install the Clarifai JavaScript client, go to Terminal and run &lt;code&gt;npm install clarifai --save&lt;/code&gt;. Include the client with &lt;code&gt;const Clarifai = require('clarifai')&lt;/code&gt; right below the other modules, and be sure to drop in your application’s API Key:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* upload.js */
const parse = require('csv-parse')
const fs = require('fs')
const Clarifai = require('clarifai')

const app = new Clarifai.App({ apiKey: 'YOUR_API_KEY' })

// ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we want to be sure to write out the &lt;code&gt;convertData()&lt;/code&gt; and &lt;code&gt;uploadInputs()&lt;/code&gt; functions. &lt;code&gt;convertData()&lt;/code&gt; will take our results from reading the CSV and then convert the data into inputs with metadata. &lt;code&gt;uploadInputs()&lt;/code&gt; then takes that data and sends it to Clarifai for it to store.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* upload.js */
const parse = require('csv-parse')
const fs = require('fs')
const Clarifai = require('clarifai')

const app = new Clarifai.App({ apiKey: 'YOUR_API_KEY' })

const uploadInputs = (inputs) =&amp;gt; {
    app.inputs.create(inputs).then(
      // Success
      (response) =&amp;gt; { console.log('Successful Upload!') },
      // Error
      (error) =&amp;gt; { console.error(error) }
    )
}

const convertData = (data) =&amp;gt; {
  return {
    url: data.imageSrc,
    metadata: data
  }
}
// ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By running &lt;code&gt;node upload.js&lt;/code&gt;, we have everything handled for taking a CSV and uploading it to Clarifai.&lt;/p&gt;

&lt;h3&gt;
  
  
  Searching on Clarifai
&lt;/h3&gt;

&lt;p&gt;Let’s write another small script to perform our search, which we will cleverly name &lt;code&gt;search.js&lt;/code&gt;. We will use this &lt;a href="https://farm4.staticflickr.com/3370/3344620504_b547190891_o_d.jpg"&gt;image&lt;/a&gt; for visual similarity and filter on metadata where the sale field is &lt;code&gt;TRUE&lt;/code&gt;. The image narrows the results to the kind of item a user wants – this is how we incorporate visual similarity into our search. However, there is no visual cue in the image alone to tell us which items are on sale, which is exactly why we also need the metadata.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* search.js */
const Clarifai = require('clarifai')

let app = new Clarifai.App({ apiKey: 'YOUR_API_KEY' })

// Searching by visual similarity
app.inputs.search([
    {
      input: {
        url: 'https://farm4.staticflickr.com/3370/3344620504_b547190891_o_d.jpg'
      }
    },
    {
      input: {
        metadata: {
          sale: 'TRUE'
        }
      }
    }])
.then((response) =&amp;gt; {
  response.hits.map(
    (hit) =&amp;gt; {
      console.log(`Price: $${hit.input.data.metadata.price}USD; URL: ${hit.input.data.image.url}`)
  })
})
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can run it with &lt;code&gt;node search.js&lt;/code&gt; in Terminal and get a list of all the matching prices and the URLs associated with the items. An important note: metadata search is sensitive to how the data was stored. If any of our keys or values contain capitals, they need to match &lt;strong&gt;exactly&lt;/strong&gt; when we go to search. Otherwise, the API will consider the object you stored and the object you searched for to be different things.&lt;/p&gt;
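
&lt;p&gt;To see why case matters, think of the metadata filter as an exact string comparison (a simplified model of the behavior, not the API’s internals):&lt;/p&gt;

```javascript
// Metadata filters behave like exact matches: keys and values
// must be identical strings, including case.
const stored = { sale: 'TRUE' }

const matches = (metadata, query) =>
  Object.keys(query).every((k) => metadata[k] === query[k])

console.log(matches(stored, { sale: 'TRUE' })) // true
console.log(matches(stored, { sale: 'true' })) // false
```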

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;We’ve just played around with one way to use metadata. Remember, we added a bunch of different labels onto our inputs. Some metadata may be more valuable than others but it is entirely freeform. If you are curious about some more search features such as by &lt;a href="https://clarifai.com/developer/guide/searches#by-geo-location"&gt;geo location&lt;/a&gt;, &lt;a href="https://clarifai.com/developer/guide/searches#by-public-concepts"&gt;public concepts&lt;/a&gt; or anything else, read more about it all in our &lt;a href="https://clarifai.com/developer/guide/searches#searches"&gt;guide&lt;/a&gt;. Let us know if you need any help or how you make use of custom metadata at &lt;a href="mailto:hackers@clarifai.com"&gt;hackers@clarifai.com&lt;/a&gt;!&lt;/p&gt;



&lt;p&gt;The post &lt;a href="https://blog.clarifai.com/how-visual-similarity-and-custom-metadata-can-enhance-your-search/"&gt;How Visual Similarity and Custom Metadata can Enhance Your Search&lt;/a&gt; appeared first on &lt;a href="https://blog.clarifai.com"&gt;Clarifai Blog | Artificial Intelligence in Action&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>tutorials</category>
      <category>ai</category>
      <category>visualsimilarity</category>
      <category>visualsearch</category>
    </item>
    <item>
      <title>Clarifai Featured Hack: Val.ai is a parking app for your self-driving car</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Tue, 24 Oct 2017 20:06:25 +0000</pubDate>
      <link>https://dev.to/clarifai/clarifai-featured-hack-valai-is-a-parking-app-for-your-self-driving-car-2ga9</link>
      <guid>https://dev.to/clarifai/clarifai-featured-hack-valai-is-a-parking-app-for-your-self-driving-car-2ga9</guid>
      <description>

&lt;p&gt;&lt;em&gt;What are self-driving cars supposed to do after they’re done driving you? It’s not a trick question, it’s a real problem that a team at TechCrunch Disrupt solved using Clarifai. Val.ai is an app that lets self-driving cars self-bid for self-parking spots.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the worst parts about driving is finding a place to park, especially if you’re a city-dweller. Even self-driving cars need to solve this problem at the end of the day. Val.ai is a way for autonomous vehicles to bid for nearby parking spaces … autonomously! When a self-driving car needs to park itself, it can submit real-time bids for local spots occupied by other autonomous cars. If a currently parked car knows it needs to pick someone up soon, it can accept a bid, relinquish its parking spot, and earn some money. The winning vehicle then gets directions to the vacated spot and secures a place to rest.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kBDjDj9I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.clarifai.com/wp-content/uploads/2017/10/val-ai.png" alt=""&gt;&lt;/strong&gt;
&lt;/h5&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;WHY WE ❤ IT&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;It takes special talent to foresee the problems of the future and solve for them today. Val.ai addresses something that most people don’t think about when they imagine a future with self-driving cars and solves a problem while monetizing it as well! &lt;a href="https://techcrunch.com/2017/05/14/self-parking-vehicle/"&gt;Read more about Val.ai on TechCrunch&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;HOW YOU DO IT&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;We caught up with Val.ai creator Gabriel Ortiz, CEO and Co-Founder of Nimblestack (a product shop that specializes in applying AI and automation to everyday products), to ask him to share his inspiration for Val.ai.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clarifai: What inspired your idea for Val.ai?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Gabriel: We were inspired by ThingSpace and Mapquest and the power of managing traffic with A.I. We decided to combine this technology with Clarifai’s image recognition API and autonomous vehicles to create a new style of business. We built Val.ai (Valet) with the ability to sell its parking space to other drivers, so owners of autonomous vehicles can make money from their very special cars.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How did you build the app?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We used Clarifai, Verizon Thingspace, Mapquest, HTTP, JavaScript, JQuery, Geolocation, HTML5, CSS3. A challenge we ran into was the short amount of time to build the software, but we think we came up with a clever way to make money from autonomous vehicles while managing to complete our goals in the allotted time!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What was the best part about working with the Clarifai API?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Clarifai API was actually really easy to integrate. We loved how much functionality we got for so little programming. Well done guys!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thanks for sharing, Gabriel!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To learn more, check out our &lt;a href="https://developer.clarifai.com/docs/"&gt;documentation&lt;/a&gt; and &lt;a href="https://developer.clarifai.com/signup/"&gt;sign-up for a free Clarifai account&lt;/a&gt; to start using our API – all it takes is a few lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet &lt;a href="https://www.twitter.com/Clarifai"&gt;@Clarifai&lt;/a&gt; to show us your apps.&lt;/p&gt;

&lt;p&gt;And give the Val.ai team (&lt;a href="https://twitter.com/nimblestack"&gt;@nimblestack&lt;/a&gt; &lt;a href="https://twitter.com/aarongfranco"&gt;@aarongfranco&lt;/a&gt; &lt;a href="https://twitter.com/nothinggrinder"&gt;@nothinggrinder&lt;/a&gt; &lt;a href="https://twitter.com/nimblechat"&gt;@nimblechat&lt;/a&gt;) some props in the comments below. Until next time!&lt;/p&gt;



&lt;p&gt;The post &lt;a href="https://blog.clarifai.com/val-ai-is-a-parking-app-for-your-self-driving-car/"&gt;Clarifai Featured Hack: Val.ai is a parking app for your self-driving car&lt;/a&gt; appeared first on &lt;a href="https://blog.clarifai.com"&gt;Clarifai Blog | Artificial Intelligence in Action&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>hack</category>
      <category>selfdrivingcar</category>
      <category>future</category>
      <category>imagerecognition</category>
    </item>
    <item>
      <title>Clarifai Featured Hack: Recyclodroid is a recycling robot made of recycled materials</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Thu, 05 Oct 2017 18:52:43 +0000</pubDate>
      <link>https://dev.to/clarifai/clarifai-featured-hack-recyclodroid-is-a-recycling-robot-made-of-recycled-materials-1mk</link>
      <guid>https://dev.to/clarifai/clarifai-featured-hack-recyclodroid-is-a-recycling-robot-made-of-recycled-materials-1mk</guid>
      <description>&lt;p&gt;&lt;em&gt;The Recyclodroid is an advanced robotic device that uses image recognition as it moves around to determine if objects in its path are recyclable. Not only does the robot have an environmentally-friendly mission, it’s also made out of recycled materials. Basically, you’re looking at a real-life Wall-E!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Climate change is real, despite what non-scientists would have us believe. That’s why the Recyclodroid is on a mission to help the environment, one recyclable at a time. The Recyclodroid is a robot that can identify recyclable objects in its path as it navigates the world. It’s made up of a USB webcam mounted on a robotic car built from household materials like toothpicks, Gatorade caps, and a broken calculator case. The webcam captures video of its surroundings and uses the Clarifai API to “see” whether the object in its path is recyclable or not.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F10%2Frecyclodroid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F10%2Frecyclodroid.png" alt="recyclodroid"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;WHY WE â¤ IT&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;We’ve got a soft spot for fellow do-gooders, so the environmentally-friendly mission of the Recyclodroid was right up our alley. We also love it when developers use Clarifai’s software with their own hardware, and bonus points to the Recyclodroid for being made of common household items! Try it out – here’s Recyclodroid’s &lt;a href="https://github.com/allai5/byteRecyclodroid" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;!&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;HOW YOU DO IT&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;We caught up with Alice Lai, a rising senior at High Technology High School who loves robotics, hardware hacking, and using technology for social good, to talk about her inspiration for Recyclodroid.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clarifai: What inspired your idea for Recyclodroid?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alice:&lt;/strong&gt; I was trying to come up with an idea for my project for the ByteHacks hackathon and I had brought with me a DIY robotic car I had made out of household materials. I started to play around with ideas and thought it would be cool to make it a self-driving car through computer vision. Another hacker commented that it was super cool that I was reusing household materials to create my project, and that gave me the idea to make a robotic car focused on recycling and environmental sustainability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How did you build the app?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recyclodroid is a robotic car built out of household materials (i.e. toothpicks, Gatorade caps, the broken plastic case of a calculator) with a focus on recycling. A USB webcam is mounted on this robot, which then uses the Clarifai API to “see” whether the object in front of it is recyclable or not, based on a long array of recyclable items.&lt;/p&gt;

&lt;p&gt;I used the JavaScript client for the Clarifai API to program the computer vision aspect of my project. I also used the Photon hardware development kit to control the movement of the robotic car (moving forward or not depending on whether the object was recyclable) and wrote a shell script to automate the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What was the best part about working with the Clarifai API?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Clarifai API was super easy to use, and it was really nice how all the tags/concepts that the image recognition output came back in a single large JSON response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thanks for sharing, Alice!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To learn more, check out our &lt;a href="https://clarifai.com/developer/docs/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; and &lt;a href="https://www.clarifai.com/developer/signup/" rel="noopener noreferrer"&gt;sign-up for a free Clarifai account&lt;/a&gt; to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet &lt;a href="http://www.twitter.com/Clarifai" rel="noopener noreferrer"&gt;@Clarifai&lt;/a&gt; to show us your apps.&lt;/p&gt;

&lt;p&gt;And give &lt;a href="https://twitter.com/allai4396" rel="noopener noreferrer"&gt;Alice&lt;/a&gt; some props in the comments below. Until next time!&lt;/p&gt;

</description>
      <category>visualrecognition</category>
      <category>ai</category>
      <category>hack</category>
      <category>api</category>
    </item>
    <item>
      <title>Clarifai is Hiring a Senior iOS Engineer</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Wed, 04 Oct 2017 18:44:54 +0000</pubDate>
      <link>https://dev.to/clarifai/clarifai-senior-ios-engineer-c09</link>
      <guid>https://dev.to/clarifai/clarifai-senior-ios-engineer-c09</guid>
      <description>&lt;h1&gt;
  
  
  Senior iOS Engineer
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;New York, NY&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clarifai is an artificial intelligence company that excels at visual recognition. We do not sell an abstract, futuristic technology - we sell a solution that people can use today to solve real-world problems. We believe that the same AI technology that gives big tech companies a competitive edge should be available to developers or businesses of any size or budget. That’s why we build products to make it easy, quick, and inexpensive for developers and businesses to innovate with AI, go to market faster, and build better user experiences. We make “teaching” AI just as accessible as we make using AI, which is why our technology is the most personalizable, unbiased, accurate solution in the market.&lt;/p&gt;

&lt;p&gt;We have secured a $30M Series B round of funding and are backed by Menlo Ventures, Google Ventures, USV, NVIDIA, Qualcomm, Osage, Lux Capital, LDV Capital, and Corazon Capital.  To continue to succeed, we need people like you to join the team here in NYC!&lt;/p&gt;

&lt;p&gt;Clarifai is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse workforce.&lt;/p&gt;

&lt;h2&gt;
  
  
  About
&lt;/h2&gt;

&lt;p&gt;The deep learning revolution has given computers a new fundamental sense: sight. Clarifai makes it easy for developers to teach computers how to see.&lt;/p&gt;

&lt;p&gt;We are excited for you to bring your mobile engineering expertise to help make our computer vision technology available on mobile devices and analyze the massive amounts of visual content users generate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We are looking for computer scientists and software engineers who love:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developing libraries, frameworks, and SDKs for iOS, tvOS, watchOS, and macOS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Discussing architecture and implementation of SDKs embedding Clarifai's A.I. technology.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building reusable objects to be shared among several layers and architectures (think of the many operating systems published by Apple).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementing interoperability between languages such as Swift and Objective-C with C++.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Crafting consumer-facing UI components encapsulating the complexity of the underlying layers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developing showcase apps that demonstrate Clarifai's technology and its potential.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The mobile team is lean and very focused, and we collaborate directly with the research, design, data, and marketing teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;In your first month, you will start off by learning the ropes. You will:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Learn our technology stack. Whether you are passionate about mobile, backend, frontend, or all-of-the-above, we want you to get familiar with what we've built so far.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understand the progress that we have made on integrating our cloud-based APIs with our mobile applications and SDK.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refine, iterate, and improve upon what has been built already.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3 months later, you start putting yourself out there. You will implement new features in our existing SDKs, libraries, and tools.&lt;/p&gt;

&lt;p&gt;6 months down the road, you will be on your way to making sure that when people think of Clarifai, they think of mobile. You will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Have a deep understanding of our tech stack, our code base, and what we have added to it since you've started.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make meaningful contributions to the code base and major progress on our priority products.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Propose and implement new and exciting applications of our technology.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In 12 months, you'll be bringing mobile to the forefront of Clarifai's product line, and in the future, you will strive to have Clarifai on every mobile device in some capacity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact
&lt;/h2&gt;

&lt;p&gt;As a Senior iOS Engineer at Clarifai, you will help bring A.I. into the physical world. You are going to help us make it possible to understand images and video in real-time, which enables brand new types of applications never built before.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://boards.greenhouse.io/clarifai/jobs/713800#app"&gt;Apply For This Position&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>hiring</category>
    </item>
    <item>
      <title>Clarifai is Hiring a Senior Frontend Engineer</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Wed, 04 Oct 2017 18:44:48 +0000</pubDate>
      <link>https://dev.to/clarifai/clarifai-senior-frontend-engineer-79h</link>
      <guid>https://dev.to/clarifai/clarifai-senior-frontend-engineer-79h</guid>
      <description>&lt;h1&gt;
  
  
  Senior Frontend Engineer
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;New York, NY&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clarifai is an artificial intelligence company that excels at visual recognition. We do not sell an abstract, futuristic technology - we sell a solution that people can use today to solve real-world problems. We believe that the same AI technology that gives big tech companies a competitive edge should be available to developers or businesses of any size or budget. That’s why we build products to make it easy, quick, and inexpensive for developers and businesses to innovate with AI, go to market faster, and build better user experiences. We make “teaching” AI just as accessible as we make using AI, which is why our technology is the most personalizable, unbiased, accurate solution in the market.&lt;/p&gt;

&lt;p&gt;We have secured a $30M Series B round of funding and are backed by Menlo Ventures, Google Ventures, USV, NVIDIA, Qualcomm, Osage, Lux Capital, LDV Capital, and Corazon Capital.  To continue to succeed, we need people like you to join the team here in NYC!&lt;/p&gt;

&lt;p&gt;Clarifai is proud to be an equal opportunity workplace dedicated to pursuing, hiring, and retaining a diverse workforce.&lt;/p&gt;

&lt;h2&gt;
  
  
  About
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You are passionate about using your frontend engineering expertise to broaden the reach of our artificial intelligence technology and make it easily accessible to all.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You collaborate closely with other members of our team, including the backend, infrastructure, and applied machine learning engineers, designers, and product managers. You work together to craft and implement new product features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You build intuitive web applications that will give users hands-on access to our machine learning platform and custom training, which allows users to train their own models without using any code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will develop reusable modules, components, and build tools for both internal and external use cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You mentor engineers that join the frontend team. We want you to have the opportunity to teach others what you've learned.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our current stack is React with Redux, Babel and webpack for our build, and Less for CSS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;In your first month, you will start off by learning the ropes. You will:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Learn our front end tech stack. We want to give you the time to get familiar with our code base (and our team!).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Become familiar with the concepts behind artificial intelligence and learn about the APIs that we’ve already built to bring these concepts to life.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Work on fixes and new features to our existing applications and libraries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3 months down the road and beyond, you'll be moving at full speed and will:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make significant contributions to the code base and major progress on our priority products.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Propose new applications of our technology and begin the work necessary to put those ideas into production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have a deep understanding of our tech stack, our code base, and how it has developed since you started.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Beyond a year, you will strive to develop your frontend engineering skills, build Clarifai's future products, and help grow the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact
&lt;/h2&gt;

&lt;p&gt;With an eye for accessibility, performance, usability, and the future of web standards, you will make using our artificial intelligence products an amazing and intuitive experience for all.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://boards.greenhouse.io/clarifai/jobs/492226#app"&gt;Apply For This Position&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>hiring</category>
    </item>
    <item>
      <title>Clarifai is Hiring a Senior Backend Engineer</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Wed, 04 Oct 2017 18:44:44 +0000</pubDate>
      <link>https://dev.to/clarifai/clarifai-senior-backend-engineer-8f</link>
      <guid>https://dev.to/clarifai/clarifai-senior-backend-engineer-8f</guid>
      <description>&lt;h1&gt;
  
  
  Senior Backend Engineer
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;New York, NY&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clarifai is an artificial intelligence company that excels in visual recognition, solving real-world problems for businesses and developers alike. Founded in 2013 by Matthew Zeiler, a foremost expert in machine learning, Clarifai has been a market leader since winning the top five places in image classification at the ImageNet 2013 competition, and predicts more than 1.2 billion concepts in photos and videos every month. Clarifai’s powerful image and video recognition technology is built on the most advanced machine learning systems and made easily accessible by a clean API, empowering developers all over the world to build a new generation of intelligent applications.&lt;/p&gt;

&lt;p&gt;Clarifai raised a $30 million Series B in 2016 led by Menlo Ventures, Union Square Ventures, and Lux Capital. Existing investors include Google Ventures, Qualcomm Ventures, NVIDIA Ventures, Corazon Capital, LDV Capital, Osage University Partners, and New York University.&lt;/p&gt;

&lt;p&gt;Clarifai is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse workforce.&lt;/p&gt;

&lt;h2&gt;
  
  
  About
&lt;/h2&gt;

&lt;p&gt;As a senior engineer, you collaborate with your colleagues on the backend team to set a technical vision for our AI systems, as well as train and mentor engineers to develop their skills and technical understanding.&lt;br&gt;
You architect our AI web services in addition to improving existing features, reliability, flexibility, and scalability as usage increases.&lt;/p&gt;

&lt;p&gt;We are looking for someone comfortable in several programming languages and excited about building new features in Go and Python. You should care about software design and have built systems that other people love to use and work with, and have experience building and scaling distributed, highly-available systems.&lt;/p&gt;

&lt;p&gt;Our Backend Tech Stack includes Python, Go, Postgres, Docker, Redis, REST, AWS, and Kubernetes (but don't worry if you haven't used some of these- we will teach you anything you don't know!).&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;In the first month, you’ll start off by learning the ropes. You will:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Get familiar with our code base (as well as the backend and infrastructure teams). We would like you to take this time to get comfortable working with what we’ve built and who has helped build it so far, and give us the feedback only a fresh perspective can bring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn about the distinctive challenges of machine learning systems using GPUs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify and resolve production bugs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Within 3 months, you will have gained confidence in the code and will:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Help to plan feature development, requirements, and our technical road map.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accelerate development of our machine learning API feature set.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve user management and refine API permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build and measure benchmarking and stress test tools.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Within 6 months, you’ll:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Measure and optimize the customer-facing custom training API service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Design, deploy, and run web-scale distributed storage systems of various flavors, both relational (MySQL, Postgres) and NoSQL (Redis, Elasticsearch, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expand on quality assurance infrastructure and continuous deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify web security risks and write tools to improve security issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Work closely and communicate with product managers on hiring and timelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Impact
&lt;/h2&gt;

&lt;p&gt;You build the systems and services behind the Clarifai magic. Neural networks are data-hungry beasts, and you keep them well fed!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://boards.greenhouse.io/clarifai/jobs/492228#app"&gt;Apply For This Position&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>hiring</category>
    </item>
    <item>
      <title>Introducing our new Usage Dashboard – check your real-time and historic API usage data</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Thu, 28 Sep 2017 14:09:16 +0000</pubDate>
      <link>https://dev.to/clarifai/introducing-our-new-usage-dashboard--check-your-real-time-and-historic-api-usage-data-a0m</link>
      <guid>https://dev.to/clarifai/introducing-our-new-usage-dashboard--check-your-real-time-and-historic-api-usage-data-a0m</guid>
      <description>&lt;p&gt;&lt;em&gt;We’re excited to show off our new Usage Dashboard feature, which you can use to  check your real-time and historic usage data for our visual recognition API. With the dashboard, you’ll be able to learn more about your usage pattern, identify any inconsistency in the historic trends, and reconcile your monthly bill more easily!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://clarifai.com/developer/login/" rel="noopener noreferrer"&gt;Show me the dashboard!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wondering where you can go to explore your past usage information? Or where you can see how many operation calls you’re making as you test out our API? Well, dear Clarifai users, we heard you loud and clear — and today, we’re announcing the launch of our shiny new Usage Dashboard!&lt;/p&gt;

&lt;p&gt;The new Usage Dashboard allows you to check your real-time and historic usage data up to the last 90 days. With the dashboard, you’ll be able to learn more about your usage pattern, identify any inconsistency in the historic trends, and reconcile your monthly bill more easily.&lt;/p&gt;

&lt;p&gt;To check out the new feature, log into your Clarifai account, and locate the &lt;strong&gt;Usage&lt;/strong&gt; section on the left-hand side menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage5.png" alt="image5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, you’ll be able to access your own usage dashboard. The top panels show your real-time usage metrics, broken down into three billed categories — Billed Operations, Custom Concepts, and Stored Inputs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage3.png" alt="image3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the real-time usage are the historic usage graphs for the same categories. You can select the billing cycle to refresh the displayed data on these graphs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage4.png" alt="image4"&gt;&lt;/a&gt; &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage2.png" alt="image2"&gt;&lt;/a&gt; &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F09%2Fimage1.png" alt="image1"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;How do you like our new Usage Dashboard? Let us know at &lt;a href="mailto:feedback@clarifai.com"&gt;feedback@clarifai.com&lt;/a&gt; — we’d love to hear from you!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://clarifai.com/developer/login/" rel="noopener noreferrer"&gt;Show me the dashboard!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>companynews</category>
      <category>productannouncement</category>
    </item>
    <item>
      <title>Getting Started with Search by Geo Location</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Tue, 12 Sep 2017 18:47:57 +0000</pubDate>
      <link>https://dev.to/clarifai/getting-started-with-search-by-geo-location-bgc</link>
      <guid>https://dev.to/clarifai/getting-started-with-search-by-geo-location-bgc</guid>
      <description>

&lt;p&gt;&lt;em&gt;Fact: Clarifai’s Search API allows you to search your images and video by visual similarity. Lesser known fact: it also lets you search your media by geolocation data! Learn more about this feature and how to put it into action for your own developer application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Search API allows you to send images to Clarifai and have them indexed using concepts and their visual representation. After they are indexed, you can search over your inputs using concepts (e.g. dog) or images (e.g. visually similar dogs). Clarifai can extend this search by adding extra data points onto our inputs such as geolocation data. A search by geolocation acts as a filter of inputs so you get only results within a specified range.&lt;/p&gt;

&lt;p&gt;We’ll look at attaching geolocation data to your inputs and then query that data with different measurements of distance to see the change in results. For this tutorial, we are going to use &lt;a href="https://nodejs.org"&gt;Node.js&lt;/a&gt;. Let’s get started by installing the official &lt;a href="https://github.com/Clarifai/clarifai-javascript"&gt;Clarifai JavaScript client&lt;/a&gt; with&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install clarifai
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding Inputs
&lt;/h3&gt;

&lt;p&gt;You need to &lt;a href="https://clarifai.com/developer/signup"&gt;sign up&lt;/a&gt; for Clarifai and &lt;a href="https://clarifai.com/developer/guide/applications#applications"&gt;create an application&lt;/a&gt; before you can get started. Inputs are added using either a &lt;a href="https://clarifai.com/developer/guide/search#add-images-to-search-index"&gt;URL or bytes&lt;/a&gt;. Along with your input source, we’ll add a geo object containing keys for the GPS coordinates (latitude and longitude). Remember that the coordinate system uses the cardinal directions: North and East are positive numbers, while South and West are negative numbers.&lt;/p&gt;
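&lt;p&gt;As a quick illustration of that sign convention (a standalone sketch, not part of the Clarifai client), a coordinate quoted with a cardinal direction can be converted to the signed decimal degrees the geo object expects:&lt;/p&gt;

```javascript
// Illustrative helper (not part of the Clarifai client): convert a coordinate
// quoted with a cardinal direction into signed decimal degrees.
// North and East stay positive; South and West become negative.
function toSignedDegrees (value, direction) {
  return (direction === 'S' || direction === 'W') ? -Math.abs(value) : Math.abs(value)
}

// The Statue of Liberty sits at 40.689247° N, 74.044502° W:
console.log(toSignedDegrees(40.689247, 'N')) // 40.689247
console.log(toSignedDegrees(74.044502, 'W')) // -74.044502
```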

&lt;p&gt;Below we will add &lt;a href="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Statue_of_Liberty_7.jpg/1200px-Statue_of_Liberty_7.jpg"&gt;an image of the Statue of Liberty using a URL&lt;/a&gt; from Wikipedia:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Clarifai = require('clarifai')
const app = new Clarifai.App({ apiKey: 'YOUR_API_KEY_HERE' })

app.inputs.create({
  url: "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Statue_of_Liberty_7.jpg/1200px-Statue_of_Liberty_7.jpg",
  geo: {
    latitude: 40.689247,
    longitude: -74.044502
  }
})
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and use a &lt;a href="http://cdn.history.com/sites/2/2015/02/golden-gate-bridge-iStock_000019197672Large-H.jpeg"&gt;local image of the Golden Gate Bridge&lt;/a&gt; from HISTORY and a function for file-to-base64 conversion:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Clarifai = require('clarifai')
const app = new Clarifai.App({ apiKey: 'YOUR_API_KEY_HERE' })

const convert_bytes = (img) =&amp;gt; {
  const img_file = fs.readFileSync(img)
  return new Buffer(img_file).toString('base64')
}

app.inputs.create({
  base64: convert_bytes('./golden_gate.jpg'),
  geo: {
    latitude: 37.807812,
    longitude: -122.475164
  }
})
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Searching for Images
&lt;/h3&gt;

&lt;p&gt;Once your images are uploaded, you will be able to use them with search. When searching by geolocation, you can refine your results using a single point and some radius given ‘withinMiles’, ‘withinKilometers’, ‘withinDegrees’, or ‘withinRadians’.&lt;/p&gt;

&lt;p&gt;Let’s say we only want results of images within a mile of the Empire State Building in New York City (because it’s the best city in the world, naturally). Our search would look like this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.inputs.search({
  input: {
    geo: {
      latitude: 40.748817,
      longitude: -73.985428,
      type: 'withinMiles',
      value: 1.0
    }
  }
}).then((response) =&amp;gt; { console.log(response.hits)})

// Response
[]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above example returned no results because the search radius is so small. If we increase the value, we get our hit:&lt;/p&gt;
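&lt;p&gt;The numbers check out: by the haversine formula, the Statue of Liberty input sits roughly five miles from the Empire State Building, so a 1-mile radius misses it and a 7-mile radius catches it. A quick back-of-the-envelope check (this helper is our own, not part of the client):&lt;/p&gt;

```javascript
// Great-circle distance in miles between two { latitude, longitude } points
// via the haversine formula. Illustrative helper, not part of the client.
const toRad = (deg) => deg * Math.PI / 180

const haversineMiles = (a, b) => {
  const R = 3958.8 // mean Earth radius in miles
  const dLat = toRad(b.latitude - a.latitude)
  const dLon = toRad(b.longitude - a.longitude)
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) *
    Math.sin(dLon / 2) ** 2
  return 2 * R * Math.asin(Math.sqrt(h))
}

const empireState = { latitude: 40.748817, longitude: -73.985428 }
const statueOfLiberty = { latitude: 40.689247, longitude: -74.044502 }

console.log(haversineMiles(empireState, statueOfLiberty)) // roughly 5.1 miles
```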



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.inputs.search({
  input: {
    geo: {
      latitude: 40.748817,
      longitude: -73.985428,
      type: 'withinMiles',
      value: 7.0
    }
  }
}).then((response) =&amp;gt; { console.log(response.hits)})

// Response
[{
  score: 1,
  input: {
    id: 'd7b80aac52f14399b98a9472fc201e64',
    data: {
      image: {
        url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Statue_of_Liberty_7.jpg/1200px-Statue_of_Liberty_7.jpg'
      },
      concepts: [
        {
          id: 'statue',
          name: 'statue',
          value: 1,
          app_id: 'fb70b904750c4891aecddf82082181c2'
        },
        {
          id: 'bridge',
          name: 'bridge',
          value: 0,
          app_id: 'fb70b904750c4891aecddf82082181c2'
        }
      ],
      metadata: {},
      geo: {
        geo_point: {
          longitude: -74.0445,
          latitude: 40.689247
        }
      }
    },
    created_at: '2017-09-05T17:52:38.616686Z',
    modified_at: '2017-09-05T17:52:39.029363Z',
    status: [Object]
  }
}]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The search result gives us an array of matches. Each object in the array is one of our inputs that satisfies the search criteria, along with a score indicating how closely the result matches our query. To access the data attached to a specific match, look at its input.data key: that’s where you’ll find its custom concepts, geolocation data, and custom metadata, as well as the input’s original URL or bytes.&lt;/p&gt;
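&lt;p&gt;For example, a small helper (our own, purely illustrative) that flattens each hit down to its score, image URL, and concept names:&lt;/p&gt;

```javascript
// Given the `hits` array from an app.inputs.search response, pull out each
// match's score, image URL, and concept names. `summarizeHits` is our own
// helper, not part of the Clarifai client.
const summarizeHits = (hits) =>
  hits.map((hit) => ({
    score: hit.score,
    url: hit.input.data.image.url,
    concepts: (hit.input.data.concepts || []).map((c) => c.name)
  }))

// A trimmed-down example hit:
const hits = [{
  score: 1,
  input: {
    id: 'd7b80aac52f14399b98a9472fc201e64',
    data: {
      image: { url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Statue_of_Liberty_7.jpg/1200px-Statue_of_Liberty_7.jpg' },
      concepts: [{ id: 'statue', name: 'statue', value: 1 }, { id: 'bridge', name: 'bridge', value: 0 }]
    }
  }
}]

console.log(summarizeHits(hits))
// -> one summary object with concepts [ 'statue', 'bridge' ]
```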

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The Search API is a powerful tool and searching using geolocation is just one of many useful features for developers. You can do much more with the Search API like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search using &lt;a href="https://clarifai.com/developer/guide/searches#by-public-concepts"&gt;Public&lt;/a&gt; or &lt;a href="https://clarifai.com/developer/guide/searches#by-custom-concepts"&gt;Custom&lt;/a&gt; concepts&lt;/li&gt;
&lt;li&gt;Search using your own &lt;a href="https://clarifai.com/developer/guide/searches#by-custom-metadata"&gt;custom metadata&lt;/a&gt; on inputs&lt;/li&gt;
&lt;li&gt;Combine search options using &lt;a href="https://clarifai.com/developer/guide/searches#search-anding"&gt;ANDing&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions, concerns, or even friendly notes feel free to reach out to us over at &lt;a href="mailto:hackers@clarifai.com"&gt;hackers@clarifai.com&lt;/a&gt; or comment below!&lt;/p&gt;




</description>
      <category>tutorials</category>
      <category>search</category>
      <category>api</category>
      <category>visualsimilarity</category>
    </item>
    <item>
      <title>One thousand captcha photos organized with a neural network</title>
      <dc:creator>Clarifai Team</dc:creator>
      <pubDate>Fri, 18 Aug 2017 17:22:15 +0000</pubDate>
      <link>https://dev.to/clarifai/one-thousand-captcha-photos-organized-with-a-neural-network-84n</link>
      <guid>https://dev.to/clarifai/one-thousand-captcha-photos-organized-with-a-neural-network-84n</guid>
      <description>&lt;p&gt;In this post, we’ll dive deeper into organizing photos by visual similarity in three steps: embedding via a neural net, further dimension reduction via t-SNE, and snapping things to a grid by solving an assignment problem. Then we’ll walk you through doing this yourself by calling one of our endpoints on your own Clarifai application.&lt;/p&gt;

&lt;p&gt;The below image shows 1024 of the captcha photos used in “I’m not a human: Breaking the Google reCAPTCHA” by Sivakorn, Polakis, and Keromytis arranged on a 32×32 grid in such a way that visually-similar photos appear in close proximity to each other on the grid.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F08%2F1000captcha.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fblog.clarifai.com%2Fwp-content%2Fuploads%2F2017%2F08%2F1000captcha.jpeg" alt="1000captcha"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How did we do this?
&lt;/h2&gt;

&lt;p&gt;To get from the collection of captcha photos to the grid above we take three steps: embedding via a neural net, further dimension reduction via t-SNE, and finally snapping things to a grid by solving an assignment problem. Images are naturally very high-dimensional objects: even a “small” 224×224 image requires 224*224*3 = 150,528 RGB values. When represented naively as huge vectors of pixels, visually-similar images may have enormous vector distances between them. For example, a left/right flip will generate a visually-similar image but can easily lead to a situation where each pixel in the flipped version has an entirely different value from the original.&lt;/p&gt;
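&lt;p&gt;A toy numeric example of that last point, using a made-up four-pixel grayscale “image”:&lt;/p&gt;

```javascript
// Toy illustration: in raw pixel space a left/right flip can sit far from the
// original, while a genuinely different (uniformly darker) image sits close.
const l2 = (a, b) =>
  Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0))

// A 1x4 grayscale "image": dark on the left, bright on the right.
const img     = [10, 40, 200, 230]
const flipped = [230, 200, 40, 10]  // same content, mirrored
const darker  = [0, 30, 190, 220]   // every pixel dimmed by 10

console.log(l2(img, flipped)) // ~384.7: huge, despite visual similarity
console.log(l2(img, darker))  // 20: small, despite a real change
```

&lt;p&gt;The mirrored copy lands far from the original while the darkened copy lands close, which is the opposite of what visual similarity demands.&lt;/p&gt;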

&lt;p&gt;&lt;em&gt;Remark: Code for all of this is available here: &lt;a href="https://github.com/Clarifai/public-notebooks/blob/master/gridded_tsne_blog_public.ipynb" rel="noopener noreferrer"&gt;https://github.com/Clarifai/public-notebooks/blob/master/gridded_tsne_blog_public.ipynb&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fs3.amazonaws.com%2Fimtagco%2Fblog%2F2x2captcha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fs3.amazonaws.com%2Fimtagco%2Fblog%2F2x2captcha.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Reducing from 150528 to 1024 dimensions with a neural net
&lt;/h3&gt;

&lt;p&gt;Our photos begin as 224x224x3 arrays of RGB values. We pass each image through an existing pre-trained neural network, Clarifai’s &lt;a href="https://developer.clarifai.com/models/general-embedding-image-recognition-model/bbb5f41425b8468d9b7a554ff10f8581" rel="noopener noreferrer"&gt;general embedding model&lt;/a&gt; which provides us with the activations from one of the top layers of the net. Using the higher layers from a neural net provides us with representations of our images which are rich in semantic information – the vectors of visually similar images will be close to each other in the 1024-dimensional space.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Reducing from 1024 to 2 dimensions with t-SNE
&lt;/h3&gt;

&lt;p&gt;In order to bring things down to a space where we can start plotting, we must reduce dimensions again. We have lots of options here. Some examples:&lt;/p&gt;

&lt;h4&gt;
  
  
  Inductive methods for embedding learning
&lt;/h4&gt;

&lt;p&gt;Techniques such as the remarkably hard-to-Google &lt;a href="http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf" rel="noopener noreferrer"&gt;DrLIM&lt;/a&gt; (Dimensionality Reduction by Learning an Invariant Mapping) or Siamese networks with triplet losses learn a function that can embed new images into fewer dimensions without any additional retraining. These techniques perform extremely well on benchmark datasets and are a great fit for online systems which must index previously-unseen images. For our application, though, we only need to reduce a fixed set of vectors to 2D in one large, slow step.&lt;/p&gt;

&lt;h4&gt;
  
  
  Transductive methods for dimensionality reduction
&lt;/h4&gt;

&lt;p&gt;Rather than learning a function which can map new points to a few dimensions, we can attack our problem more directly by learning a mapping from the high-dimensional space to 2D which preserves distances in the high-dimensional space as much as possible. Several techniques are available: &lt;a href="https://distill.pub/2016/misread-tsne/" rel="noopener noreferrer"&gt;t-SNE&lt;/a&gt; and &lt;a href="https://github.com/lferry007/LargeVis" rel="noopener noreferrer"&gt;largeVis&lt;/a&gt;, to name a few. Other methods, such as PCA, are not optimized for distance preservation or visualization and tend to produce less interesting plots. t-SNE, even during convergence, can produce very interesting plots (cf. this demonstration by &lt;a href="https://twitter.com/genekogan" rel="noopener noreferrer"&gt;@genekogan&lt;/a&gt; &lt;a href="https://vimeo.com/191187346" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;We use t-SNE to map our 1024D vectors down to 2D and generate the first entry in the above grid. Recall that our high-dimensional vectors here are 1024D embeddings from a neural net, so proximal vectors should correspond to visually similar photos. Without the neural net, t-SNE would be a poor choice, as distances between the initial 224x224x3 pixel vectors are uninteresting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Snapping to a grid with the Jonker-Volgenant algorithm
&lt;/h3&gt;

&lt;p&gt;One problem with t-SNE’d embeddings is that if we displayed the images directly over their corresponding 2D points we’d be left with swaths of empty white space and crowded regions where images overlap each other. We remedy this by building a 32×32 grid and moving the t-SNE’d points to the grid in such a way that total distance traveled is optimal.&lt;/p&gt;

&lt;p&gt;It turns out that this operation can be incredibly sophisticated. There is an entire field of mathematics, &lt;a href="https://en.wikipedia.org/wiki/Transportation_theory_(mathematics)" rel="noopener noreferrer"&gt;transportation theory&lt;/a&gt;, concerned with solutions to problems in optimal transport under various circumstances. For example, if one’s goal is to minimize the sum of the squares of all distances traveled rather than simply the sum of the distances traveled (i.e., the L2 Monge-Kantorovich mass transfer problem), an optimal mapping can be found by recasting the assignment problem as one in computational fluid dynamics and &lt;a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.7.6791&amp;amp;rep=rep1&amp;amp;type=pdf" rel="noopener noreferrer"&gt;solving the corresponding PDEs&lt;/a&gt;. &lt;a href="https://en.wikipedia.org/wiki/C%C3%A9dric_Villani" rel="noopener noreferrer"&gt;Cédric Villani&lt;/a&gt;, who won a Fields Medal in 2010, wrote a great &lt;a href="http://cedricvillani.org/wp-content/uploads/2012/08/preprint-1.pdf" rel="noopener noreferrer"&gt;book&lt;/a&gt; on optimal transportation theory which is worth taking a look at when you get tired of corporate machine learning blogs.&lt;/p&gt;

&lt;p&gt;In our setting, we just want the t-SNE’d points to snap to the grid in a way that looks visually appealing and is as simple as possible. Thus, we search for a mapping that minimizes the sum of the distances traveled via a &lt;a href="https://en.wikipedia.org/wiki/Assignment_problem" rel="noopener noreferrer"&gt;linear assignment problem&lt;/a&gt;. The textbook solution here is the &lt;a href="https://en.wikipedia.org/wiki/Hungarian_algorithm" rel="noopener noreferrer"&gt;Hungarian algorithm&lt;/a&gt;; however, this can also be solved quite easily and much faster using &lt;a href="https://blog.sourced.tech/post/lapjv/" rel="noopener noreferrer"&gt;Jonker-Volgenant&lt;/a&gt; and &lt;a href="https://github.com/src-d/lapjv" rel="noopener noreferrer"&gt;open source tools&lt;/a&gt;.&lt;/p&gt;
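&lt;p&gt;For intuition, here is a brute-force version of the assignment step on three made-up points (nothing like as fast as Jonker-Volgenant, but it minimizes the same objective for tiny inputs):&lt;/p&gt;

```javascript
// Brute-force linear assignment: map each 2D point to a distinct grid cell so
// that the total Euclidean distance moved is minimized. Only feasible for a
// handful of points; Jonker-Volgenant solves the same problem at scale.
const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1])

const permutations = (arr) =>
  arr.length <= 1 ? [arr] :
  arr.flatMap((v, i) =>
    permutations([...arr.slice(0, i), ...arr.slice(i + 1)]).map((rest) => [v, ...rest]))

const bestAssignment = (points, cells) => {
  let best = null
  for (const perm of permutations(cells)) {
    const cost = points.reduce((sum, p, i) => sum + dist(p, perm[i]), 0)
    if (!best || cost < best.cost) best = { cost, cells: perm }
  }
  return best
}

// Three t-SNE'd points (made up) and a 1x3 grid of cells:
const points = [[0.1, 0], [2.2, 0.1], [0.9, -0.1]]
const grid = [[0, 0], [1, 0], [2, 0]]
console.log(bestAssignment(points, grid).cells)
// -> [[0, 0], [2, 0], [1, 0]]  (each point snaps to its nearest free cell)
```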

&lt;h2&gt;
  
  
  How easy can we make this?
&lt;/h2&gt;

&lt;p&gt;Pretty easy. In addition to the notebook listed above, we’ve also set up an API endpoint that will generate an image similar to the one above for an existing Clarifai application. Here we assume you have already created an application by visiting &lt;a href="https://clarifai.com/developer/account/applications/" rel="noopener noreferrer"&gt;https://clarifai.com/developer/account/applications&lt;/a&gt; and added your favorite images to it by calling the resource &lt;em&gt;&lt;a href="https://api.clarifai.com/v2/inputs" rel="noopener noreferrer"&gt;https://api.clarifai.com/v2/inputs&lt;/a&gt;&lt;/em&gt;. Then all you have to do is this:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Kick off an asynchronous gridded t-SNE visualization
&lt;/h3&gt;

&lt;p&gt;Since generating a visualization takes a while, we generate one asynchronously. We kick off a visualization by calling&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST https://api.clarifai.com/v2/visualizations/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get a response like the one below, informing us that a “pending” visualization is scheduled to be computed.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "output": {
        "id": "ca69f34d53c742e1b4a1b71d7b4b4586",
        ...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the id &lt;em&gt;ca69f34d53c742e1b4a1b71d7b4b4586&lt;/em&gt;. We will use that id to get the visualization we just kicked off.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Check to see if the visualization is done
&lt;/h3&gt;

&lt;p&gt;Call&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /v2/visualizations/ca69f34d53c742e1b4a1b71d7b4b4586
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The returned visualization will be “pending” for a while, but eventually we should get a response like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "output": {
        "data": {
            "image": {
                "url": "https://s3.amazonaws.com/clarifai-visualization/gridded-tsne/staging/your-visualization.jpg"
            }
        },
        ...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At last, the &lt;code&gt;output.data.image.url&lt;/code&gt; contains your gridded t-SNE visualization.&lt;/p&gt;
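&lt;p&gt;Putting the two steps together, the client-side wait can be sketched as a small polling loop. This is our own sketch: the &lt;code&gt;check&lt;/code&gt; function stands in for the HTTP GET above, and the simplified &lt;code&gt;status&lt;/code&gt; field is an assumption, not the API’s exact response shape.&lt;/p&gt;

```javascript
// Poll an async `check` function (standing in for a GET on
// /v2/visualizations/:id) until it reports something other than "pending",
// sleeping between attempts and giving up after a cap. Sketch only: the
// simplified { status } shape is an assumption, not the exact API response.
const pollUntilReady = async (check, { delayMs = 5000, maxAttempts = 60 } = {}) => {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check()
    if (result.status !== 'pending') return result
    await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
  throw new Error(`still pending after ${maxAttempts} attempts`)
}
```

&lt;p&gt;Call it with a &lt;code&gt;check&lt;/code&gt; that issues the GET with your favorite HTTP library and returns the parsed JSON.&lt;/p&gt;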

&lt;p&gt;If you have any questions on the post, you can reach out to &lt;a href="mailto:hackers@clarifai.com"&gt;hackers@clarifai.com&lt;/a&gt;. Also, send us your t-SNE visualizations if you want them shared!&lt;/p&gt;

</description>
      <category>neuralnetwork</category>
      <category>visualsimilarity</category>
      <category>ai</category>
      <category>captcha</category>
    </item>
  </channel>
</rss>
