Gourav Singh Rawat
Computer vision API - Using Microsoft Azure Cognitive services

Cognitive services

Cognitive Services are a set of machine learning algorithms that Microsoft has developed to solve problems in the field of Artificial Intelligence (AI). The goal of Cognitive Services is to democratise AI by packaging it into discrete components that are easy for developers to use in their own apps.

I recently created an application, Azura, using this same method.

GitHub: Seek4samurai / Azura

Azura is powered by Microsoft Azure's Computer Vision cognitive service. It's available both as a web application and as a browser extension.

What is Azura? 🚀

This is an extension just like those we put on our browsers, and also a sort of search tool: it takes an image URL as input, processes it using Microsoft Azure's Computer Vision, and describes what the image is about. It's basically a tool that demonstrates one use of computer vision.

Live demo 🌏

Website is live at
But do check out the extension as well; it has an even better user experience, with a text-to-speech feature that reads out the description of the image.

How to use it as an extension 🧑🏼‍💻

Clone the following repository, or download it as a zip:

Adding to your browser 📝

To add this extension, go to your browser >> extensions

First, you need to turn Developer mode on.

Once this is done, you can import extensions.

Click on…

If you're familiar with computer vision, you know how it works: it's a technique in which we train a machine's vision to recognise real-world things, which could be either inanimate objects or living things, such as human faces or animals.

Microsoft Azure provides some free-to-use cognitive service APIs for creating such computer-vision-powered applications.

Getting started

Creating Azure resource

Select Computer vision as the resource type and then create the resource.
Azure Home

After you've created the resource, you can find your API key and endpoint in it.

Azure resource

Using the API client
Once you've done all the previous steps correctly, you can get started with your workspace.

Server setup
To create the server we are using Node.js; get started with npm init -y. Once you've initialised the project, you have to install the following packages and libraries.

```json
{
  "name": "azura-backend",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "dev": "nodemon ./src/index.js",
    "start": "node ./src/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@azure/cognitiveservices-computervision": "^8.1.0",
    "cors": "^2.8.5",
    "dotenv": "^16.0.0",
    "express": "^4.17.2"
  },
  "devDependencies": {
    "nodemon": "^2.0.15"
  }
}
```

Here, we are using Express to create the server, and to use the Azure cognitive services we install:

```shell
npm i @azure/cognitiveservices-computervision
```

Create a src folder and an index.js file in it to start a server and handle basic routes.

```javascript
const express = require("express");
const dotenv = require("dotenv");
const cors = require("cors");

dotenv.config();

const imageController = require("./controller");

const app = express();
app.use(cors({ origin: "*" }));
app.use(express.json());

// Routes
app.use("/", imageController);

const PORT = process.env.PORT || 5000;

app.listen(PORT, () => {
  console.log(`App running on port ${PORT}`);
});
```

Once this is done, create a controller.js file, where we will use the Computer Vision client for our application.

```javascript
const express = require("express");
const ComputerVisionClient =
  require("@azure/cognitiveservices-computervision").ComputerVisionClient;
const ApiKeyCredentials = require("@azure/ms-rest-js").ApiKeyCredentials;

const router = express.Router();

router.post("/describe", async (req, res) => {
  const KEY = process.env.KEY;
  const ENDPOINT = process.env.ENDPOINT;

  // Create a new client
  const computerVisionClient = new ComputerVisionClient(
    new ApiKeyCredentials({ inHeader: { "Ocp-Apim-Subscription-Key": KEY } }),
    ENDPOINT
  );

  if (!req.body.imageUrl) {
    return res.send("Image url is not set! Please provide an image!");
  }
});

module.exports = router;
```

Remember, you have to create a .env file in your local workspace and paste your API key and endpoint into it; to load them I'm using the dotenv package. We've initialised the client, and when we send a POST request to /describe, it should hit our client. You can try using Postman to check this API call.
In the last lines we simply check whether the request is missing the image URL and, if so, return the "url is not set" message.
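The .env file just maps the two values the controller reads with process.env; the variable names below match those lookups, and the values shown are placeholders, not real credentials:

```
KEY=your-computer-vision-api-key
ENDPOINT=https://your-resource-name.cognitiveservices.azure.com/
```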

After all this, we can go ahead, create a try-catch block inside the /describe handler, and use the client's describeImage method:

```javascript
  try {
    // Describe the image at the given URL
    const descUrl = req.body.imageUrl;
    const caption = (await computerVisionClient.describeImage(descUrl))
      .captions[0];

    res.send(
      `This may be ${caption.text} (confidence ${caption.confidence.toFixed(2)})`
    );
  } catch (error) {
    res.status(500).send(error.message);
  }
```

Here, we take req.body.imageUrl from our frontend, pass that URL to our client, and send the resulting description back to the frontend as the response.
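For reference, describeImage resolves to an object whose captions array holds { text, confidence } pairs, which is what the template string above unpacks. A small sketch with illustrative sample data (not a real API response):

```javascript
// Illustrative shape of a describeImage result (sample data, not a live call)
const result = {
  captions: [{ text: "a cat sitting on a sofa", confidence: 0.92 }],
};

// Same unpacking the controller performs
const caption = result.captions[0];
const message = `This may be ${caption.text} (confidence ${caption.confidence.toFixed(2)})`;

console.log(message); // This may be a cat sitting on a sofa (confidence 0.92)
```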

Frontend overview

Since the frontend is not the point of focus in this tutorial, we'll just take a quick overview of it.
We take an input from the user, and that URL is sent to our backend; I'm using Axios for that purpose.

```javascript
const res = await axios.post("YourURL/describe", { imageUrl });
```

In place of YourURL, paste your server's URL.

You can print the response or log it to the console. The endpoint accepts an image's URL and returns its description: what the image is about.
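Since the whole flow starts from a user-supplied URL, it can also help to sanity-check the input on the frontend before posting it to the backend. A minimal sketch; the isLikelyImageUrl helper is hypothetical and not part of Azura:

```javascript
// Hypothetical helper: rough check that the input parses as an http(s) URL
// before we bother sending it to the backend.
function isLikelyImageUrl(input) {
  try {
    const url = new URL(input); // throws on malformed input
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false;
  }
}

console.log(isLikelyImageUrl("https://example.com/cat.jpg")); // true
console.log(isLikelyImageUrl("not a url")); // false
```

This only catches obviously malformed input; the backend still validates that imageUrl is present before calling the client.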

Thank you for reading.
