Daniel Gomez

Azure Custom Vision: Object recognition in .NET application

Hey there! In this tutorial, we'll learn how to call an Azure Custom Vision object recognition model from a .NET application.

As an example, we will recognize these objects in an image: hats, faces, guitars, and bottles.

Note: The process to create and train the Custom Vision model is covered in a separate tutorial: Object detection with Azure Custom Vision.

Source code of this example: GitHub - Azure Object Recognition.

Custom Vision API

First, we need to install the following NuGet packages in our .NET project:

  • Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training
  • Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction

These packages provide the training and prediction client classes we'll use to authenticate against and query our Custom Vision model.
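
If you're working from the command line, the packages can be added with the .NET CLI (or through the NuGet package manager in Visual Studio):

dotnet add package Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training
dotnet add package Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction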

CustomVision.cs class

With the NuGet packages installed, we can create a new class. First, we define the variables that identify our specific Azure Custom Vision project:

private string Key = "YOUR_CUSTOMVISION_KEY";
private string Endpoint = "YOUR_ENDPOINT";
private string ProjectID = "YOUR_PROJECT_ID";
private string publishedModelName = "YOUR_MODEL_NAME";
private double minProbability = 0.75;

These values can be found in the general settings of your Custom Vision project.

With these attributes we can authenticate the training and prediction clients for our project, and then define a DetectObjects method that receives the path of the image to analyze as a parameter and returns the list of detected objects (the prediction API's JSON response, deserialized by the SDK).

This would be the source code of the entire C# class:

using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction.Models;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;
using ApiKeyServiceClientCredentials = Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction.ApiKeyServiceClientCredentials;

namespace CognitiveServices;

public class CustomVision
{
    private string Key = "YOUR_CUSTOMVISION_KEY";
    private string Endpoint = "YOUR_ENDPOINT";
    private string ProjectID = "YOUR_PROJECT_ID";
    private string publishedModelName = "YOUR_MODEL_NAME";
    private double minProbability = 0.75;

    private CustomVisionTrainingClient trainingApi;
    private CustomVisionPredictionClient predictionApi;
    private Project project;

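    // The constructor authenticates the training and prediction clients and loads the Custom Vision project.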
    public CustomVision()
    {
        trainingApi = AuthenticateTraining(Endpoint, Key);
        predictionApi = AuthenticatePrediction(Endpoint, Key);
        project = trainingApi.GetProject(new Guid(ProjectID));
    }

    private CustomVisionTrainingClient AuthenticateTraining(string endpoint, string trainingKey)
    {
        return new CustomVisionTrainingClient(new Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.ApiKeyServiceClientCredentials(trainingKey))
        {
            Endpoint = endpoint
        };
    }

    private CustomVisionPredictionClient AuthenticatePrediction(string endpoint, string predictionKey)
    {
        return new CustomVisionPredictionClient(new ApiKeyServiceClientCredentials(predictionKey))
        {
            Endpoint = endpoint
        };
    }

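    // Sends a local image to the published model and returns the predictions above the minimum probability threshold.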
    public List<PredictionModel> DetectObjects(string imageFile)
    {
        using (var stream = File.Open(imageFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            var result = predictionApi.DetectImage(project.Id, publishedModelName, stream);
            return result.Predictions.Where(x => x.Probability > minProbability).ToList();
        }
    }
}

Program.cs / Call example

In Program.cs, we can call the DetectObjects method defined above and store the recognized objects for a particular image in a collection of type List<PredictionModel>.

This would be the source code of Program.cs:

using CognitiveServices;

var customVision = new CustomVision();
var imagePath = "image.jpg";

var objects = customVision.DetectObjects(imagePath);

Console.WriteLine("Object - Probability - Position(X,Y)");
foreach (var obj in objects)
{
    Console.WriteLine("{0}: {1} - ({2},{3})",
        obj.TagName,
        obj.Probability,
        obj.BoundingBox.Left,
        obj.BoundingBox.Top);
}

Running the program prints each detected object with its probability and bounding-box position to the console.
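
Note that the BoundingBox values returned by the prediction API (Left, Top, Width, and Height) are normalized, i.e. expressed as fractions between 0 and 1 of the image dimensions. If you want to draw rectangles over the original image, a small helper like the following sketch can convert them to pixel coordinates (BoundingBoxExtensions and ToPixels are hypothetical names, and the image dimensions are assumed to be known):

using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction.Models;

namespace CognitiveServices;

public static class BoundingBoxExtensions
{
    // Converts a normalized Custom Vision bounding box (values from 0 to 1)
    // into pixel coordinates for an image of the given width and height.
    public static (int X, int Y, int Width, int Height) ToPixels(this BoundingBox box, int imageWidth, int imageHeight)
    {
        return (
            (int)(box.Left * imageWidth),
            (int)(box.Top * imageHeight),
            (int)(box.Width * imageWidth),
            (int)(box.Height * imageHeight));
    }
}

For example, obj.BoundingBox.ToPixels(1280, 720) returns the detection rectangle in pixels for a 1280×720 image.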

Thanks for reading!

If you have any questions or ideas in mind, it would be a pleasure to get in touch with you and exchange knowledge together.

See you on Twitter / esDanielGomez.com.
