<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matteo Pagani</title>
    <description>The latest articles on DEV Community by Matteo Pagani (@qmatteoq).</description>
    <link>https://dev.to/qmatteoq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F142125%2F111b35ce-a6aa-4a15-a0a4-c95a212c99fe.jpg</url>
      <title>DEV Community: Matteo Pagani</title>
      <link>https://dev.to/qmatteoq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/qmatteoq"/>
    <language>en</language>
    <item>
      <title>Semantic Kernel - Native plugins</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Tue, 07 Nov 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/qmatteoq/semantic-kernel-native-plugins-1m48</link>
      <guid>https://dev.to/qmatteoq/semantic-kernel-native-plugins-1m48</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yYZsHuxw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-native-plugins/cover.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yYZsHuxw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-native-plugins/cover.png" alt="Featured image of post Semantic Kernel - Native plugins" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.developerscantina.com/p/semantic-kernel-functions/"&gt;In the previous post&lt;/a&gt;, we learned how to create a semantic function plugin for Semantic Kernel. A semantic function is just a prompt, however, by including it in a function inside a plugin, we made it easier to reuse and test it.&lt;/p&gt;

&lt;p&gt;In this post, we’re going to explore another type of plugin: native functions. As the name suggests, these plugins are native to the platform you’re using, so they can be written in C#, Python, or Java. Since they support native code, they can do more than just execute a prompt: we can perform virtually any operation supported by the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a native plugin
&lt;/h2&gt;

&lt;p&gt;For this sample, we’re going to build a plugin that retrieves the United States population for a given year. This might seem like a peculiar scenario, but there’s a reason why we’re using it: a platform called &lt;a href="https://datausa.io/"&gt;DataUSA&lt;/a&gt; provides a set of free APIs that we can use to retrieve various data about the United States, like the population, the number of universities, etc. The APIs are free to use and don’t require any authentication, so they’re perfect for our sample.&lt;/p&gt;

&lt;p&gt;These APIs behave like any other REST API. As such, this is exactly the type of operation that requires a native function: we need to perform a GET request to a given URL, then parse the response and extract the information we need. We can’t perform this operation with just a prompt.&lt;/p&gt;

&lt;p&gt;The first step to create the native function is the same as we have seen &lt;a href="https://www.developerscantina.com/p/semantic-kernel-functions/"&gt;in the previous post&lt;/a&gt;: we create a folder called &lt;strong&gt;Plugins&lt;/strong&gt; and, inside it, a subfolder for our plugin, which we’re going to call &lt;strong&gt;UnitedStatesPlugin&lt;/strong&gt;. Inside this folder, we add a new class, which will host our function, called &lt;strong&gt;UnitedStatesPlugin.cs&lt;/strong&gt;. This is what the project looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4OH2C45u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-native-plugins/plugin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4OH2C45u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-native-plugins/plugin.png" alt="A project with a plugin with a native function" width="253" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside the class, we can define one or more functions, which will be exposed as Semantic Kernel functions. From a code perspective, these functions are just normal C# functions, so we can use any C# feature we want. The only requirement is that we must decorate the function with the &lt;code&gt;[SKFunction]&lt;/code&gt; attribute, which is defined in the &lt;code&gt;Microsoft.SemanticKernel&lt;/code&gt; namespace. This way, Semantic Kernel will know that the plugin exposes a function, whose name will match the name of the method. Let’s take a look at the following sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class UnitedStatesPlugin
{
    [SKFunction, Description("Get the United States population for a specific year")]
    public async Task&amp;lt;string&amp;gt; GetPopulation([Description("The year")]int year)
    {
        string request = "https://datausa.io/api/data?drilldowns=Nation&amp;amp;measures=Population";
        HttpClient client = new HttpClient();
        var result = await client.GetFromJsonAsync&amp;lt;UnitedStatesResult&amp;gt;(request);
        var populationData = result.data.FirstOrDefault(x =&amp;gt; x.Year == year.ToString());
        string response = $"The population number in the United States in {year} was {populationData.Population}";
        return response;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the function is just a normal C# method, which returns a &lt;code&gt;string&lt;/code&gt; (wrapped in a &lt;code&gt;Task&lt;/code&gt;, since the method is asynchronous). The only difference is that it’s decorated with the &lt;code&gt;SKFunction&lt;/code&gt; attribute, which accepts a parameter: the description of the function. We have also added a &lt;code&gt;[Description]&lt;/code&gt; attribute to the &lt;code&gt;year&lt;/code&gt; parameter. These two attributes replace the information that, in a semantic function, we were storing in the &lt;strong&gt;config.json&lt;/strong&gt; file. We’ll come back to the importance of providing this information when we introduce the planner.&lt;/p&gt;

&lt;p&gt;The rest of the code is easy to understand: we use the &lt;code&gt;HttpClient&lt;/code&gt; class to perform a GET request to the DataUSA API. By using the &lt;code&gt;GetFromJsonAsync&amp;lt;T&amp;gt;()&lt;/code&gt; method, we can deserialize the response into a C# class which maps the content of the JSON response (you can see it &lt;a href="https://datausa.io/api/data?drilldowns=Nation&amp;amp;measures=Population"&gt;here&lt;/a&gt;). Finally, we filter the resulting collection to extract only the data for the year we’re interested in and we return the result.&lt;/p&gt;
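&lt;p&gt;The snippet above doesn’t show the &lt;code&gt;UnitedStatesResult&lt;/code&gt; class. As a rough sketch (the class and property names below are assumptions for illustration; with System.Text.Json they must match the field names of the JSON returned by the API, like &lt;code&gt;data&lt;/code&gt;, &lt;code&gt;Year&lt;/code&gt; and &lt;code&gt;Population&lt;/code&gt;), it could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical mapping classes for the DataUSA response.
// The JSON looks like: { "data": [ { "Year": "2015", "Population": 316515021, ... } ], ... }
public class UnitedStatesResult
{
    public List&amp;lt;PopulationData&amp;gt; data { get; set; }
}

public class PopulationData
{
    public string Year { get; set; }
    public int Population { get; set; }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;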

&lt;h2&gt;
  
  
  Testing the plugin
&lt;/h2&gt;

&lt;p&gt;Now that we have created the plugin, we can test it. To do so, we need to add it to the Semantic Kernel configuration. First, we create a new instance of the kernel as usual:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string apiKey = "my-api-key";
string deploymentName = "deployment-name";
string endpoint = "endpoint-url";

var kernelBuilder = new KernelBuilder();
kernelBuilder.WithAzureChatCompletionService(deploymentName, endpoint, apiKey);

var kernel = kernelBuilder.Build();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we can add the plugin to the kernel, this time using the &lt;code&gt;ImportFunctions()&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kernel.ImportFunctions(new UnitedStatesPlugin(), "UnitedStatesPlugin");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As parameters, we supply first a new instance of the class we have previously created, then we provide a name for the plugin.&lt;/p&gt;

&lt;p&gt;Finally, we can retrieve the function defined in the plugin in the same way we did for the semantic function plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var function = kernel.Functions.GetFunction("UnitedStatesPlugin", "GetPopulation");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first parameter is the name of the plugin, the second one is the name of the method declared in the &lt;code&gt;UnitedStatesPlugin&lt;/code&gt; class.&lt;/p&gt;

&lt;p&gt;Finally, we execute the plugin in the usual way: we create a &lt;code&gt;ContextVariables&lt;/code&gt; collection with the input, then we call &lt;code&gt;RunAsync()&lt;/code&gt; on the kernel, passing the collection and the function as parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ContextVariables variables = new ContextVariables
{
    { "input", "2015" }
};

var result = await kernel.RunAsync(
    variables,
    function
);

Console.WriteLine(result.GetValue&amp;lt;string&amp;gt;());
Console.ReadLine();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, we are supplying 2015 as input, so we’re expecting to get back the population of the United States in 2015. If we did everything correctly, this is exactly the information we’re going to get back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The population number in the United States in 2015 was 316515021

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;In this post, we have seen another type of plugin: native ones. Compared to semantic functions, they are slightly more complex to define, since we need to create a real class with real code. On the other hand, they are more powerful, since they can perform any operation we want, not just execute a prompt.&lt;/p&gt;

&lt;p&gt;In the next post, we’re going to explore the last type of plugins: OpenAI plugins. Stay tuned!&lt;/p&gt;

&lt;p&gt;In the meantime, you can find the full source code of the sample on &lt;a href="https://github.com/qmatteoq/SemanticKernel-Demos"&gt;GitHub&lt;/a&gt;, in the &lt;strong&gt;SemanticKernel.NativeFunction&lt;/strong&gt; project.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Semantic Kernel - The basics</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Fri, 03 Nov 2023 08:24:06 +0000</pubDate>
      <link>https://dev.to/qmatteoq/semantic-kernel-the-basics-7eh</link>
      <guid>https://dev.to/qmatteoq/semantic-kernel-the-basics-7eh</guid>
      <description>&lt;p&gt;The advent of Generative AI is changing the tech industry and more and more applications are adding AI powered features. If you just think to what Microsoft is doing these days in this space, they are integrating Copilot experiences powered by AI and LLMs almost in every product: Microsoft 365, Windows, Defender, Power Platform. From a technical perspective, integrating generative AI services from the most popular AI providers (like OpenAI) is fairly simple. They all offer a set of REST APIs to cover the most common scenarios, like generating a response starting from a prompt, supporting a chat conversation or creating an image out of a prompt.&lt;/p&gt;

&lt;p&gt;However, when you start to build more complex enterprise applications, like a customized Copilot experience, you hit multiple challenges in using these APIs directly, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The need to change the communication layer in case you need to use different services and models, which are based on different sets of APIs.&lt;/li&gt;
&lt;li&gt;Managing prompts and requests in a scalable way.&lt;/li&gt;
&lt;li&gt;Managing complex operations that might require multiple prompts to be completed.&lt;/li&gt;
&lt;li&gt;Integrating vector search to support &lt;a href="https://www.promptingguide.ai/techniques/rag"&gt;Retrieval Augmented Generation (RAG)&lt;/a&gt;, so that you can use the LLM to perform operations on private data, like the organizational data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, it’s possible to build all these features just by calling the REST APIs, but it requires a lot of work. To support developers in this journey, the tech market saw the rise of AI orchestration libraries and SDKs, which abstract many of the AI concepts into a set of easy-to-use APIs. You might already have heard of &lt;a href="https://www.langchain.com/"&gt;LangChain&lt;/a&gt;, which is one of the most popular libraries available for Python developers.&lt;/p&gt;

&lt;p&gt;Microsoft is investing a lot in the AI ecosystem and, as such, decided to join the AI orchestration space, with the dual goal of supporting developers who are using Microsoft tools and platforms (like .NET and Azure OpenAI) and having a platform that the company can use to infuse AI into its own app ecosystem in an easier way. Please welcome &lt;a href="https://learn.microsoft.com/en-us/semantic-kernel/overview/"&gt;Semantic Kernel&lt;/a&gt;, an &lt;a href="https://github.com/microsoft/semantic-kernel"&gt;open-source&lt;/a&gt; AI orchestration library which can be used by C#, Python and Java developers.&lt;/p&gt;

&lt;p&gt;Semantic Kernel supports the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built-in support for multiple AI providers, like OpenAI, Azure OpenAI and Hugging Face.&lt;/li&gt;
&lt;li&gt;Support for plugins (both built-in and custom ones)&lt;/li&gt;
&lt;li&gt;Built-in support for many vector databases to store history and context&lt;/li&gt;
&lt;li&gt;Automatic orchestration with planner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post is the first one in a series, in which I’m going to cover all the most interesting features of Semantic Kernel, starting from the simplest to the most advanced ones. You will find all the samples that will be part of this series &lt;a href="https://github.com/qmatteoq/SemanticKernel-Demos"&gt;in the following GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s start the journey!&lt;/p&gt;

&lt;h2&gt;
  
  
  Building your first Semantic Kernel app
&lt;/h2&gt;

&lt;p&gt;For this series of posts, I will use a .NET C# console application; however, the same tasks can be performed in Python and Java. As such, I’m going to open Visual Studio 2022, choose &lt;strong&gt;Create new project&lt;/strong&gt; and select &lt;strong&gt;Console App&lt;/strong&gt;. After creating the project, your first step should be to install Semantic Kernel: right-click on the project, choose &lt;strong&gt;Manage NuGet packages&lt;/strong&gt; and look for a package called &lt;a href="https://www.nuget.org/packages/Microsoft.SemanticKernel/"&gt;Microsoft.SemanticKernel&lt;/a&gt;. At the time of writing, the package is still published as prerelease, so you will have to make sure the option &lt;strong&gt;Include prerelease&lt;/strong&gt; is turned on to find it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i-IBVM02--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.qmatteoq.com/p/semantic-kernel-basics/nuget.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i-IBVM02--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.qmatteoq.com/p/semantic-kernel-basics/nuget.png" alt="nuget" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to set up the kernel object, which is the one that we’re going to use across our application to orchestrate AI operations. To create a kernel, we use the &lt;code&gt;KernelBuilder&lt;/code&gt; class, which offers a series of extension methods to initialize the object based on the AI service and the model we’re going to use. The C# version currently supports OpenAI and Azure OpenAI, with the following extension methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;WithChatCompletionService()&lt;/code&gt;, to use chat models like &lt;code&gt;gpt-3.5-turbo&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WithImageGenerationService()&lt;/code&gt;, to use AI models to generate images, like DALL-E, rather than text.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WithTextCompletionService()&lt;/code&gt;, to use text completion models like &lt;code&gt;davinci&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WithTextEmbeddingGenerationService()&lt;/code&gt;, to use models like &lt;code&gt;text-embedding-ada-002&lt;/code&gt; to convert text into embeddings, to be used with vector databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each method has two different versions, based on the service. For example, if you want to use a chat model, you can use &lt;code&gt;WithAzureChatCompletionService()&lt;/code&gt; with Azure OpenAI or &lt;code&gt;WithOpenAIChatCompletionService()&lt;/code&gt; with OpenAI. Depending on the service you want to use, you will have to provide a different set of credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using OpenAI
&lt;/h3&gt;

&lt;p&gt;To use OpenAI, you must provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API key&lt;/li&gt;
&lt;li&gt;The model name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both pieces of information can be retrieved from your OpenAI account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--868ZR5iZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.qmatteoq.com/p/semantic-kernel-basics/openai-credentials.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--868ZR5iZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.qmatteoq.com/p/semantic-kernel-basics/openai-credentials.png" alt="openai credentials" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have them, you can pass them as parameters to the initialization method. For example, the following code shows how to initialize the kernel to use the chat completion service from OpenAI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string apiKey = "my-api-key";
string model = "gpt3.5-turbo";

var kernelBuilder = new KernelBuilder().
    WithOpenAIChatCompletionService(model, apiKey);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Please note that this is just a sample to keep the code simple. In a real application, the API key would be retrieved from a secure service, like Azure Key Vault or a user secret.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Using Azure OpenAI
&lt;/h3&gt;

&lt;p&gt;To use Azure OpenAI, you must provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The URL endpoint&lt;/li&gt;
&lt;li&gt;The deployment name&lt;/li&gt;
&lt;li&gt;The API key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This information can be retrieved from the Azure portal, when you access the Azure OpenAI service you have deployed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TuZMcznG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.qmatteoq.com/p/semantic-kernel-basics/azureopenai-credentials.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TuZMcznG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.qmatteoq.com/p/semantic-kernel-basics/azureopenai-credentials.png" alt="azureopenai credentials" width="739" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have them, you can use them with one of the various &lt;code&gt;WithAzure...Service()&lt;/code&gt; extension methods provided by the &lt;code&gt;KernelBuilder&lt;/code&gt; class. For example, the following code shows how to initialize the kernel to use the chat completion service from Azure OpenAI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string apiKey = "your-api-key";
string deploymentName = "your-deployment-name";
string endpoint = "your-endpoint";

var kernelBuilder = new KernelBuilder()
    .WithAzureChatCompletionService(deploymentName, endpoint, apiKey);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Regardless of the service you’re using, you can get a kernel object by calling the &lt;code&gt;Build()&lt;/code&gt; method of the &lt;code&gt;KernelBuilder&lt;/code&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var kernel = kernelBuilder.Build();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we’re ready to start orchestrating our AI workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple scenario: sending a prompt
&lt;/h2&gt;

&lt;p&gt;We’ll start from the simplest scenario you can achieve with AI services: submitting a prompt and getting a response back. For the moment, we’re going to hard code the prompt inside the code, as in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string prompt = """
Rewrite the text between triple backticks into a business mail. Use a professional tone, be clar and concise.
Sign the mail as AI Assistant.

Text: {{$input}}
""";

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The purpose of the prompt is very simple: turning a sentence into a business mail, written in a professional tone. We can already see one interesting feature provided by Semantic Kernel: &lt;strong&gt;prompt templates&lt;/strong&gt;. Instead of hard coding the sentence we want to turn into an e-mail, we define a placeholder using the keyword &lt;code&gt;$input&lt;/code&gt;. This way, we can easily reuse the same prompt; we just need to change the provided input.&lt;/p&gt;

&lt;p&gt;Now we can use the prompt to generate a semantic function, which is what Semantic Kernel calls functions that are represented by a prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mailFunction = kernel.CreateSemanticFunction(prompt);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want, we have the option to customize the OpenAI parameters by supplying an &lt;code&gt;OpenAIRequestSettings&lt;/code&gt; object as second parameter. For example, in the following sample we’re creating the same function, but changing the temperature (which controls the randomness of the response: the higher the value, the more random the result will be) and the maximum number of tokens to generate for the response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mailFunction = kernel.CreateSemanticFunction(prompt, new OpenAIRequestSettings
{
    Temperature = 0.5,
    MaxTokens = 1000
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to supply the input variables for the prompt. In this case we have just one, called &lt;code&gt;input&lt;/code&gt;. We supply them through the &lt;code&gt;ContextVariables&lt;/code&gt; dictionary, which is a key-value pair collection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ContextVariables variables = new ContextVariables
{
    { "input", "Tell David that I'm going to finish the business plan by the end of the week." }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each entry, the key is the placeholder we have included in the prompt, while the value is the text we want to provide.&lt;/p&gt;
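&lt;p&gt;A prompt template isn’t limited to a single placeholder. As a hypothetical sketch (the prompt and the &lt;code&gt;language&lt;/code&gt; variable below are made up for illustration), we could define multiple placeholders and supply one entry per placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical prompt with two placeholders
string prompt = """
Translate the text between triple backticks into {{$language}}.

Text: ```{{$input}}```
""";

var translateFunction = kernel.CreateSemanticFunction(prompt);

// One entry per placeholder, using the placeholder name as key
ContextVariables variables = new ContextVariables
{
    { "input", "Tell David that I'm going to finish the business plan by the end of the week." },
    { "language", "Italian" }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;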

&lt;p&gt;Finally, we execute the function using the &lt;code&gt;RunAsync()&lt;/code&gt; method offered by the kernel, passing as input the variables and the semantic function we want to execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var output = await kernel.RunAsync(
    variables,
    mailFunction);

Console.WriteLine(output.GetValue&amp;lt;string&amp;gt;());

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the result, we can use the &lt;code&gt;GetValue&amp;lt;T&amp;gt;()&lt;/code&gt; method to extract the response, where &lt;code&gt;T&lt;/code&gt; is the data type we’re expecting back. In this case, we’re using a chat model, so we know that the output will be a string.&lt;/p&gt;

&lt;p&gt;The response will be similar to the following one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dear David,

I am writing to inform you that the business plan will be completed by the end of this week.

Thank you for your patience and understanding. If you have any further questions or concerns, please do not hesitate to contact me.

Best regards,

AI Assistant

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;In this first post, we have learned the basics of how to add Semantic Kernel to our applications and how to perform a basic operation: submitting a prompt to the LLM and getting a response back. We’re still far from having explored the most powerful features of Semantic Kernel, but we have already seen a few advantages compared to directly using the REST APIs provided by OpenAI or Azure OpenAI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We don’t need to learn different ways to interact with the API, based on the service we need to use.&lt;/li&gt;
&lt;li&gt;If we need to swap the service (or use different services based on the task), we don’t need to rewrite the entire communication layer. We just need to set up the kernel with a different extension method, and we’ll continue to use it in the same way.&lt;/li&gt;
&lt;li&gt;With prompt templates, we can easily reuse the same prompt with multiple inputs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next post, we’re going to introduce one of the most interesting features of Semantic Kernel: plugins.&lt;/p&gt;

&lt;p&gt;In the meantime, remember that you can find this sample in the &lt;a href="https://github.com/qmatteoq/SemanticKernel-Demos"&gt;dedicated GitHub repository&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Semantic Kernel - Semantic functions plugins</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Fri, 03 Nov 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/qmatteoq/semantic-kernel-semantic-functions-plugins-4ec0</link>
      <guid>https://dev.to/qmatteoq/semantic-kernel-semantic-functions-plugins-4ec0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_F-hbLuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/cover.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_F-hbLuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/cover.png" alt="Featured image of post Semantic Kernel - Semantic functions plugins" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s continue our journey to discover how we can infuse AI in our applications with Semantic Kernel, an open-source SDK created by Microsoft for C#, Python, and Java developers. &lt;a href="https://www.developerscantina.com/p/semantic-kernel-basics/"&gt;In the previous post&lt;/a&gt;, we saw some of the first useful features offered by Semantic Kernel: the ability to easily set up multiple AI providers and to use prompt templates. In this post, we will explore the first type of plugins: semantic functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are semantic functions?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.developerscantina.com/p/semantic-kernel-basics/"&gt;In the previous post&lt;/a&gt;, we have already seen what is a semantic function in the context of Semantic Kernel: it’s basically just a prompt, that is sent to the LLM for processing. The sample we have seen the last time, however, wasn’t very scalable in the context of building an enterprise application. The prompt was hard coded and, as such, it’s hard to share it with other components of our application.&lt;/p&gt;

&lt;p&gt;In this post, we’re going to improve the process by moving the semantic function into a plugin, so that it can easily be reused across multiple apps and scenarios. Additionally, this approach enables us to fine-tune and change the prompt without having to recompile the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a semantic function plugin
&lt;/h2&gt;

&lt;p&gt;Let’s start from the same project we used last time and create a new folder, called &lt;strong&gt;Plugins&lt;/strong&gt;. Inside this folder, we’re going to create a new sub-folder dedicated to our plugin. We’re going to call it &lt;strong&gt;MailPlugin&lt;/strong&gt;, since it’s going to include our prompt to take a sentence and turn it into a business mail. A plugin can offer multiple functions, each of them represented by a different prompt. Let’s create a function to host our business mail prompt, so let’s add another subfolder called &lt;strong&gt;WriteBusinessMail&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Inside the &lt;strong&gt;WriteBusinessMail&lt;/strong&gt; folder, we can now add the files we need to define our function. The first one is the file which contains the prompt itself, a simple text file called &lt;strong&gt;skprompt.txt&lt;/strong&gt;. Inside it, we just need to copy and paste the same prompt we included in code &lt;a href="https://www.developerscantina.com/p/semantic-kernel-basics/"&gt;in the previous post&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Rewrite the text between triple backticks into a business mail. Use a professional tone, be clar and concise.
Sign the mail as AI Assistant.

Text: {{$input}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second file you must add is called &lt;strong&gt;config.json&lt;/strong&gt; and it’s used to set up the same LLM parameters that, &lt;a href="https://www.developerscantina.com/p/semantic-kernel-basics/"&gt;in the previous post&lt;/a&gt;, we were setting through the &lt;code&gt;OpenAIRequestSettings&lt;/code&gt; class. The content of the file is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "schema": 1,
  "type": "completion",
  "description": "Write a business mail",
  "completion": {
    "max_tokens": 500,
    "temperature": 0.0,
    "top_p": 0.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0
  },
  "input": {
    "parameters": [
      {
        "name": "input",
        "description": "The text to convert into a business mail.",
        "defaultValue": ""
      }
    ]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how we aren’t using this file only to set up the LLM parameters, but also to describe what the prompt does and how it works. With the &lt;code&gt;description&lt;/code&gt; property, we’re specifying that this prompt is used to write a business mail. In the &lt;code&gt;input&lt;/code&gt; section, we’re specifying that the prompt accepts one parameter called &lt;code&gt;input&lt;/code&gt;, which contains the text to convert. The importance of these properties will become clear when we talk about the planner and its ability to automatically orchestrate AI operations.&lt;/p&gt;

&lt;p&gt;This is what the structure of a plugin that contains a semantic function looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZRXWnOMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/plugin-structure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZRXWnOMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/plugin-structure.png" alt="The Plugin structure in a Semantic Kernel project" width="260" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the semantic function plugin
&lt;/h2&gt;

&lt;p&gt;Now that we’ve created a plugin with a semantic function, we can start using it. First, we have to bootstrap the kernel like we did last time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string apiKey = "your-api-key";
string deploymentName = "your-deployment-name";
string endpoint = "your-endpoint";

var kernelBuilder = new KernelBuilder()
    .WithAzureChatCompletionService(deploymentName, endpoint, apiKey);

// Build the kernel: the instance is needed below to import plugins and run functions
var kernel = kernelBuilder.Build();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we need to import the plugin into the kernel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var pluginsDirectory = Path.Combine(Directory.GetCurrentDirectory(), "Plugins");

kernel.ImportSemanticFunctionsFromDirectory(pluginsDirectory, "MailPlugin");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we retrieve the full path of the Plugins folder within the project, using the &lt;code&gt;Path.Combine()&lt;/code&gt; and the &lt;code&gt;Directory.GetCurrentDirectory()&lt;/code&gt; methods. Then, we import all the functions included in the plugin (in this case, it’s just one) in the kernel by calling the &lt;code&gt;ImportSemanticFunctionsFromDirectory()&lt;/code&gt; method, passing as parameters the path of the plugins folder and the name of the plugin.&lt;/p&gt;

&lt;p&gt;Now we can reference any function included in the plugin using the &lt;code&gt;Functions&lt;/code&gt; collection. The following code shows how we can get a reference to the &lt;code&gt;WriteBusinessMail&lt;/code&gt; function included in the plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var function = kernel.Functions.GetFunction("MailPlugin", "WriteBusinessMail");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can use the function in the same way we have previously used the hardcoded prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ContextVariables variables = new ContextVariables
{
    { "input", "Tell David that I'm going to finish the business plan by the end of the week." }
};

var result = await kernel.RunAsync(
    variables,
    function
);

Console.WriteLine(result.GetValue&amp;lt;string&amp;gt;());
Console.ReadLine();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use the &lt;code&gt;ContextVariables&lt;/code&gt; collection to define the value of the input variable and we pass it to the &lt;code&gt;RunAsync()&lt;/code&gt; method, together with the function we want to execute. The result is the same as the one we got the last time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dear David,

I am writing to inform you that the business plan will be completed by the end of this week.

Thank you for your patience and understanding. If you have any further questions or concerns, please do not hesitate to contact me.

Best regards,

AI Assistant

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the semantic functions in a prompt
&lt;/h2&gt;

&lt;p&gt;Testing the effectiveness of a prompt in an application isn’t always easy. Our console app is very simple, so it’s really easy to tweak the prompt and launch the application again. Enterprise applications aren’t that simple and they might be composed of multiple layers, so recompiling and relaunching everything might be a very time-consuming task.&lt;/p&gt;

&lt;p&gt;To help you test your prompts, Microsoft has released a very useful extension for Visual Studio Code. You can get it from &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-semantic-kernel.semantic-kernel"&gt;the marketplace&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once it’s installed and activated, you will find a new icon in the toolbar:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--plw3luFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-icon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--plw3luFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-icon.png" alt="The Semantic Kernel extension in Visual Studio Code" width="59" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you click on it, the first step is to expand the AI Endpoints section and click on the &lt;strong&gt;Switch endpoint provider&lt;/strong&gt; button to select the provider you want to use among OpenAI, Azure OpenAI and Hugging Face. Based on the provider you select, you will be asked to provide the required credentials, similar to what you did when you set the kernel up in code.&lt;/p&gt;

&lt;p&gt;Once you have configured the AI service you want to use, click on &lt;strong&gt;File&lt;/strong&gt; in Visual Studio Code and choose &lt;strong&gt;Open folder&lt;/strong&gt;. Now open the folder which contains your Semantic Kernel based project. Once you do that, the &lt;strong&gt;Functions&lt;/strong&gt; panel of the Semantic Kernel extension will display all the functions that have been found in the project. You should see your &lt;strong&gt;MailPlugin&lt;/strong&gt;, with the &lt;strong&gt;WriteBusinessMail&lt;/strong&gt; function nested inside:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Vwd_O5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-functions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Vwd_O5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-functions.png" alt="The list of available functions in Visual Studio Code" width="241" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you hover the mouse over the name of the function, you will see a &lt;strong&gt;Run&lt;/strong&gt; button appear. Click on it and Visual Studio Code will ask you to provide a value for the variables included in the prompt, in this case the &lt;code&gt;input&lt;/code&gt; variable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8yHjQMMo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-input.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8yHjQMMo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-input.png" alt="The input prompt when you execute a function" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the operation is completed, the Output window in Visual Studio Code will show the result of the operation: the duration, the number of tokens used and the response from the LLM. Even better, if you open the Semantic Kernel panel again, you will see the result of the operation in the &lt;strong&gt;Results&lt;/strong&gt; section:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zpZ6Z5_J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-results.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zpZ6Z5_J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-results.png" alt="The results of the function" width="242" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you click on the results, you will see the full details of the response in a table format:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nrKBKPVN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-skresults.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nrKBKPVN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.developerscantina.com/p/semantic-kernel-functions/vscode-skresults.png" alt="The detailed view of the results" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;In this post, we learned how to build our first plugin that includes a semantic function. This is a major improvement compared to having the prompt hardcoded, since it allows us to reuse it across different apps. Additionally, if we want to make some changes to the prompt, we can do it without having to recompile the application.&lt;/p&gt;

&lt;p&gt;As usual, you will find the sample used in this post &lt;a href="https://github.com/qmatteoq/SemanticKernel-Demos"&gt;in the GitHub repository&lt;/a&gt;. The sample used in this post is the project called &lt;strong&gt;SemanticKernel.SemanticFunction&lt;/strong&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Build a web app to manage a custom provider in Viva Learning with Blazor</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Tue, 29 Nov 2022 15:57:21 +0000</pubDate>
      <link>https://dev.to/qmatteoq/build-a-web-app-to-manage-a-custom-provider-in-viva-learning-with-blazor-ha5</link>
      <guid>https://dev.to/qmatteoq/build-a-web-app-to-manage-a-custom-provider-in-viva-learning-with-blazor-ha5</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/qmatteoq/adding-a-custom-content-provider-to-viva-learning-with-microsoft-graph-3a50-temp-slug-5748805"&gt;In the previous post&lt;/a&gt; we have learned the basic concepts behind the Viva Learning integration offered by the Microsoft Graph. However, the scenario we implemented wasn't very realistic. We have learned which APIs to use and how to use them but, in a real scenario, you won't use Postman to manage your custom catalog of learning content, but you would rely on a more robust solution.&lt;/p&gt;

&lt;p&gt;In this post, we're going to reuse the concepts we have learned to build a better experience: a web application, that we can use to manage our custom learning provider and its contents. We'll focus on how to implement in a real application some of the peculiar features we have learned about these APIs, like the fact that a different set of permissions is required based on the type of content you're working with.&lt;/p&gt;

&lt;p&gt;Since I'm a .NET developer, I'm going to build the web application using &lt;a href="https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor" rel="noopener noreferrer"&gt;Blazor&lt;/a&gt;, the latest addition to the .NET family which enables us to build client-side web apps using C# instead of JavaScript as a programming language. In this post, I'm going to assume you already have a basic knowledge of Blazor and basic ASP.NET Core concepts, like Razor components and dependency injection.&lt;/p&gt;

&lt;p&gt;Let's start from the basics: adding authentication and authorization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the Blazor app
&lt;/h3&gt;

&lt;p&gt;Blazor comes in two different flavors: &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/blazor/?WT.mc_id=dotnet-35129-website&amp;amp;view=aspnetcore-7.0#blazor-webassembly" rel="noopener noreferrer"&gt;fully client side&lt;/a&gt;, using WebAssembly, and &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/blazor/?WT.mc_id=dotnet-35129-website&amp;amp;view=aspnetcore-7.0#blazor-server" rel="noopener noreferrer"&gt;server side&lt;/a&gt;. The final experience is exactly the same and both approaches enable us to build client-side applications using C#. In the first case, the application is truly client-side only, since it runs directly in the browser. In the second case, instead, the C# code runs on the server, and the UI updates are exchanged with the browser through a SignalR connection. For our project, we're going to use the server-side approach, since it enables us to fully use the Microsoft Identity platform capabilities and features.&lt;/p&gt;

&lt;p&gt;Open Visual Studio, create a new project and look for the template called &lt;strong&gt;Blazor Server App&lt;/strong&gt;. When you reach the &lt;strong&gt;Additional information&lt;/strong&gt; step, feel free to pick .NET 6 or .NET 7 based on your requirements (.NET 7 is newer, but .NET 6 is marked as LTS, so it will be supported until 2024). What's important is that, in the &lt;strong&gt;Authentication Type&lt;/strong&gt; dropdown, you pick &lt;strong&gt;Microsoft Identity Platform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422030i3BDD9772CE55A75A%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422030i3BDD9772CE55A75A%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="new-blazor-project.png" alt="new-blazor-project.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you hit the &lt;strong&gt;Create&lt;/strong&gt; button, Visual Studio will scaffold the project. However, before starting to work on the code, you'll be asked to set up the connection to the Microsoft Identity platform. As the first step, the wizard will install the required msidentity tool, a .NET extension that simplifies the integration of Azure Active Directory into your application. Once the tool is installed, you will be asked to pick the account where your app registration on Azure Active Directory lives (or where you want to create a new one). If you have followed the previous post, we already have an app registration: it's the one we used with Postman. It's essential that you use it also for the Blazor app, otherwise things won't work properly. If you remember, one of the requirements to work with the Viva Learning APIs is that all the operations must be performed using the same application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422031iE043F968B84C000C%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422031iE043F968B84C000C%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="app-registration.png" alt="app-registration.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you hit &lt;strong&gt;Next&lt;/strong&gt;, you'll be able to configure Microsoft Identity not just to authenticate the user, but also to connect to additional APIs. In our case, we need to support the Microsoft Graph, so enable the &lt;strong&gt;Add Microsoft Graph&lt;/strong&gt; checkbox:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422033iF2BFFEDEF2AF22D7%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422033iF2BFFEDEF2AF22D7%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="microsoft-graph.png" alt="microsoft-graph.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next and last step is to import the client secret which is required to connect to the Azure AD app. The tool will create a key called &lt;code&gt;ClientSecret&lt;/code&gt; in the &lt;code&gt;AzureAd&lt;/code&gt; section of the configuration file and store in it the secret it automatically retrieves from the portal. Additionally, it won't store the information in the plain configuration file, but in the local user secrets file, called &lt;code&gt;secrets.json&lt;/code&gt;. &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/security/app-secrets?view=aspnetcore-7.0&amp;amp;tabs=windows" rel="noopener noreferrer"&gt;This feature&lt;/a&gt; helps you protect your sensitive information on your development machine, since the &lt;code&gt;secrets.json&lt;/code&gt; file, even if it's read at runtime and merged into the standard configuration like a regular &lt;code&gt;appsettings.json&lt;/code&gt; file, lives outside the project's folder. This means that you don't risk storing the client secret in your repository when you commit the solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422034i69FAF8F69250215E%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422034i69FAF8F69250215E%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="client-secret.png" alt="client-secret.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to the Visual Studio integration with the Microsoft Identity platform, at the end of the wizard you will have all the required building blocks in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inside the &lt;code&gt;appsettings.json&lt;/code&gt; file, you will find a section called &lt;code&gt;AzureAd&lt;/code&gt; with all the information needed to connect to your Azure AD app, like client id, tenant id, etc.&lt;/li&gt;
&lt;li&gt;Inside the &lt;code&gt;Program.cs&lt;/code&gt; file, you will see the code required to register the different services needed to authenticate your application and authorize the different operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the full startup code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;
using VivaLearningApp.Data;
using VivaLearningApp.Services;

var builder = Microsoft.AspNetCore.Builder.WebApplication.CreateBuilder(args);

var initialScopes = builder.Configuration["DownstreamApi:Scopes"]?.Split(' ') ?? builder.Configuration["MicrosoftGraph:Scopes"]?.Split(' ');

// Add services to the container.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph(builder.Configuration.GetSection("MicrosoftGraph"))
    .AddInMemoryTokenCaches();

builder.Services.AddControllersWithViews().
    AddMicrosoftIdentityUI();

builder.Services.AddAuthorization(options =&amp;gt;
{
    // By default, all incoming requests will be authorized according to the default policy
    options.FallbackPolicy = options.DefaultPolicy;
});

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor()
    .AddMicrosoftIdentityConsentHandler();
builder.Services.AddSingleton&amp;lt;WeatherForecastService&amp;gt;();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();

app.UseStaticFiles();

app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.MapControllers();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");

app.Run();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This integration is made easier by the &lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/microsoft-identity-web" rel="noopener noreferrer"&gt;Microsoft.Identity.Web&lt;/a&gt; library which, as the name says, simplifies the integration of the Microsoft Identity platform in a web application. In the first lines, in fact, you can see how the integration is simply enabled by the &lt;code&gt;AddMicrosoftIdentityWebApp()&lt;/code&gt; method which receives, as input, the section of our configuration called &lt;code&gt;AzureAd&lt;/code&gt;, which contains all the information about our Azure AD app. A few lines below, you can also notice how the authorization is configured to use the default policy whenever the user isn't authenticated. This means that your application can't be used in anonymous mode: as soon as users launch the web application, they will be asked to log in to move forward.&lt;/p&gt;

&lt;p&gt;You can easily test this by launching the debugger with F5. The Blazor web application will be launched locally and, as a first thing, you will be asked to log in with a work account from your Microsoft 365 tenant. Only after logging in will you land on the home page and be able to see your account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422036i9F0F85B85AD19454%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422036i9F0F85B85AD19454%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="user-logged-in.png" alt="user-logged-in.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we start building our page to work with the custom content provider, however, we must make a change to the project's configuration. Since, during the wizard, we enabled the integration with the Microsoft Graph, Visual Studio automatically added the &lt;code&gt;Microsoft.Identity.Web.MicrosoftGraph&lt;/code&gt; package for us, which simplifies getting an authenticated Microsoft Graph client. However, in the previous post we learned that the Viva Learning APIs are part of the beta endpoint, so we must make a couple of changes to our .csproj file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Replace the &lt;code&gt;Microsoft.Identity.Web.MicrosoftGraph&lt;/code&gt; library with the equivalent beta version, which is &lt;code&gt;Microsoft.Identity.Web.MicrosoftGraphBeta&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the &lt;code&gt;Microsoft.Graph.Beta&lt;/code&gt; package to the project, which is the beta version of the &lt;a href="https://github.com/microsoftgraph/msgraph-sdk-dotnet" rel="noopener noreferrer"&gt;Microsoft Graph .NET SDK&lt;/a&gt;:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
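As a sketch, the relevant package references in the .csproj would end up looking like the following (the version numbers are illustrative; pick the latest ones available on NuGet):

```xml
<ItemGroup>
  <!-- Beta flavor of the Microsoft Identity Web helpers for the Microsoft Graph -->
  <PackageReference Include="Microsoft.Identity.Web.MicrosoftGraphBeta" Version="1.25.10" />
  <!-- Beta version of the Microsoft Graph .NET SDK -->
  <PackageReference Include="Microsoft.Graph.Beta" Version="4.54.0-preview" />
</ItemGroup>
```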

&lt;p&gt;Now we have all the basic building blocks to start the integration of the Microsoft Graph APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Listing the custom learning providers
&lt;/h3&gt;

&lt;p&gt;As the first step, we're going to build a Razor component that displays the list of available learning providers. However, the list will contain only one item: as we have learned in the previous post, currently the Viva Learning APIs support having only a single custom provider.&lt;/p&gt;

&lt;p&gt;Before starting to build the page, however, let's create a class we're going to use to centralize all our operations with the Microsoft Graph. Right-click on your project, choose &lt;strong&gt;Add → Class&lt;/strong&gt; and give it a meaningful name, like &lt;code&gt;CustomGraphService&lt;/code&gt;. As the first step, we're going to add the method we need to get the list of custom providers. Thanks to the Microsoft Graph integration provided by the Microsoft Identity Web library, we don't need to create our own instance of the Graph client: it's automatically registered inside the DI container of our Blazor web app. All we need to do is add two dependencies to the public constructor of our class, as in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class CustomGraphService
{
    private readonly GraphServiceClient delegatedClient;
    private readonly MicrosoftIdentityConsentAndConditionalAccessHandler consentHandler;

    public CustomGraphService(GraphServiceClient delegatedClient, MicrosoftIdentityConsentAndConditionalAccessHandler consentHandler)
    {
        this.delegatedClient = delegatedClient;
        this.consentHandler = consentHandler;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;GraphServiceClient&lt;/code&gt; is the Graph client provided by the Microsoft Graph .NET SDK. We can use it to perform any operation against the Microsoft Graph in an object-oriented way. Instead of manually building the HTTP requests and manually parsing JSON responses, we can directly use classes and objects which represent the various endpoints and entities in the Microsoft Graph. Thanks to the Microsoft Identity Web library, the &lt;code&gt;GraphServiceClient&lt;/code&gt; object we get is already authenticated: it's already using the proper access token retrieved when we logged in with our work account. &lt;code&gt;MicrosoftIdentityConsentAndConditionalAccessHandler&lt;/code&gt;, instead, is a helper class provided by the Microsoft Identity Web library that we can use to wrap our Microsoft Graph operations. It makes sure that the proper consent is requested in case of issues with the access token.&lt;/p&gt;

&lt;p&gt;Now that we have these blocks, we can create a method which returns the list of custom providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async Task&amp;lt;IList&amp;lt;LearningProvider&amp;gt;&amp;gt; GetLearningProvidersAsync()
{
    try
    {
        var result = await delegatedClient.EmployeeExperience.LearningProviders.Request().GetAsync();
        return result.CurrentPage;
    }
    catch (Exception ex)
    {
        consentHandler.HandleException(ex);
        return null;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the Microsoft Graph .NET SDK makes it easier to consume the Graph from a .NET application. The &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders&lt;/code&gt; endpoint is mapped to the &lt;code&gt;EmployeeExperience.LearningProviders.Request()&lt;/code&gt; object. By calling the &lt;code&gt;GetAsync()&lt;/code&gt; method, we are performing an HTTP GET request. The response is a collection of &lt;code&gt;LearningProvider&lt;/code&gt; objects, which have the same properties we have seen in the JSON response in the previous post, like id, name and logos. In this method you can also see the &lt;code&gt;MicrosoftIdentityConsentAndConditionalAccessHandler&lt;/code&gt; helper in action: we wrap the Graph operation in a &lt;code&gt;try / catch&lt;/code&gt; statement and, in case it fails, we use the helper to handle the exception and trigger the consent flow, so that the request can be performed again with the proper permissions.&lt;/p&gt;

&lt;p&gt;Now we're ready to use our class. First, however, we need to register it into the Blazor dependency injection container, so that in our Razor component we can retrieve an instance with all the dependencies already satisfied. First, let's create an interface that describes our class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface ICustomGraphService
{
    Task&amp;lt;IList&amp;lt;LearningProvider&amp;gt;&amp;gt; GetLearningProvidersAsync();
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's change our class to implement this interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class CustomGraphService : ICustomGraphService
{
    // class implementation
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Please note!&lt;/strong&gt; Throughout the post we're going to add more methods to the &lt;code&gt;CustomGraphService&lt;/code&gt; class. Remember to declare them also in the &lt;code&gt;ICustomGraphService&lt;/code&gt; interface!&lt;/p&gt;

&lt;p&gt;Finally, let's move to the &lt;code&gt;Program.cs&lt;/code&gt; file and, before the &lt;code&gt;builder.Build()&lt;/code&gt; statement, add the following line of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.AddScoped&amp;lt;ICustomGraphService, CustomGraphService&amp;gt;();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now our &lt;code&gt;CustomGraphService&lt;/code&gt; class can be easily consumed by every other class and Razor component in our application. Let's put this into action immediately by creating a new component to display the list of custom providers. Right-click on the &lt;strong&gt;Pages&lt;/strong&gt; folder in Solution Explorer, choose &lt;strong&gt;Add → Razor Component&lt;/strong&gt; and give it a meaningful name, like &lt;code&gt;LearningProviders.razor&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As the first step, let's add the following code at the top of the component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;page "/learningProviders"
@inject ICustomGraphService graphService
@using Microsoft.Graph

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first line is used to enable routing, so that we can treat this component like a page that will be available through the &lt;code&gt;/learningProviders&lt;/code&gt; endpoint of the web application. The second line is used to inject our custom Microsoft Graph wrapper in the component. The third one is required to access all the types included in the Microsoft Graph .NET SDK, which belong to the &lt;code&gt;Microsoft.Graph&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;Then, in the &lt;code&gt;@code&lt;/code&gt; block, let's override the &lt;code&gt;OnInitializedAsync()&lt;/code&gt; method, which gets called when the component is initialized:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@code {
    private IList&amp;lt;LearningProvider&amp;gt; providers;

    protected async override Task OnInitializedAsync()
    {
        providers = await graphService.GetLearningProvidersAsync();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We call the &lt;code&gt;GetLearningProvidersAsync()&lt;/code&gt; method we previously created in our custom class and store the result in a variable called &lt;code&gt;providers&lt;/code&gt;, which is a collection of &lt;code&gt;LearningProvider&lt;/code&gt; objects. Then, we can iterate over this collection to generate a table with the list of custom providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@if (providers != null)
{
    &amp;lt;table class="table"&amp;gt;
        &amp;lt;thead&amp;gt;
            &amp;lt;tr&amp;gt;
                &amp;lt;th&amp;gt;Id&amp;lt;/th&amp;gt;
                &amp;lt;th&amp;gt;Name&amp;lt;/th&amp;gt;
            &amp;lt;/tr&amp;gt;
        &amp;lt;/thead&amp;gt;
        &amp;lt;tbody&amp;gt;
            @foreach (var provider in providers)
            {
                &amp;lt;tr&amp;gt;
                    &amp;lt;td&amp;gt;@provider.Id&amp;lt;/td&amp;gt;
                    &amp;lt;td&amp;gt;@provider.DisplayName&amp;lt;/td&amp;gt;
                &amp;lt;/tr&amp;gt;
            }
        &amp;lt;/tbody&amp;gt;
    &amp;lt;/table&amp;gt;
}
else
{
    &amp;lt;h3&amp;gt;Loading...&amp;lt;/h3&amp;gt;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each provider returned by the Microsoft Graph, we display its &lt;code&gt;Id&lt;/code&gt; and &lt;code&gt;DisplayName&lt;/code&gt; properties. To test our work, press F5 to launch the debugger and then point your browser to the URL &lt;code&gt;https://localhost:7073/learningProviders&lt;/code&gt;. If you did everything correctly, you should see the custom provider we have created with Postman in the previous post:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422039iA82E850757FBB98C%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422039iA82E850757FBB98C%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="blazor-learning-providers.png" alt="blazor-learning-providers.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have our learning provider, let's move forward and also display the list of contents available for this provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Listing the learning content
&lt;/h3&gt;

&lt;p&gt;For this demo, we're going to enhance the component we have built so that it automatically displays the list of available contents right below the custom provider table. The reason for this choice is that, as you may remember, we can't have more than one custom learning provider per tenant, so it wouldn't make sense to give users the option to choose which learning provider they want to see: there will be only one.&lt;/p&gt;

&lt;p&gt;Let's continue working on our &lt;code&gt;CustomGraphService&lt;/code&gt; class to add a new method to retrieve the list of learning contents. Before doing that, however, we have some additional work to do. If you recall the lessons from the previous posts, one of the challenges you must deal with when you work with Viva Learning is that you must use different AAD permissions based on the scenario: delegated permissions to work with learning providers, application permissions to work with learning content. The Microsoft Graph client that is injected by the Microsoft Identity Web library is authenticated with a token retrieved with the &lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-auth-code-flow" rel="noopener noreferrer"&gt;Authorization code flow&lt;/a&gt;: when we start the web app, we are asked to log in with a work account from our Microsoft 365 tenant. This means the token is valid for delegated operations, which is why we were able to use it just fine to retrieve the list of learning providers.&lt;/p&gt;

&lt;p&gt;When we want to get the list of learning content, instead, we must use a token which supports application operations. As such, we can't use the injected Microsoft Graph client; we'll need to get a valid access token using the &lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow" rel="noopener noreferrer"&gt;client credential flow&lt;/a&gt; and create a new Graph client based on it. As a first step, let's create a method to perform the authentication against our AAD application but, this time, using the client credential flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.Graph;
using Microsoft.Identity.Client;
using Microsoft.Identity.Web;
using System.Net.Http.Headers;

public class CustomGraphService : ICustomGraphService
{
    private readonly IConfiguration configuration;
    private readonly GraphServiceClient delegatedClient;
    private readonly MicrosoftIdentityConsentAndConditionalAccessHandler consentHandler;
    private GraphServiceClient? applicationClient;

    public CustomGraphService(IConfiguration configuration, GraphServiceClient delegatedClient, MicrosoftIdentityConsentAndConditionalAccessHandler consentHandler)
    {
        this.configuration = configuration;
        this.delegatedClient = delegatedClient;
        this.consentHandler = consentHandler;
    }

    public async Task AcquireAccessTokenAsync()
    {
        // the client credentials flow requires the resource/.default scope format
        var scopes = new string[] { "https://graph.microsoft.com/.default" };

        var aadConfig = configuration.GetSection("AzureAd");
        var client = ConfidentialClientApplicationBuilder.Create(aadConfig["ClientId"])
            .WithTenantId(aadConfig["TenantId"])
            .WithClientSecret(aadConfig["ClientSecret"]).Build();

        var token = await client.AcquireTokenForClient(scopes).ExecuteAsync();

        var authProvider = new DelegateAuthenticationProvider(async (request) =&amp;gt;
        {
            request.Headers.Authorization =
                new AuthenticationHeaderValue("Bearer", token.AccessToken);
        });

        applicationClient = new GraphServiceClient(authProvider);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the &lt;code&gt;AcquireAccessTokenAsync()&lt;/code&gt; method, we use the &lt;code&gt;ConfidentialClientApplicationBuilder&lt;/code&gt; class included in the &lt;code&gt;Microsoft.Identity.Client&lt;/code&gt; namespace, which allows us to authenticate against our AAD application using the client credentials flow. We set it up by using the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Create()&lt;/code&gt;, which requires the Client Id of our AAD app.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WithTenantId()&lt;/code&gt;, which requires the Tenant Id of our AAD app.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WithClientSecret&lt;/code&gt;, which requires the Client Secret of our AAD app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All this information has already been stored in the &lt;code&gt;appsettings.json&lt;/code&gt; file when we connected our application to the Microsoft Identity platform. As such, we can just retrieve the section called &lt;code&gt;AzureAd&lt;/code&gt; from the configuration file, using the &lt;code&gt;configuration&lt;/code&gt; object. Since this object is registered inside the DI container, we just need to add the &lt;code&gt;IConfiguration&lt;/code&gt; interface as a parameter of the public constructor of our &lt;code&gt;CustomGraphService&lt;/code&gt; class. As the last step, we call the &lt;code&gt;Build()&lt;/code&gt; method to create our authentication client.&lt;/p&gt;

&lt;p&gt;Finally, we can use the &lt;code&gt;AcquireTokenForClient()&lt;/code&gt; method with the scopes we need, followed by the &lt;code&gt;ExecuteAsync()&lt;/code&gt; method, to actually retrieve the access token we need to interact with the Microsoft Graph using application permissions. Once we have it, we can use it to create a &lt;code&gt;DelegateAuthenticationProvider&lt;/code&gt; object, which we can pass as a parameter to create a new &lt;code&gt;GraphServiceClient&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;If we did everything properly, now our &lt;code&gt;CustomGraphService&lt;/code&gt; class should offer two different &lt;code&gt;GraphServiceClient&lt;/code&gt; objects: one called &lt;code&gt;delegatedClient&lt;/code&gt;, which supports delegated permissions and that we can use to work with learning providers; one called &lt;code&gt;applicationClient&lt;/code&gt;, which supports application permissions and that we can use to work with learning content.&lt;/p&gt;

&lt;p&gt;Thanks to this second client implementation, now it's quite easy to add a new method to our &lt;code&gt;CustomGraphService&lt;/code&gt; class that we can use to get the list of learning contents for a given learning provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async Task&amp;lt;IList&amp;lt;LearningContent&amp;gt;?&amp;gt; GetLearningContentAsync(string id)
{
    var response = await applicationClient.EmployeeExperience.LearningProviders[id].LearningContents.Request().GetAsync();
    return response.CurrentPage;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can go back to our Razor component and add a couple of additional statements in the &lt;code&gt;OnInitializedAsync()&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@code {
    private IList&amp;lt;LearningProvider&amp;gt; providers;
    private IList&amp;lt;LearningContent&amp;gt; contents;

    protected async override Task OnInitializedAsync()
    {
        await graphService.AcquireAccessTokenAsync();
        providers = await graphService.GetLearningProvidersAsync();
        if (providers?.Count &amp;gt; 0)
        {
            contents = await graphService.GetLearningContentAsync(providers[0].Id);
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we call our new &lt;code&gt;AcquireAccessTokenAsync()&lt;/code&gt; method, to make sure we get the proper access token before performing any operation. Then, after getting the list of providers, we call the new &lt;code&gt;GetLearningContentAsync()&lt;/code&gt; method to get the list of contents for the first provider in the list (which will also be the only available one). The list of contents is stored in the &lt;code&gt;contents&lt;/code&gt; variable, which is a collection of &lt;code&gt;LearningContent&lt;/code&gt; objects. We use it in a similar way as we did for the learning providers, to build a table which displays the list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@if (contents != null)
{
    &amp;lt;table class="table"&amp;gt;
        &amp;lt;thead&amp;gt;
            &amp;lt;tr&amp;gt;
                &amp;lt;th&amp;gt;Title&amp;lt;/th&amp;gt;
                &amp;lt;th&amp;gt;Url&amp;lt;/th&amp;gt;
            &amp;lt;/tr&amp;gt;
        &amp;lt;/thead&amp;gt;
        &amp;lt;tbody&amp;gt;
            @foreach (var content in contents)
            {
                &amp;lt;tr&amp;gt;
                    &amp;lt;td&amp;gt;@content.Title&amp;lt;/td&amp;gt;
                    &amp;lt;td&amp;gt;@content.ContentWebUrl&amp;lt;/td&amp;gt;
                &amp;lt;/tr&amp;gt;
            }
        &amp;lt;/tbody&amp;gt;
    &amp;lt;/table&amp;gt;

}
else
{
    &amp;lt;h3&amp;gt;Loading...&amp;lt;/h3&amp;gt;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each learning content, we display its &lt;code&gt;Title&lt;/code&gt; and &lt;code&gt;ContentWebUrl&lt;/code&gt; properties. If you have implemented everything properly, when you press F5 and you go to the &lt;code&gt;https://localhost:7073/learningProviders&lt;/code&gt; page, you should see, below the custom provider, the articles from this blog we added in the previous post with Postman:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422040i3FAD6681CC36AF3E%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422040i3FAD6681CC36AF3E%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="blazor-learning-contents.png" alt="blazor-learning-contents.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's move now to the last step of our demo: providing a form to add new learning content.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding the learning content
&lt;/h3&gt;

&lt;p&gt;Unfortunately, at the time of writing, we can't use the &lt;code&gt;applicationClient&lt;/code&gt; object we have created in the previous section, due to the way the API works. If you remember what we have learned in the previous post, this API has a couple of specific requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It must be a PATCH operation.&lt;/li&gt;
&lt;li&gt;You must include in the URL the external id of your content, like &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/303da5b7-dce3-4998-a76f-7a18849fc697/learningContents(externalId='3617543')&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Microsoft Graph .NET SDK, unfortunately, doesn't support this scenario. Additionally, the learning content request we previously built using the &lt;code&gt;applicationClient.EmployeeExperience.LearningProviders[id].LearningContents.Request()&lt;/code&gt; method supports only two methods: &lt;code&gt;GetAsync()&lt;/code&gt;, which maps to a GET request, and &lt;code&gt;AddAsync()&lt;/code&gt;, which maps to a POST request. We don't have any &lt;code&gt;UpdateAsync()&lt;/code&gt; method, which would map to the PATCH request we need.&lt;/p&gt;

&lt;p&gt;But don't worry, we can easily support our requirements by manually building our request using the standard &lt;code&gt;HttpClient&lt;/code&gt; class in .NET.&lt;/p&gt;

&lt;p&gt;First, let's change our &lt;code&gt;AcquireAccessTokenAsync()&lt;/code&gt; method a bit, so that we can store the access token as a global property of the &lt;code&gt;CustomGraphService&lt;/code&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class CustomGraphService : ICustomGraphService
{
   private string? accessToken;

   public async Task AcquireAccessTokenAsync()
   {
       // the client credentials flow requires the resource/.default scope format
       var scopes = new string[] { "https://graph.microsoft.com/.default" };

       var aadConfig = configuration.GetSection("AzureAd");
       var client = ConfidentialClientApplicationBuilder.Create(aadConfig["ClientId"])
           .WithTenantId(aadConfig["TenantId"])
           .WithClientSecret(aadConfig["ClientSecret"]).Build();

       var token = await client.AcquireTokenForClient(scopes).ExecuteAsync();
       accessToken = token.AccessToken;

       // initialization of the application Graph client
   }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can use the token to create a custom authenticated &lt;code&gt;HttpClient&lt;/code&gt; and submit the proper request to the Microsoft Graph:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async Task AddLearningContent(string providerId, string contentId, string title, string contentUrl, string language)
{
    string baseUrl = applicationClient.EmployeeExperience.LearningProviders.RequestUrl;

    string url = $"{baseUrl}/{providerId}/learningContents(externalId='{contentId}')";
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    LearningContentModel content = new()
    {
        Title = title,
        ExternalId = contentId,
        ContentWebUrl = contentUrl,
        LanguageTag= language
    };

    JsonContent jsonContent = JsonContent.Create(content);

    var result = await client.PatchAsync(url, jsonContent);

    // surface any Graph error instead of silently ignoring the response
    result.EnsureSuccessStatusCode();
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we leverage the standard Microsoft Graph client to retrieve the base URL to work with the Viva Learning APIs. We use it to build the full URL we need to submit the request, by including the id of the learning provider and the external id. Then we create a new &lt;code&gt;HttpClient&lt;/code&gt; instance, and we use the &lt;code&gt;AuthenticationHeaderValue&lt;/code&gt; object with the access token to define the default &lt;code&gt;Authorization&lt;/code&gt; header. Now the &lt;code&gt;HttpClient&lt;/code&gt; object can make authenticated requests to the Microsoft Graph. The body of the request must be defined in JSON, so we create a new &lt;code&gt;LearningContentModel&lt;/code&gt; object and we turn it into a JSON object using the &lt;code&gt;JsonContent.Create()&lt;/code&gt; method. Finally, we submit the request using the &lt;code&gt;PatchAsync()&lt;/code&gt; method, which is the equivalent of performing a PATCH operation.&lt;/p&gt;

&lt;p&gt;There's only one caveat to keep in mind. As you can see, to create the JSON payload we're using a &lt;code&gt;LearningContentModel&lt;/code&gt; class, instead of the &lt;code&gt;LearningContent&lt;/code&gt; one which is part of the Microsoft Graph .NET SDK. The reason is that the SDK class contains a lot of extra properties that the standard Microsoft Graph .NET client can handle in the right way but which, when serialized by a custom client like ours, would generate an invalid request. As such, I've created a class called &lt;code&gt;LearningContentModel&lt;/code&gt; which exposes only the properties I need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class LearningContentModel
{
    [Required]
    public string? ExternalId { get; set; }

    [Required]
    public string? Title { get; set; }

    [Required]
    public string? ContentWebUrl { get; set; }

    [Required]
    public string? LanguageTag { get; set; }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is quite a simple implementation, which exposes only the minimum set of properties required to create learning content. Of course, you can customize it to add any extra property you might need.&lt;/p&gt;
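To see what actually goes over the wire, here's a small standalone sketch that serializes the model the same way `JsonContent.Create()` does by default (the "web" serializer defaults, which emit camelCase property names — the casing the Graph expects). `LearningContentModel` is reproduced without the `[Required]` attributes just to keep the snippet self-contained:

```csharp
using System.Text.Json;

// Reproduced without [Required] attributes to keep the snippet standalone.
public class LearningContentModel
{
    public string? ExternalId { get; set; }
    public string? Title { get; set; }
    public string? ContentWebUrl { get; set; }
    public string? LanguageTag { get; set; }
}

public static class PayloadDemo
{
    // JsonContent.Create() serializes with the "web" defaults, which emit
    // camelCase property names (externalId, title, ...) in the JSON body.
    public static string BuildPayload(LearningContentModel content) =>
        JsonSerializer.Serialize(content,
            new JsonSerializerOptions(JsonSerializerDefaults.Web));
}
```

For example, `BuildPayload(new LearningContentModel { ExternalId = "3617543" })` produces a body containing an `externalId` property rather than `ExternalId`.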

&lt;p&gt;Now we have everything we need in terms of business logic. We need, however, the Razor component which will act as a submission form. Right click on the  &lt;strong&gt;Pages&lt;/strong&gt;  folder and choose  &lt;strong&gt;Add → Razor component&lt;/strong&gt;. Give it a meaningful name, like &lt;code&gt;NewLearningContent.razor&lt;/code&gt; and click  &lt;strong&gt;Add&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let's start to define the header of the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@page "/newLearningContent/{learningProviderId}"
@inject ICustomGraphService graphService;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define the route of the page, this time, however, with support for a parameter: we're going to pass to the component, through the URL, the id of the learning provider we're going to create the content for. Then, we inject our usual &lt;code&gt;CustomGraphService&lt;/code&gt; object. Now let's look at the code section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@code {
    [Parameter]
    public string? LearningProviderId { get; set; }

    public LearningContentModel learningContentModel = new();

    public async Task HandleSubmit()
    {
        if (learningContentModel != null)
        {
            // make sure we have a valid application token before calling the Graph
            await graphService.AcquireAccessTokenAsync();
            await graphService.AddLearningContent(LearningProviderId, learningContentModel.ExternalId, learningContentModel.Title, learningContentModel.ContentWebUrl, learningContentModel.LanguageTag);
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;LearningProviderId&lt;/code&gt; property is decorated with the &lt;code&gt;[Parameter]&lt;/code&gt; attribute, which means that it will be automatically injected with the value coming from the URL. We also define a new property of the type &lt;code&gt;LearningContentModel&lt;/code&gt;: we're going to use it to build the input form. Finally, we define a method called &lt;code&gt;HandleSubmit()&lt;/code&gt;, which is going to be triggered when the user clicks on the Submit button of the form. The method simply takes care of calling the &lt;code&gt;AddLearningContent()&lt;/code&gt; method we have previously defined in the &lt;code&gt;CustomGraphService&lt;/code&gt; class. When the user fills out the form, the &lt;code&gt;learningContentModel&lt;/code&gt; object includes all the information filled in by the user. As such, we just pass its properties as inputs for the method.&lt;/p&gt;

&lt;p&gt;Now that we have all the logic in place, we can build our input form using the &lt;code&gt;EditForm&lt;/code&gt; component in Blazor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;EditForm Model="@learningContentModel" OnValidSubmit="@HandleSubmit"&amp;gt;
    &amp;lt;DataAnnotationsValidator /&amp;gt;
    &amp;lt;ValidationSummary /&amp;gt;

    &amp;lt;p&amp;gt;
        &amp;lt;div&amp;gt;External Id&amp;lt;/div&amp;gt;
        &amp;lt;div&amp;gt;&amp;lt;InputText id="externalId" @bind-Value="learningContentModel.ExternalId" /&amp;gt;&amp;lt;/div&amp;gt;
    &amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;
        &amp;lt;div&amp;gt;Title&amp;lt;/div&amp;gt;
        &amp;lt;div&amp;gt;&amp;lt;InputText id="title" @bind-Value="learningContentModel.Title" /&amp;gt;&amp;lt;/div&amp;gt;
    &amp;lt;/p&amp;gt;

    &amp;lt;p&amp;gt;
        &amp;lt;div&amp;gt;Content URL&amp;lt;/div&amp;gt;
        &amp;lt;div&amp;gt;&amp;lt;InputText id="contentUrl" @bind-Value="learningContentModel.ContentWebUrl" /&amp;gt;&amp;lt;/div&amp;gt;
    &amp;lt;/p&amp;gt;

    &amp;lt;p&amp;gt;
        &amp;lt;div&amp;gt;Language&amp;lt;/div&amp;gt;
        &amp;lt;div&amp;gt;&amp;lt;InputText id="longLogoLight" @bind-Value="learningContentModel.LanguageTag" /&amp;gt;&amp;lt;/div&amp;gt;
    &amp;lt;/p&amp;gt;

    &amp;lt;p&amp;gt;
        &amp;lt;button type="submit"&amp;gt;Create&amp;lt;/button&amp;gt;
    &amp;lt;/p&amp;gt;

&amp;lt;/EditForm&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;EditForm&lt;/code&gt; component requires two properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Model&lt;/code&gt; is the object which holds the model that will be filled with the data coming from the form. It's the &lt;code&gt;learningContentModel&lt;/code&gt; property we have defined in the &lt;code&gt;@code&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;OnValidSubmit&lt;/code&gt;, which is the method to invoke when the form is submitted. &lt;code&gt;HandleSubmit&lt;/code&gt; is the name of the handler we have created in the &lt;code&gt;@code&lt;/code&gt; section.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inside the &lt;code&gt;EditForm&lt;/code&gt; component, we have the freedom to define the various fields as we prefer. The only requirement is to use the &lt;code&gt;@bind-Value&lt;/code&gt; property to connect each field with the equivalent property in the model. We also need a &lt;code&gt;button&lt;/code&gt; with &lt;code&gt;submit&lt;/code&gt; as its &lt;code&gt;type&lt;/code&gt;, which will trigger the method defined in the &lt;code&gt;OnValidSubmit&lt;/code&gt; property.&lt;/p&gt;

&lt;p&gt;This is it. If you want to test the form, you can invoke the URL &lt;code&gt;https://localhost:7073/newLearningContent/&lt;/code&gt; followed by the id of your custom learning provider (for example, &lt;code&gt;https://localhost:7073/newLearningContent/303da5b7-dce3-4998-a76f-7a18849fc697&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422041i09B34B0F8436CEC6%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F422041i09B34B0F8436CEC6%2Fimage-size%2Flarge%3Fv%3Dv2%26px%3D999" title="blazor-new-content.png" alt="blazor-new-content.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to give the user an easy option to create new content, you can customize the &lt;code&gt;LearningProviders.razor&lt;/code&gt; component and:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Inject the &lt;code&gt;NavigationManager&lt;/code&gt; object into the component, which is provided by Blazor to manage navigation across pages:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a new button in the component:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implement the &lt;code&gt;CreateNewLearningContent&lt;/code&gt; method to perform the navigation:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
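Putting the three steps together, the relevant additions to `LearningProviders.razor` could look roughly like this. This is a sketch, not the exact code from the repository: the route and the `CreateNewLearningContent` name come from the text above, and the snippet assumes the component's existing `providers` field:

```razor
@inject NavigationManager navigationManager

<button @onclick="CreateNewLearningContent">Add new content</button>

@code {
    private void CreateNewLearningContent()
    {
        // navigate to the form page, passing the provider id as a route parameter
        var providerId = providers?.FirstOrDefault()?.Id;
        if (providerId != null)
        {
            navigationManager.NavigateTo($"/newLearningContent/{providerId}");
        }
    }
}
```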

&lt;p&gt;When the user presses the button, they will be automatically redirected to the &lt;code&gt;newLearningContent&lt;/code&gt; page with, as an extra parameter, the id of the learning provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;It was quite a long journey, but now we have a fully working web client that we can use to manage our custom learning content in Viva Learning. In this post, we have focused on the various nuances you must keep in mind when you build such a solution, like the requirement of using different authentication types for the Microsoft Graph client to manage the different permissions required to work with the Viva Learning APIs.&lt;/p&gt;

&lt;p&gt;You can find the full solution &lt;a href="https://github.com/qmatteoq/VivaLearningApp" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;. Before using it, remember to change the configuration in the &lt;code&gt;appsettings.json&lt;/code&gt; file with the information about your app registration on Azure AD.&lt;/p&gt;

&lt;p&gt;In the same solution, you will also find another project which uses the same classes and APIs to support another common scenario with learning providers: content syncing. In this scenario, you don't manually add custom learning content; you sync it from another source, like an Excel file or a Web API. The project on GitHub includes an Azure Function that reads a CSV file stored on Azure Storage and uses it to import a list of contents into the custom learning provider.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>emptystring</category>
    </item>
    <item>
      <title>Adding a custom content provider to Viva Learning with Microsoft Graph</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Mon, 28 Nov 2022 10:29:16 +0000</pubDate>
      <link>https://dev.to/qmatteoq/adding-a-custom-content-provider-to-viva-learning-with-microsoft-graph-g2k</link>
      <guid>https://dev.to/qmatteoq/adding-a-custom-content-provider-to-viva-learning-with-microsoft-graph-g2k</guid>
      <description>&lt;p&gt;Since the pandemic hit, improving the employee experience in this new hybrid world has become a priority for Microsoft. One of the tools to support this mission is &lt;a href="https://www.microsoft.com/en-us/microsoft-viva"&gt;Microsoft Viva&lt;/a&gt;, a suite of apps available through Teams which can help employees to stay more connected, to better manage their time and to work more efficiently.&lt;/p&gt;

&lt;p&gt;Today we'll focus on one of the apps of the Viva suite called &lt;a href="https://www.microsoft.com/en-us/microsoft-viva/learning"&gt;Viva Learning&lt;/a&gt;, which helps employees to grow and learn. Through Viva Learning, enterprises can make available learning content in a variety of forms (videos, articles, books, etc.) that employees can consume at their own pace. Through the platform, administrators can also dispatch learning assignments to employees, which is very useful for scenarios like compliance trainings or learning experiences that are required for your role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OtLtmet1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421575i51DBB31F22ABF7B1/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OtLtmet1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421575i51DBB31F22ABF7B1/image-size/large%3Fv%3Dv2%26px%3D999" alt="viva-learning-intro.png" title="viva-learning-intro.png" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microsoft has partnered with many popular content providers, like Pluralsight or Coursera, to bring their content into Viva Learning. If the company has a license for one of these providers, employees can easily consume their learning content directly from Teams, without having to open an external portal. But what if you want to bring your own custom content into Viva Learning? In this article, we're going to explore the new preview feature which enables adding custom content providers to Viva Learning through the usage of Microsoft Graph. This is a developer-focused scenario, which you can leverage when you already have a content system you would like to bring into the platform. For more casual scenarios (like having a curated list of content you want to share with your employees), &lt;a href="https://learn.microsoft.com/en-us/viva/learning/configure-sharepoint-content-source"&gt;you can leverage the SharePoint integration&lt;/a&gt;, which enables you to turn a SharePoint list into a content source for Microsoft Viva.&lt;/p&gt;

&lt;p&gt;The APIs currently in preview, which are the ones we'll see today, are focused on ingesting custom content into Viva Learning. There's an additional set of APIs to manage assignments and track completion which are on the roadmap; they will be released next year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important!&lt;/strong&gt;  If you're a content provider who wants to make your content available to all the Viva Learning customers around the world, this article isn't for you, since this scenario isn't publicly available yet. This article focuses on building your own custom provider, so that you can make it available to employees who are part of a specific Microsoft 365 tenant.&lt;/p&gt;

&lt;p&gt;To walk through the usage of these APIs, we're going to build a custom content provider based on the content published on this blog. We're going to create a learning provider called  &lt;strong&gt;Modern Work App Consult&lt;/strong&gt; , which will host as content some of the articles that have been published here.&lt;/p&gt;

&lt;p&gt;Let's start!&lt;/p&gt;

&lt;h3&gt;
  
  
  Working with Microsoft Graph
&lt;/h3&gt;

&lt;p&gt;The way Microsoft has opted to enable the integration experience for Viva Learning is through the &lt;a href="https://learn.microsoft.com/en-us/graph/overview"&gt;Microsoft Graph&lt;/a&gt;. Specifically, the team has introduced two concepts into the platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Learning Provider&lt;/strong&gt; : it's the source of the learning content. Pluralsight or Coursera, for example, are learning providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Content&lt;/strong&gt; : it's content that is made available to employees. It can be a video, an article, a book, etc. A learning provider can host one or more learning contents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These concepts are exposed through a new endpoint in the Microsoft Graph called &lt;code&gt;employeeExperience&lt;/code&gt;. Since the feature is still in preview, it's currently part of the beta endpoint: &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As for any scenario in which you need to work with Microsoft Graph, you will need to register an application in Azure Active Directory to handle the authentication and the required permissions. However, in the case of Viva Learning, there's an extra challenge, due to the different way permissions are managed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The APIs to work with a learning provider are supported only with  &lt;strong&gt;delegated permissions&lt;/strong&gt;. This is the scenario where you are authenticated as a user in the tenant, so every operation against the Microsoft Graph is performed as if you were the user. These are the permissions that usually enable Microsoft Graph to retrieve information about the logged-in user, like their profile, their appointments, or their contacts.&lt;/li&gt;
&lt;li&gt;The APIs to work with learning content are supported only with  &lt;strong&gt;application permissions&lt;/strong&gt;. This is the scenario where you are operating as a service or a daemon, so there isn't a specific user logged in. These are the permissions that usually enable Microsoft Graph to retrieve information about the entire tenant, like the profiles of all the users. Under these permissions, you will find most of the APIs which you can use to interact with the whole organization, like the Intune APIs, the Teams administration APIs, or the Windows Update management APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next section, we're going to set up the Azure Active Directory application in the correct way, then we'll use Postman to start working with the Viva Learning APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the application on Azure
&lt;/h3&gt;

&lt;p&gt;Before working with the Viva Learning APIs, we will need to register an application on Azure Active Directory to perform the authentication. As such, open a browser and go to &lt;a href="https://portal.azure.com"&gt;the Azure portal&lt;/a&gt;. Make sure to login with an administrator account who belongs to the tenant which is hosting your Microsoft 365 subscription. Now open the Azure Active Directory section and, from the blade on the left, choose  &lt;strong&gt;App registrations&lt;/strong&gt;  and click on  &lt;strong&gt;New registration&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B-IIL4UX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421577iC6E01A82B1594D0B/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B-IIL4UX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421577iC6E01A82B1594D0B/image-size/large%3Fv%3Dv2%26px%3D999" alt="register-app.png" title="register-app.png" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give it a meaningful name (like Viva Learning app), then under  &lt;strong&gt;Supported account types&lt;/strong&gt;  choose  &lt;strong&gt;Accounts in this organizational directory only&lt;/strong&gt;. As  &lt;strong&gt;Redirect URI&lt;/strong&gt; , select  &lt;strong&gt;Web&lt;/strong&gt;  as platform and add the following URL: &lt;code&gt;https://oauth.pstmn.io/v1/browser-callback&lt;/code&gt;. It's the one required by Postman, which we're going to use later to experiment with the APIs. Once you have configured everything, click  &lt;strong&gt;Register&lt;/strong&gt;. As the first step, let's configure the API permissions we need to work with Viva Learning. Click on  &lt;strong&gt;API permissions&lt;/strong&gt; , choose  &lt;strong&gt;Add a permission&lt;/strong&gt;  and select  &lt;strong&gt;Microsoft Graph&lt;/strong&gt;. Now add the following permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under  &lt;strong&gt;Delegated permissions&lt;/strong&gt; , choose  &lt;strong&gt;LearningProvider.ReadWrite&lt;/strong&gt;. This is the permission required to create and list content providers, as described &lt;a href="https://learn.microsoft.com/en-us/graph/api/employeeexperience-post-learningproviders?view=graph-rest-beta&amp;amp;tabs=http"&gt;in the official docs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Under  &lt;strong&gt;Application permissions&lt;/strong&gt; , choose  &lt;strong&gt;LearningContent.ReadWrite&lt;/strong&gt;. This is the permission required to create and list learning content, as described &lt;a href="https://learn.microsoft.com/en-us/graph/api/learningcontent-update?view=graph-rest-beta&amp;amp;tabs=csharp"&gt;in the official docs&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have added the required permissions, make sure to click on the button  &lt;strong&gt;Grant admin consent for xyz&lt;/strong&gt; , where xyz is your company's name. The Viva Learning APIs, in fact, require admin approval to be used. This is how your dashboard should look:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kttugeU_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421578iD08FD05E59F247C6/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kttugeU_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421578iD08FD05E59F247C6/image-size/large%3Fv%3Dv2%26px%3D999" alt="api-permissions.png" title="api-permissions.png" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to generate a secret, which we'll need to manage the authentication. Click on  &lt;strong&gt;Certificates &amp;amp; secrets&lt;/strong&gt; , click on  &lt;strong&gt;New client secret&lt;/strong&gt; , optionally choose a name and an expiration and click  &lt;strong&gt;Add&lt;/strong&gt;. The secret will be added, and you'll be able to see its value in the table.  &lt;strong&gt;Make sure to copy it somewhere!&lt;/strong&gt;  We'll need it later, but this is the only time it will be displayed. If you lose it, you'll need to generate another one.&lt;/p&gt;

&lt;p&gt;We have everything we need now. To learn the basics about the Viva Learning APIs, we're going to use Postman to perform requests against the Microsoft Graph.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set up Postman
&lt;/h3&gt;

&lt;p&gt;The Microsoft documentation already provides an excellent article on how to set up Postman with the Microsoft Graph. The process is made easier by the availability of a dedicated Postman collection, which already includes all the Microsoft Graph APIs and a dedicated environment to simplify the authentication process. The documentation &lt;a href="https://learn.microsoft.com/en-us/graph/use-postman"&gt;is available here&lt;/a&gt;; you just need to follow it step by step. At the end of Step 4, titled  &lt;strong&gt;Configure authentication&lt;/strong&gt; , you will get an environment like the one in the image below set up in your Postman client:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8wHnD6i9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421579i4EC3019D5EE46D68/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8wHnD6i9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421579i4EC3019D5EE46D68/image-size/large%3Fv%3Dv2%26px%3D999" alt="postman.png" title="postman.png" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's customize the variables based on the AAD app registration we have just created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClientID&lt;/strong&gt; : copy the value under &lt;strong&gt;Application (client) ID&lt;/strong&gt; from the Overview page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TenantID&lt;/strong&gt; : copy the value under &lt;strong&gt;Directory (tenant) ID&lt;/strong&gt; from the Overview.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ClientSecret&lt;/strong&gt; : copy the client secret you have previously generated in the  &lt;strong&gt;Certificates &amp;amp; secrets&lt;/strong&gt;  section.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we're ready to start working with the Viva Learning APIs. If you have properly followed the tutorial to add the Microsoft Graph collection into Postman, you will have two subfolders: Delegated and Application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VaSeLEB1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421581i3E6C7077ABF257CC/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VaSeLEB1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421581i3E6C7077ABF257CC/image-size/large%3Fv%3Dv2%26px%3D999" alt="collections.png" title="collections.png" width="387" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's start with the creation of a new Learning Provider, which is a delegated operation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a learning provider
&lt;/h3&gt;

&lt;p&gt;The first step to working with delegated operations is to get the proper access token, which is required to authenticate against the Microsoft Graph. Click on  &lt;strong&gt;Delegated&lt;/strong&gt;  and you should automatically see the  &lt;strong&gt;Authorization&lt;/strong&gt;  tab. Everything should already be set up in the right way. All you have to do is click on the  &lt;strong&gt;Get New Access Token&lt;/strong&gt;  button at the end of the page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qq2J60KK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421582i07E694BBC5BB2A09/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qq2J60KK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421582i07E694BBC5BB2A09/image-size/large%3Fv%3Dv2%26px%3D999" alt="get-new-access-token.png" title="get-new-access-token.png" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we are in a delegated scenario (so we're going to call the APIs with the identity of a logged-in user), Postman will open a popup to walk you through the authentication flow with the Microsoft Identity platform. Once you have logged in with an administrator account from your Microsoft 365 tenant, you will be issued an access token, which will be saved in Postman and which we can use to work with all the Graph APIs that require delegated permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1jnPo8xF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421583iD5B59515BB81FB13/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1jnPo8xF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421583iD5B59515BB81FB13/image-size/large%3Fv%3Dv2%26px%3D999" alt="postman-authentication.png" title="postman-authentication.png" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Microsoft Graph collection for Postman you have forked doesn't contain the Viva Learning APIs yet, so we'll need to manually create a new request. We'll create it under the Delegated folder, so that we can retain the same authentication flow. Right click on  &lt;strong&gt;Delegated&lt;/strong&gt;  and choose  &lt;strong&gt;Add request&lt;/strong&gt;. Name it  &lt;strong&gt;Create learning provider&lt;/strong&gt;  and configure it like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As method, choose  &lt;strong&gt;POST&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;As endpoint, set &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, we must specify the body of the request, which is the JSON payload that describes the content provider we want to create. This is an example payload we can use to create our learning provider called  &lt;strong&gt;Modern Work App Consult&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "displayName": "Modern Work App Consult",
    "squareLogoWebUrlForDarkTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "longLogoWebUrlForDarkTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "squareLogoWebUrlForLightTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "longLogoWebUrlForLightTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "isEnabled": true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
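&lt;p&gt;If you later want to script this step instead of using Postman, the same payload can be assembled in code. The following Python sketch is purely illustrative (the helper name is made up, and no official SDK is used); it builds the payload above and the target endpoint:&lt;/p&gt;

```python
# Illustrative helper (not an official SDK) that builds the payload for
# creating a Viva Learning provider, as described in this article.

GRAPH_BETA = "https://graph.microsoft.com/beta"

def build_provider_payload(display_name, logo_url, is_enabled=True):
    """Build the learningProviders payload; the same logo is reused
    for all four theme/shape variants, as in the example above."""
    return {
        "displayName": display_name,
        "squareLogoWebUrlForDarkTheme": logo_url,
        "longLogoWebUrlForDarkTheme": logo_url,
        "squareLogoWebUrlForLightTheme": logo_url,
        "longLogoWebUrlForLightTheme": logo_url,
        "isEnabled": is_enabled,
    }

payload = build_provider_payload(
    "Modern Work App Consult",
    "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
)
# A real call would POST this payload (with a delegated access token) to:
endpoint = f"{GRAPH_BETA}/employeeExperience/learningProviders"
```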



&lt;p&gt;We must provide the name of the content provider (&lt;code&gt;displayName&lt;/code&gt;) and four different versions of the provider's logo, which will be used based on the scenario and the theme of the user. Now hit  &lt;strong&gt;Send&lt;/strong&gt; : if everything goes well, we'll get back a JSON response with the same information we just submitted, plus some additional fields that have been generated as part of the ingestion process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#learningProviders/$entity",
    "id": "ba9790ef-21d5-4c17-808c-acda55230253",
    "displayName": "Modern Work App Consult",
    "squareLogoWebUrlForDarkTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "longLogoWebUrlForDarkTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "squareLogoWebUrlForLightTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "longLogoWebUrlForLightTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
    "isEnabled": true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important one is &lt;code&gt;id&lt;/code&gt;. This is the unique identifier of the content provider; we're going to need it to perform any additional operation with the provider, like adding some content or changing some of its properties. If we want to double-check that the provider was created successfully, we can perform a request against the same endpoint, but this time with a  &lt;strong&gt;GET&lt;/strong&gt;  operation. The API will return the list of all the custom content providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#employeeExperience/learningProviders",
    "value": [
        {
            "id": "303da5b7-dce3-4998-a76f-7a18849fc697",
            "displayName": "Modern Work App Consult",
            "squareLogoWebUrlForDarkTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
            "longLogoWebUrlForDarkTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
            "squareLogoWebUrlForLightTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
            "longLogoWebUrlForLightTheme": "https://support.content.office.net/en-us/media/4c531d12-4c13-4782-a6e4-4b8f991801a3.png",
            "isEnabled": true,
            "loginWebUrl": null
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
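&lt;p&gt;When scripting against this API, the provider id can be pulled out of the GET response with a few lines of standard-library Python. This is a hedged sketch (the helper is hypothetical; the sample payload mirrors the response shown above, trimmed to the relevant fields):&lt;/p&gt;

```python
# Sketch: extract the provider id from the GET learningProviders response.
# The sample below is a trimmed copy of the response shown in this article.
import json

sample_response = """
{
    "value": [
        {
            "id": "303da5b7-dce3-4998-a76f-7a18849fc697",
            "displayName": "Modern Work App Consult",
            "isEnabled": true
        }
    ]
}
"""

def find_provider_id(response_text, display_name):
    """Return the id of the provider with the given displayName, or None."""
    for provider in json.loads(response_text).get("value", []):
        if provider.get("displayName") == display_name:
            return provider["id"]
    return None

provider_id = find_provider_id(sample_response, "Modern Work App Consult")
```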



&lt;p&gt;&lt;strong&gt;IMPORTANT!&lt;/strong&gt;  At the time of authoring this article, you can have only one custom provider per tenant. If you submit further POST requests to the &lt;code&gt;/employeeExperience/learningProviders&lt;/code&gt; endpoint after you have already created a custom provider, they will update the existing one instead of creating a new one.&lt;/p&gt;

&lt;p&gt;Now we're ready to add some content to our learning provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding content to a learning provider
&lt;/h3&gt;

&lt;p&gt;To add content to a learning provider, we need to move from the  &lt;strong&gt;Delegated&lt;/strong&gt;  folder in Postman to the  &lt;strong&gt;Application&lt;/strong&gt;  one since, as we mentioned at the beginning of the article, the APIs to add content are available only with this type of permission. As such, we must repeat the authentication process to get the proper access token. The starting point is the  &lt;strong&gt;Application&lt;/strong&gt;  folder under the Microsoft Graph collection in Postman. Also in this case, the configuration will already be defined in the right way; all you have to do is click on the  &lt;strong&gt;Get New Access Token&lt;/strong&gt;  button. This time, however, you won't see any popup asking you to authenticate with a user. We are in an application permission scenario, so we're going to use the &lt;a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow"&gt;Client Credentials authentication flow&lt;/a&gt;. We won't authenticate as a user, but as an application, through the AAD app we have created in the Azure portal. As such, you will directly get an access token, which will be stored in Postman for later use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Eb0NDgbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421585iC161B5C73FAAB99F/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Eb0NDgbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421585iC161B5C73FAAB99F/image-size/large%3Fv%3Dv2%26px%3D999" alt="postman-application-authentication.png" title="postman-application-authentication.png" width="728" height="872"&gt;&lt;/a&gt;&lt;/p&gt;
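&lt;p&gt;Behind the scenes, Postman is performing a standard OAuth2 client-credentials token request for you. As a rough sketch of what that request looks like (the tenant id, client id, and secret below are placeholders, and the helper itself is hypothetical):&lt;/p&gt;

```python
# Sketch of the client-credentials token request Postman performs for you.
# Tenant/client/secret values are placeholders from the AAD app registration.
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret):
    """Return (url, form_body) for the OAuth2 client-credentials flow."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests all application permissions granted to the app
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

url, body = build_token_request("contoso-tenant-id", "app-client-id", "app-secret")
# A real call would POST `body` to `url` as form data and read the
# access_token field from the JSON response.
```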

&lt;p&gt;As we did before for the creation of the content provider, also in this case we will need to manually create a new request under the  &lt;strong&gt;Application&lt;/strong&gt;  folder, since the Viva Learning APIs are missing in the collection. Right click on the  &lt;strong&gt;Application&lt;/strong&gt;  folder, click on  &lt;strong&gt;New request&lt;/strong&gt;  and name it  &lt;strong&gt;Add learning content&lt;/strong&gt;. This API is a bit peculiar, because it doesn't differentiate between adding new content (which is typically done with a POST operation) and updating an existing one (which is typically done with a PATCH operation). The only supported operation is PATCH and, based on the id we're going to provide, the API will understand if it needs to create new content or update an existing one. As such, make sure to set PATCH as operation to perform in Postman.&lt;/p&gt;

&lt;p&gt;This is the endpoint we must use the PATCH operation with: &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/{{LearningProviderId}}/learningContents(externalId='externalId')&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;There are a couple of placeholders we must replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;LearningProviderId&lt;/code&gt; is the unique identifier of the learning provider this content belongs to. This is the ID that the &lt;code&gt;learningProviders&lt;/code&gt; API has previously returned to us when we have created the new provider.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;externalId&lt;/code&gt; is an ID which uniquely identifies our content. This is an external id, meaning that it's typically the ID that our learning provider uses to identify this content, which is different from the internal one used by the Viva Learning APIs.&lt;/li&gt;
&lt;/ul&gt;
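&lt;p&gt;Putting the two identifiers together, the PATCH target can be composed like this (a hypothetical helper, using the sample ids from this article):&lt;/p&gt;

```python
# Sketch: compose the upsert URL for learning content. The provider id and
# external id are the sample values used throughout this article.

GRAPH_BETA = "https://graph.microsoft.com/beta"

def learning_content_url(provider_id, external_id):
    """PATCH target that creates or updates content, keyed by externalId."""
    return (f"{GRAPH_BETA}/employeeExperience/learningProviders/"
            f"{provider_id}/learningContents(externalId='{external_id}')")

url = learning_content_url("303da5b7-dce3-4998-a76f-7a18849fc697", "3617543")
```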

&lt;p&gt;It's critical to specify the &lt;code&gt;externalId&lt;/code&gt; as part of the Microsoft Graph URL. Without it, the PATCH operation will fail to create new content. If you try to submit a request only by using the URL &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/{{LearningProviderId}}/learningContents&lt;/code&gt;, it will fail. In the  &lt;strong&gt;Body&lt;/strong&gt;  tab of the request, we must specify the JSON with the information about the content we want to make available. The full payload can contain a lot of information, as highlighted &lt;a href="https://learn.microsoft.com/en-us/graph/api/learningcontent-update?view=graph-rest-beta"&gt;in the official docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's use the knowledge we have acquired to add as content the &lt;a href="https://dev.to/qmatteoq/create-viva-connections-extensions-using-spfx-with-windows-and-wsl-4a3a-temp-slug-8504440"&gt;following article&lt;/a&gt; from this blog. We're going to add the content to our existing learning provider (so we're going to use &lt;code&gt;303da5b7-dce3-4998-a76f-7a18849fc697&lt;/code&gt; as &lt;code&gt;learningProviderId&lt;/code&gt;) and we're going to use, as external id, the unique identifier of the post which is contained in the URL (in this case, it's &lt;code&gt;3617543&lt;/code&gt;). Based on this information, this is the endpoint we're going to reach:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/303da5b7-dce3-4998-a76f-7a18849fc697/learningContents(externalId='3617543')&lt;/code&gt;&lt;/p&gt;


&lt;p&gt;And this is the JSON payload we're going to submit as body, with the minimum set of information required by the API: title, URL of the content and language.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "title": "Create Viva Connections extensions using SPFx with Windows and WSL",
    "contentWebUrl": "https://techcommunity.microsoft.com/t5/windows-dev-appconsult/create-viva-connections-extensions-using-spfx-with-windows-and/ba-p/3617543",
    "languageTag": "en-us"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When you hit send, if the request is successful, you will get back a response with the same JSON, plus some additional information generated by the Viva Learning APIs, like the internal unique identifier of the content. If you want to double check that the content was indeed added, you can get the list of contents which belong to a content provider by performing a  &lt;strong&gt;GET&lt;/strong&gt;  operation against the following endpoint: &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/{{learningProviderId}}/learningContents&lt;/code&gt;. Also in this case, you must replace &lt;code&gt;learningProviderId&lt;/code&gt; with the unique id of your learning provider. If we perform a GET operation against our learning provider (&lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/303da5b7-dce3-4998-a76f-7a18849fc697/learningContents&lt;/code&gt;), we're going to get the following response:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#employeeExperience/learningProviders('303da5b7-dce3-4998-a76f-7a18849fc697')/learningContents",
    "value": [
        {
            "id": "3ff3c4ff-1e34-4738-aa31-3de087ba7ab3",
            "externalId": "3617543",
            "title": "Create Viva Connections extensions using SPFx with Windows and WSL",
            "contentWebUrl": "https://techcommunity.microsoft.com/t5/windows-dev-appconsult/create-viva-connections-extensions-using-spfx-with-windows-and/ba-p/3617543",
            "languageTag": "en-us",
            "description": "",
            "sourceName": "Modern Work App Consult",
            "thumbnailWebUrl": null,
            "numberOfPages": 0,
            "duration": "PT0S",
            "format": null,
            "createdDateTime": "0001-01-01T00:00:00Z",
            "lastModifiedDateTime": "0001-01-01T00:00:00Z",
            "contributors": [],
            "additionalTags": [],
            "skillTags": [],
            "isActive": true,
            "isPremium": false,
            "isSearchable": true
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we want to update the content (for example, to add some of the fields we missed out, like &lt;code&gt;duration&lt;/code&gt; or &lt;code&gt;contributors&lt;/code&gt;), we just need to repeat the same &lt;strong&gt;PATCH&lt;/strong&gt; operation again. The only difference is that this time, we have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep using the endpoint with the &lt;code&gt;externalId&lt;/code&gt;, like &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/303da5b7-dce3-4998-a76f-7a18849fc697/learningContents(externalId='3617543')&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Switch to the internal id, which can be passed after the &lt;code&gt;learningContents&lt;/code&gt; portion of the URL: &lt;code&gt;https://graph.microsoft.com/beta/employeeExperience/learningProviders/303da5b7-dce3-4998-a76f-7a18849fc697/learningContents/3ff3c4ff-1e34-4738-aa31-3de087ba7ab3&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
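&lt;p&gt;Both forms can be captured in a small helper (illustrative only; the ids are the sample values from this article):&lt;/p&gt;

```python
# Sketch: the same PATCH can target content either by externalId or by the
# internal id Graph generated; this builds whichever form you have at hand.

BASE = "https://graph.microsoft.com/beta/employeeExperience/learningProviders"

def update_url(provider_id, external_id=None, internal_id=None):
    """Build the learning-content update URL from whichever id is available."""
    if external_id is not None:
        return f"{BASE}/{provider_id}/learningContents(externalId='{external_id}')"
    if internal_id is not None:
        return f"{BASE}/{provider_id}/learningContents/{internal_id}"
    raise ValueError("either external_id or internal_id is required")

by_external = update_url("303da5b7-dce3-4998-a76f-7a18849fc697",
                         external_id="3617543")
by_internal = update_url("303da5b7-dce3-4998-a76f-7a18849fc697",
                         internal_id="3ff3c4ff-1e34-4738-aa31-3de087ba7ab3")
```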

&lt;h3&gt;
  
  
  Testing the work in the Viva Learning app
&lt;/h3&gt;

&lt;p&gt;To test the outcome of your work, you can use the Viva Learning app which is available in Microsoft Teams. If you don't have it, you can search for it in the Apps section and install it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3MmnJN8l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421586i0C455683E076E575/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3MmnJN8l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421586i0C455683E076E575/image-size/large%3Fv%3Dv2%26px%3D999" alt="viva-learning.png" title="viva-learning.png" width="755" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have opened it, you should be able to see in the catalogue the new content provider with the related content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ji0stQsh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421587i5FE65F63D0306042/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ji0stQsh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/421587i5FE65F63D0306042/image-size/large%3Fv%3Dv2%26px%3D999" alt="custom-provider-viva.png" title="custom-provider-viva.png" width="607" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Very important!&lt;/strong&gt;  The Viva Learning engine takes a while to synchronize the content coming from external providers. As such, if you open the Viva Learning app immediately after having added the content with the Microsoft Graph, you won't be able to see it. You must be patient and wait for a bit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manage the AAD app registration
&lt;/h3&gt;

&lt;p&gt;There's a very important caveat to keep in mind when you are working with the Viva Learning APIs in Microsoft Graph. When you create a content provider, you must use the same AAD app registration to perform any operation related to the content of the provider, like adding new content or listing the existing one. If you, let's say, create a content provider with an application, but then you try to add some content to it with another one, you will get back a forbidden error, like the following one:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "error": {
        "code": "forbidden",
        "message": "Forbidden",
        "innerError": {
            "date": "2022-11-21T19:21:39",
            "request-id": "c244018f-ab2a-4de8-9358-ca69835b2bbd",
            "client-request-id": "c244018f-ab2a-4de8-9358-ca69835b2bbd"
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As an example, this happened to me when I created the Learning Provider using &lt;a href="https://developer.microsoft.com/en-us/graph/graph-explorer"&gt;Graph Explorer&lt;/a&gt; (which is backed by its own AAD app registration), but then I tried to add some content using Postman, backed by the AAD app registration I created on my tenant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;In this post we have learned the basics of how to use the Microsoft Graph APIs to expose custom learning paths to the Viva Learning application for our employees. This post was useful to understand the basic concepts behind these APIs but, in a real scenario, you won't use Postman to work with them. In the next article, we're going to explore a more realistic implementation: we're going to build a web application based on Blazor that we can use to manage our custom learning provider and its content.&lt;/p&gt;

&lt;p&gt;Stay tuned and happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create Viva Connections extensions using SPFx with Windows and WSL</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Mon, 05 Sep 2022 12:20:47 +0000</pubDate>
      <link>https://dev.to/qmatteoq/create-viva-connections-extensions-using-spfx-with-windows-and-wsl-1i87</link>
      <guid>https://dev.to/qmatteoq/create-viva-connections-extensions-using-spfx-with-windows-and-wsl-1i87</guid>
      <description>&lt;p&gt;When it comes to building web applications which heavily rely on platforms like Node.js, a Linux environment is often the best choice. These platforms were created with the Linux ecosystem as main target and, as such, tools like NPM (the Node Package Manager) provide much better performances than when they are executed on Windows due to the different file system implementation. On Windows, however, we have a much better graphical experience and a richer ecosystem of applications that we can use for our development purposes, from Visual Studio Code to all the major browsers.&lt;/p&gt;

&lt;p&gt;Luckily, thanks to the Windows Subsystem for Linux, we don't have to choose anymore or go through the hassle of setting up a virtual machine. We can run a Linux environment right inside our Windows installation.&lt;/p&gt;

&lt;p&gt;In this post, we'll focus on setting up WSL properly to build Viva Connections extensions (or any other extension based on SPFx, the SharePoint Framework), so that we can get the best of both worlds: the Windows experience, but Linux performance when it comes to building and running our project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up WSL
&lt;/h3&gt;

&lt;p&gt;WSL is available both on Windows 10 and Windows 11. The team has recently enabled a new installation experience so, if you don't already have it on your machine, all you have to do is open a terminal with administrator rights and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl --install

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command will install Ubuntu as the default distribution but, if you prefer, you can pick another one by passing an extra parameter. You will find all the information and options &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/install"&gt;in the official docs&lt;/a&gt;. During the process, you will be asked to create a new administrator user and to choose a password.&lt;/p&gt;

&lt;p&gt;The best tool to work with WSL is Windows Terminal, which is the default terminal in Windows 11. If, instead, you're using Windows 10, you can manually install it &lt;a href="https://aka.ms/terminal"&gt;from the Microsoft Store&lt;/a&gt;. After you have installed WSL, a new tab will be automatically added to open up a terminal on your Linux installation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tiIkZkR0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401589iD7EB138125D4C524/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tiIkZkR0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401589iD7EB138125D4C524/image-size/large%3Fv%3Dv2%26px%3D999" alt="Terminal-WSL.png" title="Terminal-WSL.png" width="476" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the development environment
&lt;/h3&gt;

&lt;p&gt;Setting up the development environment to build projects with SPFx (including Viva Connections extensions) isn't different from setting up a Windows or macOS machine, so we can follow the &lt;a href="https://docs.microsoft.com/en-us/sharepoint/dev/spfx/set-up-your-development-environment"&gt;official guidance&lt;/a&gt;. As the first step, let's install Node.js. However, instead of installing the standard version, we're going to leverage &lt;a href="https://github.com/nvm-sh/nvm"&gt;nvm&lt;/a&gt;, a tool that you can use to install multiple versions of Node.js on the same machine. The main motivation is that, by default, you can have only a single Node.js installation on a machine; however, some frameworks might require different versions to work properly. With nvm, we can easily manage this scenario, since it allows us to install multiple versions of Node.js and then, with a single command, make one of them the "default" version for the system.&lt;/p&gt;

&lt;p&gt;Open Windows Terminal, make sure to open a tab on Ubuntu and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the operation is completed, let's install a Node.js version. We can use version 16.x for SPFx, so let's run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nvm install 16

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Since this is the only version we have, it will be automatically set as the default version for the machine. If, in the future, we install a framework which requires Node.js v14.x, we can just run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nvm install 14
nvm use 14

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to check, at any point in time, which version of Node.js you're running, you can just type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node -v

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have Node.js up and running, we can leverage NPM to install the three tools which are required to work with SPFx:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gulp&lt;/strong&gt; : it's a task runner based on JavaScript. It's used by SPFx to build projects, create bundles and run the testing environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Yeoman&lt;/strong&gt; : it's a tool to scaffold new projects, by providing a series of steps that the developer can follow to configure it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Yeoman SharePoint Generator&lt;/strong&gt; : it's a specific implementation of the Yeoman generator for SharePoint. It contains various templates to create projects for SharePoint and Viva, like web parts or Adaptive Card Extensions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can install all of them by running a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install gulp-cli yo @microsoft/generator-sharepoint --global

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tools will be installed globally, which means that you will be able to run them from any location on your machine.&lt;/p&gt;

&lt;p&gt;Now that we have all the tools we need, we can start the creation of a new project. We're going to build an extension for Viva Connections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a new project
&lt;/h3&gt;

&lt;p&gt;As a developer, you might already have a folder on your Windows machine (called &lt;code&gt;src&lt;/code&gt; or &lt;code&gt;Source&lt;/code&gt;, for example) where you store code projects. As such, when you start a new SPFx based project on WSL, you might be tempted to look for a way to store the project in the same folder. This is indeed possible: the local Windows drives are automatically mounted on WSL, so with a command like &lt;code&gt;cd /mnt/c/Users/&amp;lt;your_username&amp;gt;&lt;/code&gt; you will be able to access your Windows user folder and all its subfolders.  &lt;strong&gt;Don't do that!&lt;/strong&gt;  You will lose all the performance advantages which led you, in the first place, to adopt WSL to build your SPFx based projects. To gain the maximum file system performance, in fact, you must leverage the native Linux file system, which means that you will have to create your project in your Ubuntu user folder, instead of using the Windows one.&lt;/p&gt;
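&lt;p&gt;If you're curious about how big the gap actually is, you can measure it yourself with a quick (and admittedly unscientific) file-churn test: run the snippet below once inside your Linux home folder and once inside a folder under &lt;code&gt;/mnt/c&lt;/code&gt;, then compare the timings. The folder name is just a placeholder:&lt;/p&gt;

```shell
# Create a scratch folder and time the creation of 500 small files.
# Run this once under ~ (native Linux FS) and once under /mnt/c (mounted Windows FS).
mkdir -p fs-bench
cd fs-bench
time sh -c 'for i in $(seq 1 500); do echo x > "file$i.txt"; done'
cd ..
rm -rf fs-bench
```

On the mounted Windows drive, the same loop typically takes noticeably longer, which is exactly why the project should live in the Linux file system.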

&lt;p&gt;When you open Windows Terminal on Ubuntu, you will be automatically logged in to your home folder, so all you must do is create a folder to host your code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir src
cd src

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's create a folder to host our project and navigate to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir viva-connections-hello-world
cd viva-connections-hello-world

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can generate our first project using Yeoman, by typing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yo @microsoft/sharepoint

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will be greeted by an ASCII user interface, which will ask a few questions to properly scaffold our new project. In this case, we're assuming we're going to build an Adaptive Card Extension for Viva Connections, so let's set the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What is your solution name?&lt;/strong&gt; : viva-connections-hello-world&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Which type of client-side component to create?&lt;/strong&gt; : Adaptive Card Extension&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Which template do you want to use?&lt;/strong&gt; : Basic Card template&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What is your Adaptive Card Extension name?&lt;/strong&gt; : HelloWorld&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you have provided all the information, Yeoman will scaffold the project and will automatically use NPM to restore all the required dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R-pa5guD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401590iA4A565DD7E72B24E/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R-pa5guD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401590iA4A565DD7E72B24E/image-size/large%3Fv%3Dv2%26px%3D999" alt="Terminal-Yeoman.png" title="Terminal-Yeoman.png" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a project, we can build it and test it. However, there's an extra step we must take care of first: trusting the development certificate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Working with the development certificate
&lt;/h3&gt;

&lt;p&gt;When you are debugging a project created with SPFx, you leverage a SharePoint feature called Workbench: you're going to run the extension on your real SharePoint environment, but the code will be served by your local machine. This will allow you to test the extension in a real environment, but taking advantage of all the benefits of a local debugging system, like breakpoints, watchers, live reload, etc. To effectively use this feature, Yeoman generates a development certificate, which must be trusted by your own machine, otherwise your local environment won't be considered secure, and you will get all kinds of warnings and issues. Gulp provides a command to generate the development certificate. Go back to your Ubuntu environment in Windows Terminal and, in the root of your project, type the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gulp trust-dev-cert

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will generate the certificate, but when you run it on Linux it won't automatically trust it. As such, we must take a couple of extra steps, which Don Kirkham (Microsoft MVP) highlighted &lt;a href="https://www.donkirkham.com/blog/spfx-docker"&gt;in his excellent post&lt;/a&gt; about working with SPFx on Docker. In our case we aren't using Docker, but we can apply the same technique to manage our certificate in WSL.&lt;/p&gt;

&lt;p&gt;The first step is to copy the generated certificate to the root of the project, so that Linux can leverage it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp ~/.rushstack/rushstack-serve.pem ./spfx-dev-cert.pem

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will place the certificate in the root of our project and rename it to &lt;code&gt;spfx-dev-cert.pem&lt;/code&gt;. However, this isn't enough. Linux will indeed serve the extension, but in our case, Windows will consume it. When we start testing it, in fact, we won't use a browser running inside the Linux environment, but our favorite browser (Edge, Chrome, Firefox, etc.) running on our Windows machine. As such, we must trust this certificate on Windows as well, if we want to avoid warnings and errors from our browser that the content is coming from an unsafe source. The first step is to convert the generated certificate into a format that Windows can understand, since the .pem one we have works only for Linux. This is the command we can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl x509 -inform PEM -in ~/.rushstack/rushstack-serve.pem -outform DER -out ./spfx-dev-cert.cer

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
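&lt;p&gt;If you want to sanity-check the conversion before touching the real SPFx certificate, you can rehearse it with a throwaway self-signed certificate (the file names below are hypothetical, not the ones used by SPFx):&lt;/p&gt;

```shell
# Generate a disposable self-signed certificate, just for the rehearsal:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" -keyout demo-key.pem -out demo-cert.pem

# Apply the same PEM-to-DER conversion used for the SPFx dev certificate:
openssl x509 -inform PEM -in demo-cert.pem -outform DER -out demo-cert.cer

# Read the DER file back to confirm the encoding is valid:
openssl x509 -inform DER -in demo-cert.cer -noout -subject
```

The last command should print the certificate subject (CN=localhost), confirming that the .cer file is a well-formed DER certificate of the kind the Windows import wizard expects.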



&lt;p&gt;Now, in the root of our project, we'll also have a file called &lt;code&gt;spfx-dev-cert.cer&lt;/code&gt;, which is the certificate we can use on Windows. To install it, we can leverage a feature that was recently added to Windows 10 and Windows 11: File Explorer integration. If you're using a recent version of Windows, you will see the following icon in File Explorer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Tzyx0FJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401591i6C775990F6A0E57E/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Tzyx0FJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401591i6C775990F6A0E57E/image-size/large%3Fv%3Dv2%26px%3D999" alt="Linux.png" title="Linux.png" width="150" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By clicking on it, you will be able to browse the Linux file system. As such, you can head to the path where you have created your Viva Connections extension (which will be something like &lt;code&gt;\home\&amp;lt;your username&amp;gt;\src\viva-connections-hello-world&lt;/code&gt;) to see the files which belong to your project. Double click on the &lt;code&gt;spfx-dev-cert.cer&lt;/code&gt; file and follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on  &lt;strong&gt;Install certificate&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose  &lt;strong&gt;Local machine&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose  &lt;strong&gt;Place all certificates in the following store&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click  &lt;strong&gt;Browse&lt;/strong&gt;  and select the store called  &lt;strong&gt;Trusted Root Certification Authorities&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click  &lt;strong&gt;Next&lt;/strong&gt; , then  &lt;strong&gt;Finish&lt;/strong&gt;  to complete the import process.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You're all set! Now you're ready to launch and debug your extension.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debug your extension
&lt;/h3&gt;

&lt;p&gt;Now we can use gulp to launch the debugging experience of our project. Back to the Ubuntu terminal, type the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gulp serve -nobrowser

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Gulp will package the project and serve it through your localhost environment. One of the powerful features of WSL is that it's able to automatically tunnel the Linux network to your Windows machine. As such, even if your Viva Connections extension is being served by a process running in your Linux environment, it will be visible to your browser running on Windows. To test this, you can simply open your browser on Windows and navigate to the following URL: &lt;code&gt;https://localhost:4321/temp/manifests.js&lt;/code&gt;. If you did everything correctly, two things will happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The browser will display the content of the manifest file for the Viva Connections extension.&lt;/li&gt;
&lt;li&gt;The connection will be reported as safe since the development certificate being used is trusted.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uvv-42h4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401593iC12A23B209B61341/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uvv-42h4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401593iC12A23B209B61341/image-size/large%3Fv%3Dv2%26px%3D999" alt="BrowserTest.png" title="BrowserTest.png" width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have verified that the server is working properly, point your favorite browser to the SharePoint workbench, whose URL is &lt;code&gt;https://&amp;lt;yourdomain&amp;gt;.sharepoint.com/_layouts/workbench.aspx&lt;/code&gt;. Then click on the + sign to add a new component to the page. You should see your extension available under a section called  &lt;strong&gt;Local&lt;/strong&gt; , as in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wBKGsxu5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401594i8E4D20E1084C13ED/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wBKGsxu5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401594i8E4D20E1084C13ED/image-size/large%3Fv%3Dv2%26px%3D999" alt="LocalDev.png" title="LocalDev.png" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's great! Now we can fully leverage the power of the Windows ecosystem to test a web project which, instead, is being served by a Linux environment. But wait, what if we want to make some changes to the project? Surely, we don't want to deploy a HelloWorld solution to our customers!&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Visual Studio Code with WSL
&lt;/h3&gt;

&lt;p&gt;Visual Studio Code is the best editor to work with web projects and, in the context of WSL, it's even better. Visual Studio Code, in fact, supports the ability to remotely connect to a WSL instance, so that you can have your development experience on Windows, but edit, test and debug the code running on Linux. The first thing you need, of course, is Visual Studio Code installed on your Windows machine. Then move to the Ubuntu terminal and simply type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;code .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first time you run this command, WSL will install the Visual Studio Code Server, which is required for the remote experience. Once the operation is completed, Visual Studio Code will open. Compared to a standard execution, however, you will notice the following icon in the lower left corner:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fxPsVm4B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401595i4087F519CCCEE56A/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fxPsVm4B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401595i4087F519CCCEE56A/image-size/large%3Fv%3Dv2%26px%3D999" alt="WSLRemoting.png" title="WSLRemoting.png" width="283" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Visual Studio Code is connected to our Ubuntu instance, but it's still providing a full development experience. Through the panel on the left, we can see all the files that belong to our project and edit them. If we open a terminal instance within Code, it will run in the Linux environment. You can also launch a debugging session using the built-in  &lt;strong&gt;Hosted workbench&lt;/strong&gt;  target, which will enable you to set breakpoints, add watchers, etc. To get the full debugging experience, you first have to open the &lt;code&gt;launch.json&lt;/code&gt; file under the &lt;code&gt;.vscode&lt;/code&gt; folder. You will find a property called &lt;code&gt;url&lt;/code&gt; set in the following way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"url": "https://enter-your-SharePoint-site/_layouts/workbench.aspx",

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
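&lt;p&gt;For reference, the &lt;code&gt;url&lt;/code&gt; property sits inside the  &lt;strong&gt;Hosted workbench&lt;/strong&gt;  configuration. A scaffolded &lt;code&gt;launch.json&lt;/code&gt; looks roughly like the following sketch (the exact debugger type and options vary between SPFx versions, so treat your generated file as the source of truth):&lt;/p&gt;

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Hosted workbench",
      "type": "chrome",
      "request": "launch",
      "url": "https://enter-your-SharePoint-site/_layouts/workbench.aspx",
      "webRoot": "${workspaceRoot}",
      "sourceMaps": true
    }
  ]
}
```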



&lt;p&gt;Replace the placeholder with the URL of your SharePoint site. The next step is to launch gulp, since the debugger won't do it for us automatically. In Visual Studio Code click on  &lt;strong&gt;Terminal → New terminal&lt;/strong&gt; , which will open a new terminal instance on our WSL installation. Type the usual command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gulp serve -nobrowser

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, once gulp is up &amp;amp; running, move to the  &lt;strong&gt;Run and Debug&lt;/strong&gt;  tab in Visual Studio Code and click the play symbol near  &lt;strong&gt;Hosted workbench&lt;/strong&gt; :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--60ZAl7cI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401596i96BB9552DF931F6C/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--60ZAl7cI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401596i96BB9552DF931F6C/image-size/large%3Fv%3Dv2%26px%3D999" alt="HostedWorkbench.png" title="HostedWorkbench.png" width="459" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The debugger will start, and it will launch a new browser instance on your SharePoint workbench. You will be asked to login to your SharePoint tenant and then you will be able to add your custom component to the page. Once the component gets added, you will trigger any breakpoint that you have placed in your project. As an example, you can try to place it in the &lt;code&gt;onInit&lt;/code&gt; function declared in the file &lt;code&gt;src/adaptiveCardExtensions/helloWorld/HelloWorldAdaptiveCardExtension.ts&lt;/code&gt;, which gets called when the component is rendered. As soon as you add the component to the page, you will see the breakpoint being hit, giving you the option to explore the content of each variable, the threads, the call stack, etc:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q08meJw6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401597i5E3BE4A16333F4BC/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q08meJw6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/401597i5E3BE4A16333F4BC/image-size/large%3Fv%3Dv2%26px%3D999" alt="Debugging.png" title="Debugging.png" width="800" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;Linux is a great platform when it comes to hosting web applications, and working with SharePoint and Viva Connections is no exception. Thanks to WSL, we can get the best of both worlds: host our extensions in a Linux environment, so that we get maximum performance, while still using the great tools and usability provided by Windows. And we don't have to switch context or create our own virtual machine: we simply open a new tab in our terminal, and WSL does all the heavy lifting to blend the two ecosystems.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Access to XAML controls in a React Native for Windows application (Part 2)</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Fri, 20 May 2022 13:29:38 +0000</pubDate>
      <link>https://dev.to/qmatteoq/access-to-xaml-controls-in-a-react-native-for-windows-application-part-2-1m0h</link>
      <guid>https://dev.to/qmatteoq/access-to-xaml-controls-in-a-react-native-for-windows-application-part-2-1m0h</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/qmatteoq/access-to-xaml-controls-in-a-react-native-for-windows-application-5c3p-temp-slug-8151335"&gt;In the previous blog post&lt;/a&gt; I showed one approach to manage a scenario that can't be done in React Native for Windows out of the box: getting access to the native XAML controls behind the JSX ones, so that you can support scenarios like showing a flyout near another control.&lt;/p&gt;

&lt;p&gt;Thanks to the feedback from &lt;a href="https://twitter.com/moyessa"&gt;Steven Moyes&lt;/a&gt; and &lt;a href="https://twitter.com/alexsklar"&gt;Alexander Sklar&lt;/a&gt; from the React Native for Windows PG, I've found out that there's a better way to achieve the same goal. Alexander is the main maintainer of an open-source library, published by Microsoft, called &lt;a href="https://github.com/microsoft/react-native-xaml"&gt;react-native-xaml&lt;/a&gt;, which enables you to bring any XAML control into your React Native for Windows applications. The only caveat is that such a UI will no longer be truly cross-platform, since these controls can't be rendered on Android, iOS or macOS. However, the same limitation applies to the approach we have seen in the previous post, since &lt;code&gt;Flyout&lt;/code&gt; is a control which is available only on Windows.&lt;/p&gt;

&lt;p&gt;Thanks to react-native-xaml, you can easily build a component like the following one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React from 'react';
import { DatePicker } from 'react-native-xaml';
import { View } from 'react-native';

const MyXamlComponent = () =&amp;gt; {

    return (
    &amp;lt;View&amp;gt;
        &amp;lt;DatePicker dayVisible={false} onSelectedDateChanged={(args) =&amp;gt; console.log(args.nativeEvent.args.newDate) } /&amp;gt;
      &amp;lt;/View&amp;gt;
    );

}

export default MyXamlComponent;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are importing the &lt;code&gt;DatePicker&lt;/code&gt; control from the react-native-xaml library, which is mapped to the native &lt;a href="https://docs.microsoft.com/en-us/windows/winui/api/microsoft.ui.xaml.controls.datepicker?view=winui-3.0"&gt;DatePicker&lt;/a&gt; control in WinUI. Thanks to the library, we also have access to the same properties and events we have in XAML, but exposed in camel case to follow the JavaScript naming conventions. In the previous snippet, you can see we're setting the &lt;code&gt;dayVisible&lt;/code&gt; property to hide the day from the selection, and we're subscribing to the &lt;code&gt;onSelectedDateChanged&lt;/code&gt; event to get notified any time the user selects a date. We even have access to the event arguments, through the &lt;code&gt;nativeEvent.args&lt;/code&gt; property. Now we have a fully fledged &lt;code&gt;DatePicker&lt;/code&gt; control available in our application, without having to write a custom native module:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mRrvSUYo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373609iA03F2C5A88E58828/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mRrvSUYo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373609iA03F2C5A88E58828/image-size/large%3Fv%3Dv2%26px%3D999" alt="datepicker.png" title="datepicker.png" width="500" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Is it possible to use react-native-xaml to implement the scenario we have seen yesterday with the &lt;code&gt;Flyout&lt;/code&gt; control? Yes!&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting a flyout to an existing control
&lt;/h3&gt;

&lt;p&gt;Thanks to react-native-xaml, we can import true XAML controls in our application, which means that they behave in the same way they would do in a native XAML application. As such, we can simply attach a flyout in the same way we would do in a XAML page. Let's see the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, {useState} from 'react';
import { TextBox, MenuFlyout, MenuFlyoutItem } from 'react-native-xaml';
import { View } from 'react-native';

const XamlNotificationComponent = () =&amp;gt; {

    return (
    &amp;lt;View&amp;gt;
      &amp;lt;TextBox text="this is a textbox with a menuFlyout"&amp;gt;
        &amp;lt;MenuFlyout&amp;gt;
          &amp;lt;MenuFlyoutItem text="option 1" onClick={() =&amp;gt; { alert('clicked 1'); }} /&amp;gt;
          &amp;lt;MenuFlyoutItem text="option 2" onClick={() =&amp;gt; { alert("clicked 2");}} /&amp;gt;
        &amp;lt;/MenuFlyout&amp;gt;
        &amp;lt;/TextBox&amp;gt;

      &amp;lt;/View&amp;gt;
    );
}

export default XamlNotificationComponent;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we import a few controls from the react-native-xaml library: &lt;code&gt;TextBox&lt;/code&gt;, &lt;code&gt;MenuFlyout&lt;/code&gt; and &lt;code&gt;MenuFlyoutItem&lt;/code&gt;. Then, exactly like we would do in XAML, we define a new &lt;code&gt;MenuFlyout&lt;/code&gt; control and set it as a child of the &lt;code&gt;TextBox&lt;/code&gt; control. This way, we anchor the flyout to the &lt;code&gt;TextBox&lt;/code&gt;, so that it will be displayed close to it. Like in XAML, we can subscribe to the &lt;code&gt;onClick&lt;/code&gt; event to trigger an action when an item is selected. In this case, we just display a popup with a message.&lt;/p&gt;

&lt;p&gt;We have created the flyout, now we must make it visible. Let's expand the code to add a Button to our component, which will trigger the flyout once it's clicked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, {useState} from 'react';
import { TextBox, MenuFlyout, MenuFlyoutItem, Button } from 'react-native-xaml';
import { View } from 'react-native';

const XamlNotificationComponent = () =&amp;gt; {
    const [isOpen, setIsOpen] = useState(false);

    return (
    &amp;lt;View&amp;gt;
      &amp;lt;TextBox text="this is a textbox with a menuFlyout"&amp;gt;
        &amp;lt;MenuFlyout isOpen={isOpen} onClosed={() =&amp;gt; { setIsOpen(false); }}&amp;gt;
          &amp;lt;MenuFlyoutItem text="option 1" onClick={() =&amp;gt; { alert('clicked 1'); }} /&amp;gt;
          &amp;lt;MenuFlyoutItem text="option 2" onClick={() =&amp;gt; { alert("clicked 2");}}/&amp;gt;
        &amp;lt;/MenuFlyout&amp;gt;
      &amp;lt;/TextBox&amp;gt;
      &amp;lt;Button content="Show flyout" onClick={() =&amp;gt; { setIsOpen(true); }} /&amp;gt;

      &amp;lt;/View&amp;gt;
    );

}

export default XamlNotificationComponent;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compared to the first snippet, we have made the following changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the &lt;code&gt;useState()&lt;/code&gt; React hook, we have created a new state property called &lt;code&gt;isOpen&lt;/code&gt;, which we'll use to track whether the flyout is open or closed.&lt;/li&gt;
&lt;li&gt;We have bound the &lt;code&gt;isOpen&lt;/code&gt; property in the state to the &lt;code&gt;isOpen&lt;/code&gt; property of the &lt;code&gt;MenuFlyout&lt;/code&gt; control. This keeps the two coordinated: when the state changes, the flyout is displayed or hidden automatically.&lt;/li&gt;
&lt;li&gt;Flyouts use a light-dismiss UI: they can be closed simply by clicking outside the flyout. To keep the state in sync with the control, we subscribe to the &lt;code&gt;onClosed&lt;/code&gt; event and set the &lt;code&gt;isOpen&lt;/code&gt; property in the state to &lt;code&gt;false&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We have added a &lt;code&gt;Button&lt;/code&gt; control and subscribed to its &lt;code&gt;onClick&lt;/code&gt; event. When it's clicked, we set the &lt;code&gt;isOpen&lt;/code&gt; property in the state to &lt;code&gt;true&lt;/code&gt;, which makes the flyout appear.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it! Now, when you click on the button, you will see the flyout being displayed on top of the &lt;code&gt;TextBox&lt;/code&gt; control, in the same way we did in the previous blog post, but without needing to create a custom native module:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qWPiA1zj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373610i4F17AD1D7471CCFD/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qWPiA1zj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373610i4F17AD1D7471CCFD/image-size/large%3Fv%3Dv2%26px%3D999" alt="flyout.png" title="flyout.png" width="500" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Displaying a flyout in a specific area of the application
&lt;/h3&gt;

&lt;p&gt;In the previous blog post, we used the reference approach supported by React Native to pass the JSX control to our native module, so that we could retrieve the underlying XAML control and show the flyout close to it. The reference approach can also be helpful when you want to use react-native-xaml to display a flyout in a specific area of the application. Let's see the code!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React from 'react';
import { View } from 'react-native';
import { MenuFlyout, MenuFlyoutItem, Button } from 'react-native-xaml';

class XamlNotificationWithRefComponent extends React.Component {

    constructor(props) {
        super(props);
        this.myFlyout = React.createRef();
    }
    showNotification = () =&amp;gt; {
        MenuFlyout.ShowAt(this.myFlyout, { point: { x: 120, y: 130 } });
    };

      render() {
        return (
            &amp;lt;View&amp;gt;
                &amp;lt;MenuFlyout ref={this.myFlyout}&amp;gt;
                    &amp;lt;MenuFlyoutItem text='option 1' /&amp;gt;
                    &amp;lt;MenuFlyoutItem text='option 2' /&amp;gt;
                &amp;lt;/MenuFlyout&amp;gt;
                &amp;lt;Button
                    content="Click me"
                    onClick={this.showNotification} /&amp;gt;
            &amp;lt;/View&amp;gt;
        );
    }
}

export default XamlNotificationWithRefComponent;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we create a &lt;code&gt;MenuFlyout&lt;/code&gt; control in JSX, using the react-native-xaml library. This time, however, we use the &lt;code&gt;React.createRef()&lt;/code&gt; method we learned about in the previous post to store a reference to the control in the &lt;code&gt;myFlyout&lt;/code&gt; variable. Once we have a reference, we can call the &lt;code&gt;ShowAt()&lt;/code&gt; method exposed by the base &lt;code&gt;MenuFlyout&lt;/code&gt; control, which requires two parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;MenuFlyout&lt;/code&gt; control we want to display. This is where we use the reference we have just acquired.&lt;/li&gt;
&lt;li&gt;The coordinates where we want to display it, using a &lt;code&gt;point&lt;/code&gt; object.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks to this code, we can display the flyout in any position we want, as in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_ZVwaTo7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373612i35E7E95F411AAE69/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_ZVwaTo7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373612i35E7E95F411AAE69/image-size/large%3Fv%3Dv2%26px%3D999" alt="flyout-coordinates.png" title="flyout-coordinates.png" width="500" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;react-native-xaml is an amazing library that gives you the best of both worlds: the great developer experience offered by React Native and the powerful XAML controls offered by WinUI. If you want to see more examples of how to use it, I recommend checking the guide &lt;a href="https://github.com/microsoft/react-native-xaml/blob/main/USAGE.md"&gt;in the GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I've also updated &lt;a href="https://github.com/qmatteoq/InAppNotifications"&gt;my sample&lt;/a&gt; to support the scenarios we've seen in this post.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Access to XAML controls in a React Native for Windows application</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Thu, 19 May 2022 15:07:27 +0000</pubDate>
      <link>https://dev.to/qmatteoq/access-to-xaml-controls-in-a-react-native-for-windows-application-5h15</link>
      <guid>https://dev.to/qmatteoq/access-to-xaml-controls-in-a-react-native-for-windows-application-5h15</guid>
      <description>&lt;p&gt;When you build a Windows application using React Native for Windows, typically you don't have to worry about the underline generated application. One of the powerful features of React Native is that it generates a truly native application, which means that the JSX controls you place in the UI are translated with the equivalent native XAML control. This feature helps to deliver great performance and a familiar Fluent look &amp;amp; feel to your applications.&lt;/p&gt;

&lt;p&gt;However, there are a few scenarios where you would need to access the underlying XAML infrastructure. Recently, I worked on an engagement with a customer who needed to show a flyout when specific actions occur. However, the flyout didn't have to be displayed in a fixed position, but near the control that generated the action (like the click of a button). Achieving this goal only using JSX isn't feasible for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;React Native is a technology to build cross-platform applications, while &lt;code&gt;Flyout&lt;/code&gt; is a specific Windows control. As such, by default, React Native doesn't expose a JSX equivalent.&lt;/li&gt;
&lt;li&gt;When you create a &lt;code&gt;Flyout&lt;/code&gt;, you must link it to another XAML control (like a &lt;code&gt;Button&lt;/code&gt;) by setting its &lt;code&gt;Flyout&lt;/code&gt; property or by passing it as a parameter to the &lt;code&gt;ShowAt()&lt;/code&gt; method. We can't access this information in JSX.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this blog post we'll learn how to support this scenario, by combining a native module and the &lt;code&gt;findNodeHandle()&lt;/code&gt; React Native function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the native module
&lt;/h3&gt;

&lt;p&gt;The first step is to create a native module, which we'll need in order to interact with the native XAML control. You can follow the guidance &lt;a href="https://microsoft.github.io/react-native-windows/docs/native-modules-setup"&gt;on the official website&lt;/a&gt; to create a new one and add support for Windows. Once you have the basic infrastructure, we can add a method to the module class to display the flyout, as in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace ReactNativeInAppNotifications
{
    [ReactModule("inappnotifications")]
    internal sealed class ReactNativeModule
    {

        private ReactContext _reactContext;

        [ReactInitializer]
        public void Initialize(ReactContext reactContext)
        {
            _reactContext = reactContext;
        }

        [ReactMethod("showNotification")]
        public void ShowNotification(int tag, string title)
        {
            _reactContext.Handle.UIDispatcher.Post(() =&amp;gt;
            {
                Flyout flyout = new Flyout
                {
                    Content = new TextBlock { Text = title }
                };

                var control = XamlUIService.FromContext(_reactContext.Handle).ElementFromReactTag(tag) as FrameworkElement;

                flyout.ShowAt(control);
            });
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, notice how the class is decorated with the &lt;code&gt;[ReactModule]&lt;/code&gt; attribute, while the &lt;code&gt;ShowNotification()&lt;/code&gt; method is decorated with the &lt;code&gt;[ReactMethod]&lt;/code&gt; attribute. These attributes expose the class and the method to the JavaScript layer of the application. The &lt;code&gt;ShowNotification()&lt;/code&gt; method accepts two parameters: one is the title of the notification, while the other is a tag, an index that we can use to reference a JSX control. We'll learn later in the post, when we talk about the JavaScript layer, how to pass this information to the native module.&lt;/p&gt;

&lt;p&gt;The next step is to create our &lt;code&gt;Flyout&lt;/code&gt; control. In the previous snippet, we're using a very simple approach: we set the &lt;code&gt;Content&lt;/code&gt; property with a new &lt;code&gt;TextBlock&lt;/code&gt; control, then we set its &lt;code&gt;Text&lt;/code&gt; property with the title that we're passing from the JavaScript layer. Since we're using native code, we have the flexibility to customize the &lt;code&gt;Flyout&lt;/code&gt; control as we prefer: we can use a more complex XAML tree to define the &lt;code&gt;Content&lt;/code&gt;, or we can customize other properties like &lt;code&gt;LightDismissOverlayMode&lt;/code&gt; or &lt;code&gt;ShowMode&lt;/code&gt;.&lt;/p&gt;
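&lt;p&gt;As a sketch of that flexibility (the content tree and property values below are illustrative choices, not part of the original sample), the flyout creation could be extended like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flyout flyout = new Flyout
{
    // A richer content tree instead of a single TextBlock
    Content = new StackPanel
    {
        Children =
        {
            new TextBlock { Text = title },
            new TextBlock { Text = "Click outside the flyout to dismiss it" }
        }
    },
    // Dim the rest of the UI while the flyout is open
    LightDismissOverlayMode = LightDismissOverlayMode.On,
    // Show the flyout without moving focus away from the anchor control
    ShowMode = FlyoutShowMode.Transient
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;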

&lt;p&gt;Now that we have our flyout, we need to display it near the control that triggered the action. As such, we need to use the tag we have received from the JavaScript layer to retrieve a reference to the corresponding XAML control. We can achieve this thanks to the &lt;code&gt;XamlUIService&lt;/code&gt; class offered by React Native for Windows. First, we get an instance from the current context (which is exposed by the &lt;code&gt;Handle&lt;/code&gt; property of the &lt;code&gt;ReactContext&lt;/code&gt; object that is set in the &lt;code&gt;Initialize()&lt;/code&gt; method of the module). Then, we call the &lt;code&gt;ElementFromReactTag()&lt;/code&gt; method, passing as parameter the tag we have received. Finally, we cast the result to a &lt;code&gt;FrameworkElement&lt;/code&gt; object, which is the base class of XAML controls.&lt;/p&gt;

&lt;p&gt;Now we have access to the underlying XAML control which triggered the event, so we can just pass it to the &lt;code&gt;ShowAt()&lt;/code&gt; method exposed by the &lt;code&gt;Flyout&lt;/code&gt; control. This will make sure that the &lt;code&gt;Flyout&lt;/code&gt; is displayed near the control.&lt;/p&gt;

&lt;p&gt;Let's see now the code that we have to write in the React Native layer to use the native module we have just written.&lt;/p&gt;

&lt;h3&gt;
  
  
  Invoke the native module from JavaScript
&lt;/h3&gt;

&lt;p&gt;Let's now create a React Native component that we can use to display the notification. Let's look at the code first, then we'll walk through it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React from 'react';
import {Button, NativeModules, findNodeHandle} from 'react-native';

class NotificationComponent extends React.Component {

    constructor(props) {
        super(props);
        this.myButton = React.createRef();
    }

  showNotification = async () =&amp;gt; {
    await NativeModules.inappnotifications.showNotification(findNodeHandle(this.myButton.current), 'Hello World');
  };

  render() {
    return (
      &amp;lt;Button
        title="Click me"
        onPress={this.showNotification}
        ref={this.myButton} /&amp;gt;
    );
  }
}

export default NotificationComponent;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to pass the JSX control which triggered the event to the native module, we need to use the concept of a reference in React Native. You can think of it like the &lt;code&gt;x:Name&lt;/code&gt; property in XAML: it's a way to directly reference a control from code. While this approach is frequently used on other platforms (like WPF or Windows Forms), it's not very common in React Native. In most scenarios, in fact, you won't need to access the control directly; instead, you'll use properties, events and state to implement the logic you need.&lt;/p&gt;

&lt;p&gt;However, there are scenarios (like this one) in which a reference is the only way to achieve a goal and, as such, React Native provides the infrastructure to implement it. This is achieved with two changes to the code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the constructor of the component, we define a variable (in this case, &lt;code&gt;myButton&lt;/code&gt;) and we initialize it by calling &lt;code&gt;React.createRef()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;In JSX, we set the &lt;code&gt;ref&lt;/code&gt; property of the control we want to reference (in this sample, a &lt;code&gt;Button&lt;/code&gt; control) with the same variable we have created in the constructor (in this case, &lt;code&gt;myButton&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we can access the control by using the &lt;code&gt;current&lt;/code&gt; property of the &lt;code&gt;myButton&lt;/code&gt; object. We have everything we need now, so we can call the &lt;code&gt;showNotification()&lt;/code&gt; method exposed by our native module, through the &lt;code&gt;NativeModules&lt;/code&gt; object offered by React Native. Remember that the syntax to use a native module is &lt;code&gt;NativeModules.&amp;lt;class name&amp;gt;.&amp;lt;method name&amp;gt;&lt;/code&gt;. In our example, it's &lt;code&gt;NativeModules.inappnotifications.showNotification&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Well, we have almost everything we need; there's one last thing to add. Remember that the &lt;code&gt;ShowNotification()&lt;/code&gt; method in the native module requires a tag to reference the XAML control? To translate our reference into a tag, we must use the &lt;code&gt;findNodeHandle()&lt;/code&gt; method offered by React Native. This is what the implementation looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;showNotification = async () =&amp;gt; {
await NativeModules.inappnotifications.showNotification(findNodeHandle(this.myButton.current), 'Hello World');
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get the tag, we invoke &lt;code&gt;findNodeHandle()&lt;/code&gt; passing, as parameter, the &lt;code&gt;current&lt;/code&gt; property of the &lt;code&gt;myButton&lt;/code&gt; reference we have created earlier.&lt;/p&gt;

&lt;p&gt;In the end, we just connect the &lt;code&gt;showNotification()&lt;/code&gt; method we have just created to the &lt;code&gt;onPress&lt;/code&gt; event of the &lt;code&gt;Button&lt;/code&gt; control, so that we can invoke it when it's clicked.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;That's it! Now, if you launch your React Native for Windows application and you press the button, you will see the flyout being displayed right at the top of the Button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qp0IJrBx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373197i4C571ACAA7D52D7F/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qp0IJrBx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/373197i4C571ACAA7D52D7F/image-size/large%3Fv%3Dv2%26px%3D999" alt="Flyout.png" title="Flyout.png" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can test the full code with the sample application &lt;a href="https://github.com/qmatteoq/InAppNotifications"&gt;on GitHub&lt;/a&gt;. Happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting full control over MSIX updates with the App Installer APIs</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Fri, 13 May 2022 14:59:04 +0000</pubDate>
      <link>https://dev.to/qmatteoq/getting-full-control-over-msix-updates-with-the-app-installer-apis-5hk2</link>
      <guid>https://dev.to/qmatteoq/getting-full-control-over-msix-updates-with-the-app-installer-apis-5hk2</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/windows/msix/app-installer/app-installer-file-overview"&gt;App Installer&lt;/a&gt; is a powerful technology which enables to streamline the deployment and update of applications packaged with MSIX. Thanks to App Installer, you can enable features which are typically reserved only to managed deployment platforms (like the Microsoft Store or Endpoint Manager), for instance automatic updates. By setting up the App Installer file in the proper way, you can let Windows automatically check the availability of updates and install them without any extra effort from the developer. It's enough to publish an updated version of the package on the original location (a website or a network share) to let Windows download and install it, based on the logic you have defined in the App Installer file (you can check for updates in background, when the application is launched, etc.).&lt;/p&gt;

&lt;p&gt;This approach is great for many scenarios, especially the ones in which you don't have access to the source code (for example, you're a system administrator managing the deployment of apps for the company). However, if you're a developer who is actively building and evolving your application, you might want more control over the update process. For instance, you may want to tell the user, within the application itself, if there's an update available.&lt;/p&gt;

&lt;p&gt;To support these scenarios, the Windows Runtime comes with a series of APIs that you can use to interact with App Installer: if your MSIX packaged application has been deployed using an App Installer file, you can leverage these APIs to perform tasks like checking if an update is available, triggering the update, etc.&lt;/p&gt;

&lt;p&gt;Let's explore this scenario in more detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Checking for an available update
&lt;/h3&gt;

&lt;p&gt;The heart of these APIs is the &lt;code&gt;Package&lt;/code&gt; class, which belongs to the &lt;code&gt;Windows.ApplicationModel&lt;/code&gt; namespace. This is a Windows Runtime namespace, so to access it you might need to make a few tweaks to your project based on the UI platform you've chosen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If it's a UWP or WinUI app built using Windows App SDK, then you're good to go. Both technologies offer built-in access to Windows Runtime APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If it's a WPF or Windows Forms application based on .NET Framework or .NET Core 3.x, you must install &lt;a href="https://www.nuget.org/packages/Microsoft.Windows.SDK.Contracts"&gt;a dedicated NuGet package&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If it's a WPF or Windows Forms application based on .NET 5 or .NET 6, you must set in the project's properties one of the target frameworks dedicated to Windows 10/11, like in the following sample:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
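&lt;p&gt;A minimal sketch of such a project file (the exact target framework moniker and the &lt;code&gt;UseWPF&lt;/code&gt; flag are illustrative assumptions; pick the moniker that matches the Windows SDK version you want to target):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk"&amp;gt;
  &amp;lt;PropertyGroup&amp;gt;
    &amp;lt;OutputType&amp;gt;WinExe&amp;lt;/OutputType&amp;gt;
    &amp;lt;!-- A Windows-specific target framework unlocks the Windows Runtime APIs --&amp;gt;
    &amp;lt;TargetFramework&amp;gt;net6.0-windows10.0.19041.0&amp;lt;/TargetFramework&amp;gt;
    &amp;lt;UseWPF&amp;gt;true&amp;lt;/UseWPF&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;
&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;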

&lt;p&gt;Now you can use the following code snippet to check if an updated version of the package is available via App Installer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async Task CheckForUpdates()
{
    Package package = Package.Current;
    PackageUpdateAvailabilityResult result = await package.CheckUpdateAvailabilityAsync();
    switch (result.Availability)
    {
        case PackageUpdateAvailability.Available:
        case PackageUpdateAvailability.Required:
            //update is available
            break;
        case PackageUpdateAvailability.NoUpdates:
            //no updates available
            break;
        case PackageUpdateAvailability.Unknown:
        default:
            break;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code is simple. First, we get a reference to the current package, using the &lt;code&gt;Package.Current&lt;/code&gt; singleton. This object enables us to access all the properties related to the MSIX package and the identity of the application. Then we call the &lt;code&gt;CheckUpdateAvailabilityAsync()&lt;/code&gt; method, which returns a &lt;code&gt;PackageUpdateAvailabilityResult&lt;/code&gt; object that includes an &lt;code&gt;Availability&lt;/code&gt; property, an enum value. If we get &lt;code&gt;Available&lt;/code&gt; or &lt;code&gt;Required&lt;/code&gt;, it means there's an update available. As you can see, we don't have to specify the URL to check for update availability. The API automatically uses the App Installer URL which is linked to the application; Windows stores this connection when we install an MSIX-packaged application through an App Installer file.&lt;/p&gt;

&lt;p&gt;Thanks to this code, we can implement our own logic to communicate the information to our users: we can display a pop-up or a notification, we can tell them to restart the app so that Windows will download and install the update, etc.&lt;/p&gt;

&lt;p&gt;But what if you want to take full control of the update process as well? Let's see how we can do it!&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the update from code
&lt;/h3&gt;

&lt;p&gt;The App Installer APIs enable us not only to check if an update is available, but also to install it. This feature can be used alongside the automatic update feature provided by App Installer, or independently.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the first scenario, you will light up the App Installer APIs but, at the same time, you will define update rules in the App Installer file, like in the following example:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the second scenario, you won't have any &lt;code&gt;UpdateSettings&lt;/code&gt; section in the XML file, which will simply look like this:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
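&lt;p&gt;As a sketch of the first scenario, an App Installer file with update rules could look like the following (names, URLs and version numbers are placeholders); for the second scenario, the file would be identical but without the &lt;code&gt;UpdateSettings&lt;/code&gt; section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;
&amp;lt;AppInstaller xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
              Uri="http://mywebservice.azurewebsites.net/appset.appinstaller"
              Version="1.0.0.0"&amp;gt;
  &amp;lt;MainPackage Name="MyCompany.MyApp"
               Publisher="CN=MyCompany"
               Version="1.0.0.0"
               Uri="http://mywebservice.azurewebsites.net/MyApp.msix" /&amp;gt;
  &amp;lt;UpdateSettings&amp;gt;
    &amp;lt;!-- Check for updates every time the application is launched --&amp;gt;
    &amp;lt;OnLaunch HoursBetweenUpdateChecks="0" /&amp;gt;
  &amp;lt;/UpdateSettings&amp;gt;
&amp;lt;/AppInstaller&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;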

&lt;p&gt;Let's see now the code we can use to download and install the update:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private async Task InstallUpdate()
{
    var pm = new PackageManager();
    var result = await pm.RequestAddPackageByAppInstallerFileAsync(new Uri("http://mywebservice.azurewebsites.net/appset.appinstaller"),
                        AddPackageByAppInstallerOptions.ForceTargetAppShutdown, pm.GetDefaultPackageVolume());

    if (result.ExtendedErrorCode != null)
    {
        txtUpdateStatus.Text = result.ErrorText;
        logger.Error(result.ExtendedErrorCode);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we create a new instance of the &lt;code&gt;PackageManager&lt;/code&gt; class, which belongs to the &lt;code&gt;Windows.Management.Deployment&lt;/code&gt; namespace. Then we call the &lt;code&gt;RequestAddPackageByAppInstallerFileAsync()&lt;/code&gt; method, passing as parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The URL of your App Installer file (yep, this time we need to specify it, unlike when we were checking for updates).&lt;/li&gt;
&lt;li&gt;The behavior we want to achieve when the update is downloaded. There are a few options, but unfortunately the only one which is applicable is &lt;code&gt;ForceTargetAppShutdown&lt;/code&gt;, which means that the application will be closed so that the update can be applied.&lt;/li&gt;
&lt;li&gt;The folder where to install the update. By calling the &lt;code&gt;GetDefaultPackageVolume()&lt;/code&gt; method of the &lt;code&gt;PackageManager&lt;/code&gt; class, we get a reference to the default folder where MSIX packages are deployed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a result, you get back a &lt;code&gt;DeploymentResult&lt;/code&gt; object which, however, doesn't tell you much about the operation status. Remember, in fact, that if the update is successful, the application will be downloaded and reinstalled. In case of issues, however, the object will contain an &lt;code&gt;ExtendedErrorCode&lt;/code&gt; property which, despite the name, contains a full &lt;code&gt;Exception&lt;/code&gt; object with all the details about what went wrong.&lt;/p&gt;

&lt;p&gt;Be aware that the way the update process works can be highly disruptive for the user. After calling the &lt;code&gt;RequestAddPackageByAppInstallerFileAsync()&lt;/code&gt; method, Windows will forcibly close the application, without any warning message, so that the update process can complete. As such, before calling it, make sure to save any data the user might be working with and show a clear message to notify them about what's going to happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tracking the progress of the update operation
&lt;/h3&gt;

&lt;p&gt;To improve the user experience, you might want to at least display the download progress to the user, especially if the update is big. For this purpose, the &lt;code&gt;RequestAddPackageByAppInstallerFileAsync()&lt;/code&gt; method doesn't return a standard &lt;code&gt;IAsyncOperation&lt;/code&gt; object, but an &lt;code&gt;IAsyncOperationWithProgress&lt;/code&gt; one. This means that we can use the following code to track progress:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private async Task InstallUpdate()
{
    var pm = new PackageManager();
    var deploymentTask = pm.RequestAddPackageByAppInstallerFileAsync(new Uri("http://mywebservice.azurewebsites.net/appset.appinstaller"),
                        AddPackageByAppInstallerOptions.ForceTargetAppShutdown, pm.GetDefaultPackageVolume());

    deploymentTask.Progress = (task, progress) =&amp;gt;
    {
        logger.Info($"Progress: {progress.percentage} - Status: {task.Status}");
        Dispatcher.Invoke(() =&amp;gt;
        {
            txtUpdateProgress.Text = $"Progress: {progress.percentage}";
        });

    };

    var result = await deploymentTask;

    if (result.ExtendedErrorCode != null)
    {
        txtUpdateStatus.Text = result.ErrorText;
        logger.Error(result.ExtendedErrorCode);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first key difference is that we have removed the &lt;code&gt;await&lt;/code&gt; keyword before calling the &lt;code&gt;RequestAddPackageByAppInstallerFileAsync()&lt;/code&gt; method. This means that we aren't immediately starting the operation; we are simply storing a reference to the asynchronous operation we want to execute. Then we subscribe to the &lt;code&gt;Progress&lt;/code&gt; handler, which is triggered every time the status of the download changes. We can use the &lt;code&gt;progress&lt;/code&gt; parameter to determine the status of the operation, through the &lt;code&gt;percentage&lt;/code&gt; property. Once we have defined the handler, we can start the operation by awaiting the task.&lt;/p&gt;

&lt;p&gt;There's a catch, however. The API doesn't report progress in real time, but only after a certain amount of time. As such, if the update isn't big enough, you might not see any actual progress being reported: you will see the &lt;code&gt;Progress&lt;/code&gt; handler being triggered only at the beginning and at the end. This is a common scenario when you use MSIX as a packaging technology. Remember, in fact, that MSIX supports differential updates, so even if the updated package is big, Windows will download only the files which changed.&lt;/p&gt;

&lt;p&gt;If you want to provide a better user experience, there's a nice workaround that you can adopt, suggested by one of my customers during an engagement: downloading and launching the updated App Installer file. This way, you'll continue to use the App Installer APIs to check for available updates, but the update process will be managed by Windows with the traditional App Installer UI, like in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NoUiZRoC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/371472i37FDDE8227C9A2C4/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NoUiZRoC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/371472i37FDDE8227C9A2C4/image-size/large%3Fv%3Dv2%26px%3D999" alt="external-appinstaller.png" title="external-appinstaller.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can change the code to support this scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private async void OnInstallUpdate(object sender, RoutedEventArgs e)
{
    HttpClient client = new HttpClient();
    using (var stream = await client.GetStreamAsync("http://mywebservice.azurewebsites.net/appset.appinstaller"))
    {
        using (var fileStream = new FileStream(@"C:\Temp\app.appinstaller", FileMode.CreateNew))
        {
            await stream.CopyToAsync(fileStream);
        }
    }

    try
    {
        var ps = new ProcessStartInfo(@"C:\Temp\app.appinstaller")
        {
            UseShellExecute = true
        };
        Process.Start(ps);
    }
    catch (Exception exc)
    {
        logger.Error(exc);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, using the &lt;code&gt;HttpClient&lt;/code&gt; class, we download the most recent version of the App Installer file from our server, and we store it on the computer. Then, by using the &lt;code&gt;Process.Start()&lt;/code&gt; API in .NET, we launch the file we have just downloaded, which will trigger the App Installer UI to show up and start the update.&lt;/p&gt;

&lt;p&gt;The suggestions I shared with you before still apply, however. The UX will indeed be more polished, but the application will still be terminated once the update process is completed. As such, make sure to save all the data and notify the user about what's going to happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  Updating an application without changing the code
&lt;/h3&gt;

&lt;p&gt;What if you are interested in using the App Installer APIs to have more control over updates, but you don't want to change the code of your main application? This is a common scenario when you still need to distribute your app with a traditional installer technology and you don't want to make code changes that are specific to MSIX deployment. In this case, you can leverage the fact that, inside an MSIX package, you can have multiple executables, which all share the same identity. Using the Windows Application Packaging Project, you can reference two different projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your main application, which won't have any code change.&lt;/li&gt;
&lt;li&gt;An updater application, which will use the APIs we have seen so far.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is how the solution looks in Visual Studio:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4EuEkoKm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/371473iC033B7A8C6F672FF/image-size/large%3Fv%3Dv2%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4EuEkoKm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/371473iC033B7A8C6F672FF/image-size/large%3Fv%3Dv2%26px%3D999" alt="solution-explorer.png" title="solution-explorer.png" width="302" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since both applications are packaged together, the App Installer APIs will work regardless of whether they are called by the updater application or by the main application. Since the updater is a separate process, it's up to you how you want to invoke it. You might have a "Check for updates" option in the app that launches the updater application. Or you might set the updater application as the package entry point and check for updates every time the application starts: if no updates are found, the updater will close itself and launch the main application; otherwise, it will propose to the user to update the whole package.&lt;/p&gt;

&lt;p&gt;The sample I've published &lt;a href="https://github.com/qmatteoq/UpdateAppInstaller/"&gt;on GitHub&lt;/a&gt; follows the second approach.&lt;/p&gt;
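As a rough sketch, the startup logic of the updater described above could look like the following snippet. Keep in mind that this is just an illustration, not the exact code of the published sample: the `ContosoApp.exe` file name is a placeholder, and the `CheckUpdateAvailabilityAsync()` API requires Windows 10, version 2004 or later.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Windows.ApplicationModel;

public static class Updater
{
    public static async Task RunAsync()
    {
        // Both executables share the package identity, so the updater can
        // query the App Installer update metadata of the whole package.
        PackageUpdateAvailabilityResult result =
            await Package.Current.CheckUpdateAvailabilityAsync();

        if (result.Availability == PackageUpdateAvailability.Available ||
            result.Availability == PackageUpdateAvailability.Required)
        {
            // Notify the user, then trigger the update with
            // PackageManager.RequestAddPackageByAppInstallerFileAsync()
        }
        else
        {
            // No updates available: hand over to the main application and exit.
            // "ContosoApp.exe" is a hypothetical name for the main executable.
            Process.Start(new ProcessStartInfo("ContosoApp.exe")
            {
                UseShellExecute = true
            });
            Environment.Exit(0);
        }
    }
}
```

Since the whole package is replaced during an update, the updater itself must not keep files locked while the deployment runs, which is why exiting before the update starts is the simplest design.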

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;In this article, we have seen how App Installer isn't just a technology for easily enabling deployment and updates of Windows apps through a website or a network share, but also a set of APIs that we can use in our applications to get the best of both worlds: the benefits of MSIX and App Installer (like differential updates or the ability to manage dependencies) and the flexibility of having full control over the update process.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Passing installation parameters to a Windows application with MSIX and App Installer</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Mon, 28 Sep 2020 12:14:48 +0000</pubDate>
      <link>https://dev.to/qmatteoq/passing-installation-parameters-to-a-windows-application-with-msix-and-app-installer-2a2p</link>
      <guid>https://dev.to/qmatteoq/passing-installation-parameters-to-a-windows-application-with-msix-and-app-installer-2a2p</guid>
      <description>&lt;p&gt;As a Windows developer, a very common requirement you might be asked to implement is to track information about the installation process. For example, let's say that you have started a special campaign to advertise your application and you want to understand how many people are installing it because they have clicked one of the promotional banners. Or, in an enterprise environment, you may want to know which is the department of the employee who is installing the application, so that you can apply a different configuration. A very common solution for this requirement is to leverage a web installation and to use query string parameters, appended to the installation URI, which must be collected by the Windows application the first time it's launched. For example, your installation URL can be something like &lt;strong&gt;&lt;a href="http://www.foo.com/setup?source=campaign"&gt;http://www.foo.com/setup?source=campaign&lt;/a&gt;&lt;/strong&gt;. Then your application, when it's launched for the first time, must able to retrieve the value of the query string parameter called  &lt;strong&gt;source&lt;/strong&gt;  and use it as it's needed (for example, by sending this information to an analytic platform like &lt;a href="https://visualstudio.microsoft.com/app-center/"&gt;App Center&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;As you might know if you follow this blog, MSIX is the most modern technology to deploy Windows applications and, through a feature called &lt;a href="https://docs.microsoft.com/en-us/windows/msix/app-installer/app-installer-root"&gt;App Installer&lt;/a&gt;, you can support web installations with automatic updates. So, is there a way to pass activation parameters to the application from the installer URL with MSIX and App Installer? The answer is yes!&lt;/p&gt;

&lt;p&gt;Let's see how we can do that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding protocol support
&lt;/h3&gt;

&lt;p&gt;The way MSIX supports this feature is by leveraging &lt;a href="https://docs.microsoft.com/en-us/windows/apps/desktop/modernize/desktop-to-uwp-extensions#protocol"&gt;protocol support&lt;/a&gt;. Your app must register a custom protocol, which will be used to launch the application after it has been installed from your website using App Installer. Then, your application will retrieve all the information about the activation through the startup arguments, like in a regular protocol activation scenario. For example, let's say you register a protocol called &lt;code&gt;contoso-expenses:&lt;/code&gt;. This means that, when someone invokes a URL like &lt;code&gt;contoso-expenses:?source=campaign&lt;/code&gt;, your application will receive as activation arguments the value &lt;code&gt;source=campaign&lt;/code&gt;. This is exactly what App Installer is going to do the first time it launches your MSIX packaged app after the installation has been completed.&lt;/p&gt;

&lt;p&gt;Adding protocol support to an MSIX packaged application is quite easy, thanks to the application manifest. In my scenario, I have a WPF application built with .NET Core, which is packaged as MSIX using the &lt;a href="https://docs.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-packaging-dot-net"&gt;Windows Application Packaging Project&lt;/a&gt;. As such, all I have to do is double-click the &lt;strong&gt;Package.appxmanifest&lt;/strong&gt; file in the Windows Application Packaging Project and move to the &lt;strong&gt;Declarations&lt;/strong&gt; section. In the &lt;strong&gt;Available declarations&lt;/strong&gt; dropdown menu, choose &lt;strong&gt;Protocol&lt;/strong&gt; and fill the &lt;strong&gt;Name&lt;/strong&gt; field with the name of the custom protocol you want to register (in my scenario, it's &lt;code&gt;contoso-expenses&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bFFQuB0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222530i287F362B59A2986B/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bFFQuB0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222530i287F362B59A2986B/image-size/large%3Fv%3D1.0%26px%3D999" alt="protocol.png" title="protocol.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Listening for activation arguments
&lt;/h3&gt;

&lt;p&gt;The next step is to make our application aware of activation arguments. The way you implement this support changes based on the development framework you have chosen. In this sample we're going to see how to do it in a WPF application. Activation arguments can be retrieved in the  &lt;strong&gt;App.xaml.cs&lt;/strong&gt;  file, by overriding the &lt;code&gt;OnStartup()&lt;/code&gt; method, which is invoked every time the application starts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        string path = $"{Environment.GetFolderPath(Environment.SpecialFolder.Desktop)}//AppInstaller.txt";

        if (e.Args.Length &amp;gt; 0)
        {
            System.IO.File.WriteAllText(path, e.Args[0]);
        }
        else
        {
            System.IO.File.WriteAllText(path, "No arguments available");
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;OnStartup()&lt;/code&gt; method gets, as input parameter, an object of type &lt;code&gt;StartupEventArgs&lt;/code&gt;, which includes any activation parameters that have been passed to the application, stored inside an array called &lt;code&gt;Args&lt;/code&gt;. In case of protocol activation, the array will contain a single item: the full URL that was used to activate the application. The previous sample code simply writes this information, for logging purposes, in a text file stored on the desktop of the current user.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test the custom protocol implementation
&lt;/h3&gt;

&lt;p&gt;Before putting it all together for App Installer, we have a way to quickly test whether our implementation works as expected. Since the App Installer implementation of this feature is based on the standard approach for managing custom protocols, we can test this scenario right away, without needing to package everything together and upload it to our website. We just need to invoke our custom protocol from any Windows shell.&lt;/p&gt;

&lt;p&gt;As a first step, right-click on the Windows Application Packaging Project and choose &lt;strong&gt;Deploy&lt;/strong&gt;, in order to install the application on the system and register the custom protocol. Now open the Run panel in Windows 10 (or just press Win+R on your keyboard) and type a URI which uses your custom protocol. For example, in my scenario I can use something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;contoso-expenses:?source=campaign

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have implemented everything correctly, your application will start and the &lt;code&gt;OnStartup()&lt;/code&gt; method will have been triggered as well. If I check my desktop, I will find a file called &lt;strong&gt;AppInstaller.txt&lt;/strong&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;contoso-expenses:?source=campaign

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, everything is working as expected. The application has launched and, in the activation arguments, I've been able to get the full URL that was used to invoke the custom protocol.&lt;/p&gt;

&lt;p&gt;If you encounter issues during the &lt;code&gt;OnStartup()&lt;/code&gt; method, the Windows Application Packaging Project gives you an option to easily debug this scenario. Right-click on it, move to the &lt;strong&gt;Debug&lt;/strong&gt; section and enable the option &lt;strong&gt;Do not launch, but debug my code when it starts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I1k_yIV7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222531i8B2C02960DFCFD35/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I1k_yIV7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222531i8B2C02960DFCFD35/image-size/large%3Fv%3D1.0%26px%3D999" alt="Debug.png" title="Debug.png" width="669" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can press F5 to launch the debugger, but the application won't actually be launched. The debugger will be on hold, waiting for the application to be activated. Thanks to this feature, you can easily test different activation paths. In our scenario, you can just add a breakpoint inside the &lt;code&gt;OnStartup()&lt;/code&gt; method and then, from the Run panel, invoke the custom URL. The application will start and the debugger will wake up, triggering the breakpoint and giving you the option to debug your code.&lt;/p&gt;

&lt;p&gt;Now that we have a working implementation in our application, let's see how we can configure App Installer to leverage it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up App Installer
&lt;/h3&gt;

&lt;p&gt;When it comes to the App Installer configuration, you don't have to do anything special to support this scenario. The App Installer file you're using today to deploy your MSIX application works fine. However, you will have to customize the URL that is used to trigger the installation. If you're generating the App Installer file as part of the publishing process in Visual Studio, you will end up with a web page similar to the following one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G7G4wz9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222532i19F2759A60D168DE/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G7G4wz9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222532i19F2759A60D168DE/image-size/large%3Fv%3D1.0%26px%3D999" alt="ContosoExpenses.png" title="ContosoExpenses.png" width="703" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The  &lt;strong&gt;Get the app&lt;/strong&gt;  button will trigger the installation of the MSIX package by leveraging the special &lt;code&gt;ms-appinstaller&lt;/code&gt; protocol and it will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ms-appinstaller:?source=https://contosoexpensescd.z19.web.core.windows.net/ContosoExpenses.Package.appinstaller

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trick here is to add a special parameter to this URI, called &lt;code&gt;activationUri&lt;/code&gt;. As such, open the web page with your favorite text editor and change the URL to look like this (or copy it directly in your web browser, if you just want to do a test):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ms-appinstaller:?source=https://contosoexpensescd.z19.web.core.windows.net/ContosoExpenses.Package.appinstaller&amp;amp;activationUri=contoso-expenses:?source=campaign

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, at the end I have added an &lt;code&gt;activationUri&lt;/code&gt; parameter whose value is the exact same custom URL I tested before, based on the &lt;code&gt;contoso-expenses&lt;/code&gt; protocol registered in the application. When you do this, Windows will automatically launch this URL at the end of the MSIX deployment process, instead of simply launching your application.&lt;/p&gt;
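If you generate this link from code rather than editing the page by hand, it's safer to percent-encode the `activationUri` value, since it's a URI embedded inside another URI's query string. A minimal helper (the class and method names here are just an example, not part of any App Installer API) could look like this:

```csharp
using System;

public static class InstallLinkBuilder
{
    // Builds an ms-appinstaller link that carries a custom activation URI.
    // Escaping the activationUri value avoids any ambiguity when it contains
    // its own query string, as in contoso-expenses:?source=campaign.
    public static string Build(string appInstallerUrl, string activationUri)
    {
        return $"ms-appinstaller:?source={appInstallerUrl}" +
               $"&activationUri={Uri.EscapeDataString(activationUri)}";
    }
}
```

Passing the App Installer URL and `contoso-expenses:?source=campaign` produces an equivalent link to the one above, with the activation URI percent-encoded.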

&lt;p&gt;The installation process won't look different. When the user clicks on the &lt;strong&gt;Get the app&lt;/strong&gt; button with the new custom URL, they will continue to see the traditional MSIX deployment experience:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xnOcHdhS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222533iB0B09CC6F6112D35/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xnOcHdhS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://techcommunity.microsoft.com/t5/image/serverpage/image-id/222533iB0B09CC6F6112D35/image-size/large%3Fv%3D1.0%26px%3D999" alt="InstallApp.png" title="InstallApp.png" width="650" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only difference is that, as you may notice, the &lt;strong&gt;Launch when ready&lt;/strong&gt; option is hidden. The application will always be launched at the end of the process with the custom URL we have specified; otherwise, we wouldn't be able to get the activation parameters. Once the deployment is complete, Windows will silently invoke the custom URL we have passed to the &lt;code&gt;activationUri&lt;/code&gt; parameter, which in our case is &lt;code&gt;contoso-expenses:?source=campaign&lt;/code&gt;. As a consequence, the experience will be the same as the one we saw when we locally invoked the custom protocol: the application will be launched and, on the desktop, we'll find a file called &lt;strong&gt;AppInstaller.txt&lt;/strong&gt; with the full URL that was used to trigger the execution.&lt;/p&gt;

&lt;p&gt;Unfortunately, Visual Studio doesn't have a way to customize the generated web page to add this custom App Installer URI. However, in a real scenario, you probably won't use that page: you will embed the App Installer link in a button or banner of your existing website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Working with the activation parameters
&lt;/h3&gt;

&lt;p&gt;An easier way to work with the activation parameters is to leverage the &lt;code&gt;HttpUtility&lt;/code&gt; class, which is included in the &lt;code&gt;System.Web&lt;/code&gt; namespace. Thanks to this class, you can manipulate the query string parameters of the Uri more easily:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protected override void OnStartup(StartupEventArgs e)
{
    string path = $"{Environment.GetFolderPath(Environment.SpecialFolder.Desktop)}//AppInstaller.txt";

    if (e.Args.Length &amp;gt; 0)
    {
        UriBuilder builder = new UriBuilder(e.Args[0]);
        var result = HttpUtility.ParseQueryString(builder.Query);
        var value = result["source"];

        System.IO.File.WriteAllText(path, $"Source: {value}");
    }
    else
    {
        System.IO.File.WriteAllText(path, "No arguments available");
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a first step, we use the &lt;code&gt;UriBuilder&lt;/code&gt; class to create a Uri object out of the activation parameter stored in the first position of the &lt;code&gt;e.Args&lt;/code&gt; array. This way, we can easily access just the query string (without the whole &lt;code&gt;contoso-expenses&lt;/code&gt; protocol) through the &lt;code&gt;Query&lt;/code&gt; property. By passing the query string to the &lt;code&gt;ParseQueryString()&lt;/code&gt; method, we get in return a dictionary, which makes it easy to access all the various parameters. The previous sample shows how easily we can retrieve the value of the &lt;code&gt;source&lt;/code&gt; parameter from the query string. Thanks to this approach, we can avoid using string manipulation to retrieve the same information, which is more error prone.&lt;/p&gt;

&lt;h3&gt;
  
  
  What about a Universal Windows Platform application?
&lt;/h3&gt;

&lt;p&gt;What if, instead of a .NET application like in this example, I have a UWP app? From a custom protocol registration perspective, the approach is exactly the same. The deployment technology for UWP is MSIX, so also in this case we have a manifest file where we can register our protocol. From a code perspective, instead, UWP offers a dedicated method for custom activation paths, called &lt;code&gt;OnActivated()&lt;/code&gt;, which must be overridden in the &lt;strong&gt;App.xaml.cs&lt;/strong&gt; file. This is an example implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protected override void OnActivated(IActivatedEventArgs args)
{
    if (args is ProtocolActivatedEventArgs eventArgs)
    {
        UriBuilder builder = new UriBuilder(eventArgs.Uri);
        var result = HttpUtility.ParseQueryString(builder.Query);
        var value = result["source"];
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the code is very similar to the WPF implementation. The only difference is that the &lt;code&gt;OnActivated()&lt;/code&gt; method can be triggered by multiple activation paths and, as such, first we need to make sure that this is indeed a protocol activation, by checking the real type of the &lt;code&gt;IActivatedEventArgs&lt;/code&gt; parameter. In case of protocol activation, it will be &lt;code&gt;ProtocolActivatedEventArgs&lt;/code&gt;, which gives us access to the &lt;code&gt;Uri&lt;/code&gt; property we need to retrieve the activation URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;This implementation is based on the standard custom protocol support offered by Windows, so it's very easy to implement. Every development platform supports a way to get activation parameters when the application starts, so you just need to write some code to handle the desired scenario.&lt;/p&gt;

&lt;p&gt;You can find the full sample used in this blog post &lt;a href="https://github.com/microsoft/Windows-AppConsult-Samples-DesktopBridge/tree/main/Blog-AppInstallerWithActivation"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;App Installer: &lt;a href="https://docs.microsoft.com/en-us/windows/msix/app-installer/app-installer-root"&gt;https://docs.microsoft.com/en-us/windows/msix/app-installer/app-installer-root&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Protocol support in MSIX: &lt;a href="https://docs.microsoft.com/en-us/windows/apps/desktop/modernize/desktop-to-uwp-extensions#protocol"&gt;https://docs.microsoft.com/en-us/windows/apps/desktop/modernize/desktop-to-uwp-extensions#protocol&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Windows Application Packaging Project: &lt;a href="https://docs.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-packaging-dot-net"&gt;https://docs.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-packaging-dot-net&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;HttpUtility class: &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/system.web.httputility?view=netcore-3.1"&gt;https://docs.microsoft.com/en-us/dotnet/api/system.web.httputility?view=netcore-3.1&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Enable CI/CD for Windows apps with GitHub Actions and Azure Static Web Apps</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Wed, 10 Jun 2020 14:59:06 +0000</pubDate>
      <link>https://dev.to/qmatteoq/enable-cicd-for-windows-apps-with-github-actions-and-azure-static-web-apps-2c38</link>
      <guid>https://dev.to/qmatteoq/enable-cicd-for-windows-apps-with-github-actions-and-azure-static-web-apps-2c38</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/static-web-apps/overview"&gt;Azure Static Web Apps&lt;/a&gt; is a new Azure service launched in Preview at Build 2020, which offers an app service dedicated to static websites. You could say "why is it so special"? In the end, hosting a static website is relatively simple, since there isn't any server-side component: no databases, no ASP.NET Core or PHP or Node runtime, just pure HTML, CSS and JavaScript. Azure Static Web Apps is more than just hosting for static content: thanks to its tight connection to GitHub, it supports creating automated pipelines (through GitHub Actions) which are able to automatically build and deploy full stack web apps.&lt;/p&gt;

&lt;p&gt;This service is a great companion for frameworks like &lt;a href="https://docs.microsoft.com/en-us/azure/static-web-apps/publish-gatsby"&gt;Gatsby&lt;/a&gt; or &lt;a href="https://docs.microsoft.com/en-us/azure/static-web-apps/publish-hugo"&gt;Hugo&lt;/a&gt;, which are able to generate static pages as part of the build process. Let's say that you're using Hugo to host your personal blog. Thanks to Azure Static Web Apps, you can simply commit a markdown file with your latest post to your GitHub repository to trigger the execution of a GitHub Action, which will build it as a static page.&lt;/p&gt;

&lt;p&gt;And if you need to add a server-side component (for example, hosting a REST API), the Azure Static Web App service supports the deployment of serverless APIs based on Azure Functions.&lt;/p&gt;

&lt;p&gt;However, as you know, in my everyday job I'm focused on the development and deployment of Windows desktop applications. And, as you probably know if you follow this blog or &lt;a href="https://www.syncfusion.com/ebooks/msix-succinctly"&gt;if you have read my book about MSIX&lt;/a&gt;, I'm a big fan of MSIX and the App Installer technology, which make it easy to build a solid CI/CD story for Windows applications. Thanks to them, in fact, we can enable features like automatic updates, critical updates, differential updates, etc. And guess what? All you need is a static website that can host your MSIX package, plus the AppInstaller file.&lt;/p&gt;

&lt;p&gt;I guess now you know where I'm headed =) Let's see how we can use an Azure Static Web App to host our MSIX package, which will be automatically built and deployed by a GitHub Action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set up the Azure Static Web App
&lt;/h3&gt;

&lt;p&gt;As a first step, let's create our Azure Static Web App. Log in to the &lt;a href="https://portal.azure.com"&gt;Azure portal&lt;/a&gt; with your account and click on &lt;strong&gt;Create a resource&lt;/strong&gt;. Start a search using the &lt;strong&gt;static&lt;/strong&gt; keyword. One of the results will be &lt;strong&gt;Static Web App (Preview)&lt;/strong&gt;, as in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x1fr_R4---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197910i54A5339C2D163889/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x1fr_R4---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197910i54A5339C2D163889/image-size/large%3Fv%3D1.0%26px%3D999" alt="NewStaticWebApp.png" title="NewStaticWebApp.png" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on it and choose  &lt;strong&gt;Create&lt;/strong&gt;  to start the process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MmXuY7UW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197911i38747EFF82490AD6/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MmXuY7UW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197911i38747EFF82490AD6/image-size/large%3Fv%3D1.0%26px%3D999" alt="CreateNewApp.png" title="CreateNewApp.png" width="793" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first pieces of information you have to provide are the subscription, the resource group, the name of the app and the region. The only available SKU, for the moment, is the free one, since the service is in preview. Currently this service works only with GitHub, since it doesn't just use it to connect to the repository: it also creates, for us, the GitHub Action needed to compile and deploy the project. As such, the next required step is to click on &lt;strong&gt;Sign in with GitHub&lt;/strong&gt; to complete the login process with your GitHub account. Once you are logged in, you will have the opportunity to choose an organization, a repository and a branch from the ones you have on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D1zoyO86--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197912i671512D98ADD3397/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D1zoyO86--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197912i671512D98ADD3397/image-size/large%3Fv%3D1.0%26px%3D999" alt="GitHubSetup.png" title="GitHubSetup.png" width="772" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our scenario we're going to choose a repository which contains a .NET Desktop application packaged with the Windows Application Packaging Project. The one I'm using for this blog post is available at &lt;a href="https://github.com/qmatteoq/ContosoApp"&gt;https://github.com/qmatteoq/ContosoApp&lt;/a&gt;. It's just a plain WPF application based on .NET Core 3.1.&lt;/p&gt;

&lt;p&gt;Once you have completed the configuration, press  &lt;strong&gt;Review + create&lt;/strong&gt; , followed by  &lt;strong&gt;Create&lt;/strong&gt;  in the review page. Once the deployment has been completed, you will notice a few changes in your GitHub repository:&lt;/p&gt;


&lt;p&gt;There will be a new folder, called &lt;code&gt;.github/workflows&lt;/code&gt;, with a YAML file inside. That's our GitHub workflow. We can see it also by clicking on the &lt;strong&gt;Actions&lt;/strong&gt; tab in the repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gIWijoOl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197913i0E52C884CD0E33C9/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gIWijoOl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197913i0E52C884CD0E33C9/image-size/large%3Fv%3D1.0%26px%3D999" alt="GitHubAction.png" title="GitHubAction.png" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow is called  &lt;strong&gt;Azure Static Web Apps CI/CD&lt;/strong&gt;  and it will be executed immediately. However, it will fail, since our repository contains a desktop application and the default workflow isn't tailored for this scenario. Don't worry, we're going to fix that!&lt;/p&gt;


&lt;p&gt;If you go to the &lt;strong&gt;Settings&lt;/strong&gt; section of the repository and move to the &lt;strong&gt;Secrets&lt;/strong&gt; tab, you will find that Azure Static Web Apps has added a new secret for you:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hzm-WJUe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197914i584E2D6A7214E34B/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hzm-WJUe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197914i584E2D6A7214E34B/image-size/large%3Fv%3D1.0%26px%3D999" alt="Secret.png" title="Secret.png" width="759" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This secret contains the API token which is required by the GitHub Actions workflow to connect to the Azure Static Web App instance we have created and perform the deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customize the workflow
&lt;/h3&gt;

&lt;p&gt;Let's take a look at the workflow that the Azure Static Web App has created for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - master

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' &amp;amp;&amp;amp; github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_CALM_DESERT_05F99C503 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match you app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/" # App source code path
          api_location: "api" # Api source code path - optional
          app_artifact_location: "" # Built app content directory - optional
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:
    if: github.event_name == 'pull_request' &amp;amp;&amp;amp; github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_CALM_DESERT_05F99C503 }}
          action: "close"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow contains two distinct jobs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;build_and_deploy_job&lt;/strong&gt;  is triggered whenever you submit a new Pull Request to the repository. The goal of this job is to build the updated website included in the Pull Request and to deploy it in a staging environment, so that you can test that everything is working as expected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;close_pull_request_job&lt;/strong&gt;  is triggered whenever you close a Pull Request. This means that the you have validated that the Pull Request is good and, as such, the staging environment can be deleted because the new version of the website can be safely pushed to production.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can imagine, however, this isn't really applicable to our scenario, since we aren't deploying a website, but a desktop application. As such, we're going to delete basically everything from this workflow file, except for the &lt;strong&gt;Build and deploy&lt;/strong&gt; task, which is based on the action called &lt;code&gt;Azure/static-web-apps-deploy@v0.0.1-preview&lt;/code&gt;. We're going to use this one to publish the output of the Visual Studio compilation: the MSIX package, the AppInstaller file and the web page to trigger the installation.&lt;/p&gt;

&lt;p&gt;To customize the workflow, we're going to take inspiration &lt;a href="https://devblogs.microsoft.com/dotnet/continuous-integration-workflow-template-for-net-core-desktop-apps-with-github-actions"&gt;from the template created by the .NET team&lt;/a&gt;, which is a great starting point when you want to build desktop .NET Core applications on GitHub. This template perfectly fits my scenario: I have a WPF application based on .NET Core and a Windows Application Packaging Project to generate the MSIX package.&lt;/p&gt;

&lt;p&gt;Let's start by creating a copy of the existing workflow file. We're going to need it later. Now open on GitHub the workflow file (in my case, it's &lt;a href="https://github.com/qmatteoq/ContosoApp/blob/master/.github/workflows/azure-static-web-apps-calm-desert-05f99c503.yml"&gt;https://github.com/qmatteoq/ContosoApp/blob/master/.github/workflows/azure-static-web-apps-calm-desert-05f99c503.yml&lt;/a&gt;) and click on the edit icon (the small pencil in the toolbar):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EE-kkLFW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197915i2DA6FA4BC78F726E/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EE-kkLFW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197915i2DA6FA4BC78F726E/image-size/large%3Fv%3D1.0%26px%3D999" alt="EditFile.png" title="EditFile.png" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you do this, GitHub will propose a special edit interface tailored for workflows. Thanks to a panel on the right, you'll be able to easily browse the Actions marketplace and integrate tasks in the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nF-wD-mu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197916i0D83E64DB78A3BB0/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nF-wD-mu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197916i0D83E64DB78A3BB0/image-size/large%3Fv%3D1.0%26px%3D999" alt="EditGitHubAction.png" title="EditGitHubAction.png" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the moment, let's delete everything (except for the first line, which defines the &lt;code&gt;name&lt;/code&gt;) and replace the content with the one &lt;a href="https://github.com/actions/starter-workflows/blob/master/ci/dotnet-core-desktop.yml"&gt;from the .NET Desktop workflow&lt;/a&gt;. This is how your workflow should look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Azure Static Web Apps CI/CD

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:

  build:

    strategy:
      matrix:
        configuration: [Debug, Release]

    runs-on: windows-latest # For a list of available runner types, refer to 
                             # https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idruns-on

    env:
      Solution_Name: your-solution-name # Replace with your solution name, i.e. MyWpfApp.sln.
      Test_Project_Path: your-test-project-path # Replace with the path to your test project, i.e. MyWpfApp.Tests\MyWpfApp.Tests.csproj.
      Wap_Project_Directory: your-wap-project-directory-name # Replace with the Wap project directory relative to the solution, i.e. MyWpfApp.Package.
      Wap_Project_Path: your-wap-project-path # Replace with the path to your Wap project, i.e. MyWpf.App.Package\MyWpfApp.Package.wapproj.

    steps:
    - name: Checkout
      uses: actions/checkout@v2
      with:
        fetch-depth: 0

    # Install the .NET Core workload
    - name: Install .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.101

    # Add MSBuild to the PATH: https://github.com/microsoft/setup-msbuild
    - name: Setup MSBuild.exe
      uses: microsoft/setup-msbuild@2008f912f56e61277eefaac6d1888b750582aa16

    # Execute all unit tests in the solution
    - name: Execute unit tests
      run: dotnet test

    # Restore the application to populate the obj folder with RuntimeIdentifiers
    - name: Restore the application
      run: msbuild $env:Solution_Name /t:Restore /p:Configuration=$env:Configuration
      env:
        Configuration: ${{ matrix.configuration }}

    # Decode the base 64 encoded pfx and save the Signing_Certificate
    - name: Decode the pfx
      run: |
        $pfx_cert_byte = [System.Convert]::FromBase64String("${{ secrets.Base64_Encoded_Pfx }}")
        $certificatePath = Join-Path -Path $env:Wap_Project_Directory -ChildPath GitHubActionsWorkflow.pfx
        [IO.File]::WriteAllBytes("$certificatePath", $pfx_cert_byte)

    # Create the app package by building and packaging the Windows Application Packaging project
    - name: Create the app package
      run: msbuild $env:Wap_Project_Path /p:Configuration=$env:Configuration /p:UapAppxPackageBuildMode=$env:Appx_Package_Build_Mode /p:AppxBundle=$env:Appx_Bundle /p:PackageCertificateKeyFile=GitHubActionsWorkflow.pfx /p:PackageCertificatePassword=${{ secrets.Pfx_Key }}
      env:
        Appx_Bundle: Always
        Appx_Bundle_Platforms: x86|x64
        Appx_Package_Build_Mode: StoreUpload
        Configuration: ${{ matrix.configuration }}

    # Remove the pfx
    - name: Remove the pfx
      run: Remove-Item -path $env:Wap_Project_Directory\GitHubActionsWorkflow.pfx

    # Upload the MSIX package: https://github.com/marketplace/actions/upload-artifact
    - name: Upload build artifacts
      uses: actions/upload-artifact@v1
      with:
        name: MSIX Package
        path: ${{ env.Wap_Project_Directory }}\AppPackages

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go and customize a few things. First, you'll need to customize the environment variables defined at the top to point to your solution and projects. You'll have to set up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Solution_Name&lt;/code&gt; with the relative path of the solution file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Wap_Project_Directory&lt;/code&gt; with the relative path of the folder which contains the Windows Application Packaging Project&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Wap_Project_Path&lt;/code&gt; with the full path of the Windows Application Packaging Project file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Optionally, you can also use the &lt;code&gt;Test_Project_Path&lt;/code&gt; variable to configure the path of the project which contains your unit tests. In my case, since it's a sample project, I don't have one, so I simply removed that variable. This is how my environment configuration looks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env:
  Solution_Name: ContosoApp.sln                
  Wap_Project_Directory: ContosoApp.Package    
  Wap_Project_Path: ContosoApp.Package\ContosoApp.Package.wapproj

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, for our scenario, we don't really need to build the application both in Debug and Release mode. We just want to have, as output, a single release MSIX package, so that we can deploy it on our website. As such, remove the &lt;code&gt;Debug&lt;/code&gt; entry from the configuration matrix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;strategy:
  matrix:
    configuration: [Release]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As next step, I'm going to customize some of the available tasks:&lt;/p&gt;


&lt;p&gt;&lt;code&gt;Execute unit tests&lt;/code&gt; isn't needed for my sample application, since I don't have unit tests. As such, I'm going to delete this one.&lt;/p&gt;


&lt;p&gt;&lt;code&gt;Create the app package&lt;/code&gt;. Since we're going to use the AppInstaller technology to distribute the package using our Azure Static Web App, we need to tweak the various parameters a bit.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first one is the &lt;code&gt;Appx_Package_Build_Mode&lt;/code&gt; parameter, which by default is set to &lt;code&gt;StoreUpload&lt;/code&gt;, the mode needed when you want to publish the application on the Microsoft Store. In this case we're using manual distribution, so we need to change it to &lt;code&gt;SideloadOnly&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We need to add the &lt;code&gt;/p:GenerateAppInstallerFile&lt;/code&gt; parameter and set it to &lt;code&gt;true&lt;/code&gt;, in order to generate the .appinstaller file and the web page as part of the process.&lt;/li&gt;
&lt;li&gt;We need to add the &lt;code&gt;/p:AppInstallerUri&lt;/code&gt; parameter and set it to the URL that has been assigned to our Azure Static Web App. You can find this URL in the Azure portal:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8ovHGTuH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197917i01E0CD8BEE56938A/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8ovHGTuH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197917i01E0CD8BEE56938A/image-size/large%3Fv%3D1.0%26px%3D999" alt="AzureAppUrl.png" title="AzureAppUrl.png" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how the task looks after the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the app package by building and packaging the Windows Application Packaging project
- name: Create the app package
  run: msbuild $env:Wap_Project_Path /p:Configuration=$env:Configuration /p:UapAppxPackageBuildMode=$env:Appx_Package_Build_Mode /p:AppxBundle=$env:Appx_Bundle /p:GenerateAppInstallerFile=$env:Generate_AppInstaller /p:AppInstallerUri=$env:AppInstaller_Url /p:PackageCertificateKeyFile=GitHubActionsWorkflow.pfx /p:PackageCertificatePassword=${{ secrets.Pfx_Key }}
  env:
    Appx_Bundle: Always
    Appx_Bundle_Platforms: x86|x64
    Appx_Package_Build_Mode: SideloadOnly
    Configuration: ${{ matrix.configuration }}
    Generate_AppInstaller: true
    AppInstaller_Url: https://calm-desert-05f99c503.azurestaticapps.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up the signing
&lt;/h3&gt;

&lt;p&gt;As I have discussed multiple times in this blog, signing is a critical aspect of MSIX packaging. If you don't sign an MSIX package with a trusted certificate, the user will never be able to install it. At the same time, you must be very careful about how you enable signing as part of a CI/CD pipeline. If you expose your private certificate, someone might be able to steal it and use your identity to sign malicious applications. There are multiple approaches to sign MSIX packages in the right way as part of a CI/CD pipeline. The one used by the workflow template is to ask the developer to encode the PFX into a base64 string and store it as a secret. The workflow then includes a PowerShell script which takes care of decoding the secret back into a PFX, so that it can be used as part of the Visual Studio build to do the signing.&lt;/p&gt;

&lt;p&gt;The main reason for using this approach on GitHub is that, unlike Azure DevOps, the platform doesn't have the concept of &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/secure-files?view=azure-devops"&gt;Secure Files&lt;/a&gt;, which is often used in scenarios like this one.&lt;/p&gt;

&lt;p&gt;The GitHub Action workflow already contains everything you need to sign the package. The only missing step is to create the secrets to store the various required information.&lt;/p&gt;


&lt;p&gt;Open a PowerShell terminal in the folder in which you keep your PFX certificate.&lt;/p&gt;


&lt;p&gt;Run the following script (note: the &lt;code&gt;-Encoding Byte&lt;/code&gt; parameter is available only in Windows PowerShell; if you're using PowerShell 6 or later, use &lt;code&gt;Get-Content '.\SigningCertificate.pfx' -AsByteStream&lt;/code&gt; instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$pfx_cert = Get-Content '.\SigningCertificate.pfx' -Encoding Byte
[System.Convert]::ToBase64String($pfx_cert) | Out-File 'SigningCertificate_Encoded.txt'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;At the end of the process you will find, in the same folder, a text file called &lt;strong&gt;SigningCertificate_Encoded.txt&lt;/strong&gt;, which contains the certificate encoded as a base64 string. Open the text file and copy the whole content.&lt;/p&gt;


&lt;p&gt;Now go to your repository on GitHub, choose  &lt;strong&gt;Settings → Secrets&lt;/strong&gt;  and create a new secret.&lt;/p&gt;


&lt;p&gt;Call the secret &lt;code&gt;Base64_Encoded_Pfx&lt;/code&gt; and, as value, paste the encoded base64 string you have just copied.&lt;/p&gt;


&lt;p&gt;Now create a new secret and call it &lt;code&gt;Pfx_Key&lt;/code&gt;. As value, type the password for the signing certificate.&lt;/p&gt;

&lt;p&gt;That's it. A safer alternative to this approach is to leverage Azure Key Vault to store your certificate and the Azure SignTool utility to do the actual signing as part of the pipeline. You can find all the details about this approach, including how to use it in a GitHub Actions workflow, &lt;a href="https://techcommunity.microsoft.com/t5/windows-dev-appconsult/signing-a-msix-package-with-azure-key-vault/ba-p/1436154"&gt;in a recent blog post I wrote&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reviewing the pipeline
&lt;/h3&gt;

&lt;p&gt;We have finished the work on the build pipeline. This is how it should look after the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Azure Static Web Apps CI/CD

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:

  build:

    strategy:
      matrix:
        configuration: [Release]

    runs-on: windows-latest # For a list of available runner types, refer to 
                             # https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idruns-on

    env:
      Solution_Name: ContosoApp.sln                
      Wap_Project_Directory: ContosoApp.Package    
      Wap_Project_Path: ContosoApp.Package\ContosoApp.Package.wapproj                  

    steps:
    - name: Checkout
      uses: actions/checkout@v2
      with:
        fetch-depth: 0

    # Add MSBuild to the PATH: https://github.com/microsoft/setup-msbuild
    - name: Setup MSBuild.exe
      uses: microsoft/setup-msbuild@2008f912f56e61277eefaac6d1888b750582aa16

    # Restore the application to populate the obj folder with RuntimeIdentifiers
    - name: Restore the application
      run: msbuild $env:Solution_Name /t:Restore /p:Configuration=$env:Configuration
      env:
        Configuration: ${{ matrix.configuration }}

    # Decode the base 64 encoded pfx and save the Signing_Certificate
    - name: Decode the pfx
      run: |
        $pfx_cert_byte = [System.Convert]::FromBase64String("${{ secrets.Base64_Encoded_Pfx }}")
        $certificatePath = Join-Path -Path $env:Wap_Project_Directory -ChildPath GitHubActionsWorkflow.pfx
        [IO.File]::WriteAllBytes("$certificatePath", $pfx_cert_byte)

    # Create the app package by building and packaging the Windows Application Packaging project
    - name: Create the app package
      run: msbuild $env:Wap_Project_Path /p:Configuration=$env:Configuration /p:UapAppxPackageBuildMode=$env:Appx_Package_Build_Mode /p:AppxBundle=$env:Appx_Bundle /p:GenerateAppInstallerFile=$env:Generate_AppInstaller /p:AppInstallerUri=$env:AppInstaller_Url /p:PackageCertificateKeyFile=GitHubActionsWorkflow.pfx /p:PackageCertificatePassword=${{ secrets.Pfx_Key }}
      env:
        Appx_Bundle: Always
        Appx_Bundle_Platforms: x86|x64
        Appx_Package_Build_Mode: SideloadOnly
        Configuration: ${{ matrix.configuration }}
        Generate_AppInstaller: true
        AppInstaller_Url: https://calm-desert-05f99c503.azurestaticapps.net

    # Remove the pfx
    - name: Remove the pfx
      run: Remove-Item -path $env:Wap_Project_Directory\GitHubActionsWorkflow.pfx

    # Upload the MSIX package: https://github.com/marketplace/actions/upload-artifact
    - name: Upload build artifacts
      uses: actions/upload-artifact@v1
      with:
        name: MSIX Package
        path: ${{ env.Wap_Project_Directory }}\AppPackages

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is enough to get a ready-to-deploy MSIX package. If you commit any change to the repository which hosts your desktop app, the process should complete without errors and, in the end, you should have an artifact folder which contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your signed MSIX package&lt;/li&gt;
&lt;li&gt;The file with .appinstaller extension required to install the package from a website or network share&lt;/li&gt;
&lt;li&gt;A web page (index.html) which contains the link to trigger the installation of the MSIX package&lt;/li&gt;
&lt;/ul&gt;
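
&lt;p&gt;As a reference, this is roughly how the artifact content is laid out. Keep in mind that this is just a sketch based on the ContosoApp sample: the exact file and folder names depend on your package identity, version and target platforms, so treat the names below as illustrative.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AppPackages
├── ContosoApp.Package.appinstaller
├── index.html
└── ContosoApp.Package_1.0.0.0_Test
    └── ContosoApp.Package_1.0.0.0_x86_x64.msixbundle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;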

&lt;h3&gt;
  
  
  Deploy on Azure Static Web App
&lt;/h3&gt;

&lt;p&gt;Now it's time to go back to the original workflow file that was created together with the Azure Static Web App, which we copied before starting to make changes. We need, in fact, to leverage the task that takes care of deploying to the Web App the artifact we have just created. To do this, we're going to create a different job. Its purpose will be to download the artifact we have generated and to deploy it.&lt;/p&gt;

&lt;p&gt;Why use a separate job? The first reason is more "philosophical". The deployment is a different step compared to the build process. Azure DevOps does a great job of helping to keep the two phases separated, by providing release pipelines to handle the release management story. GitHub doesn't support this approach but, still, a multi-stage pipeline can help us better split the two phases. The second reason is more practical. The action which performs the deployment to Azure Static Web Apps (the &lt;code&gt;Azure/static-web-apps-deploy@v0.0.1-preview&lt;/code&gt; one) works only on Linux, while the build process we have executed so far must run on Windows. By using multiple jobs, we can use different environments: the build job will continue to run on a Windows-hosted agent, while the deployment job will run on a Linux machine.&lt;/p&gt;

&lt;p&gt;Let's go and add the second job after the first one we have previously configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
  needs: [build]
  runs-on: ubuntu-latest
  name: Deploy Job
  steps:
    - name: Download Package artifact
      uses: actions/download-artifact@master
      with:
        name: MSIX Package
        path: MSIX 

    - name: Build And Deploy
      id: builddeploy
      uses: Azure/static-web-apps-deploy@v0.0.1-preview
      with:
        azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_CALM_DESERT_05F99C503 }}
        repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
        action: "upload"
        ###### Repository/Build Configurations - These values can be configured to match you app requirements. ######
        # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
        app_location: "MSIX" # App source code path
        ###### End of Repository/Build Configurations ######

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This job contains only two tasks:&lt;/p&gt;


&lt;p&gt;The first one uses the &lt;code&gt;actions/download-artifact&lt;/code&gt; action to download to the machine the artifact we created in the build job. The &lt;code&gt;name&lt;/code&gt; property must match the value of the &lt;code&gt;name&lt;/code&gt; property we set in the &lt;code&gt;actions/upload-artifact&lt;/code&gt; task of the build job.&lt;/p&gt;


&lt;p&gt;The second one is the task we previously copied from the original workflow file, which takes care of the deployment to the Azure Static Web App. Compared to the original task, we have to make a couple of changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;app_location&lt;/code&gt; parameter must be set with the name of the folder that contains the artifact we need to deploy. In our case, it's &lt;code&gt;MSIX&lt;/code&gt;, which is the value we have set as &lt;code&gt;path&lt;/code&gt; in the &lt;code&gt;actions/download-artifact&lt;/code&gt; task.&lt;/li&gt;
&lt;li&gt;We can remove the &lt;code&gt;api_location&lt;/code&gt; and &lt;code&gt;app_artifact_location&lt;/code&gt; properties, since we aren't deploying a full static web app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, we have configured the job to run on Linux (&lt;code&gt;runs-on: ubuntu-latest&lt;/code&gt;) and we have used the &lt;code&gt;needs&lt;/code&gt; parameter to specify that this job must run only after the &lt;code&gt;build&lt;/code&gt; one has completed. Without this option, GitHub would try to run both jobs in parallel.&lt;/p&gt;

&lt;p&gt;That's it! Now try to commit any change to the code to trigger the execution of the workflow. If you did everything correctly, once the workflow is completed, you should be able to open your browser at the URL assigned to your Azure Static Web App and see the web page generated by Visual Studio to trigger the installation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--50y6Q9oC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197918i1D1C11D919BDDDE7/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--50y6Q9oC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197918i1D1C11D919BDDDE7/image-size/large%3Fv%3D1.0%26px%3D999" alt="ContosoApp.png" title="ContosoApp.png" width="713" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By clicking on the  &lt;strong&gt;Get the app&lt;/strong&gt;  button, you will trigger the process to start the installation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yj8VF0OC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197919i4B5D953FC21EBED8/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yj8VF0OC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197919i4B5D953FC21EBED8/image-size/large%3Fv%3D1.0%26px%3D999" alt="ContosoAppInstallation.png" title="ContosoAppInstallation.png" width="650" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;In this blog post we have seen how to use the recently announced Azure Static Web Apps service to host our MSIX packages and enable an easy installation process for our desktop apps through a website. By tweaking the project we can also enable automatic updates: we just need to configure the AppInstaller file &lt;a href="https://docs.microsoft.com/en-us/windows/msix/app-installer/update-settings"&gt;to support automatic updates&lt;/a&gt; and use a tool like &lt;a href="https://github.com/dotnet/Nerdbank.GitVersioning"&gt;Nerdbank.GitVersioning&lt;/a&gt; to handle the versioning of the MSIX packages, so that every newly generated package is guaranteed to have a higher version number.&lt;/p&gt;
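
&lt;p&gt;To give you an idea of what enabling automatic updates means in practice, this is a minimal sketch of an .appinstaller file with update checking turned on. The snippet is illustrative: the URLs, names and versions are based on this post's sample, and the file generated by Visual Studio in your project will contain your own values (see the linked documentation for the full schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;
&amp;lt;AppInstaller xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
              Uri="https://calm-desert-05f99c503.azurestaticapps.net/ContosoApp.Package.appinstaller"
              Version="1.0.0.0"&amp;gt;
  &amp;lt;MainBundle Name="ContosoApp.Package"
              Version="1.0.0.0"
              Publisher="CN=Contoso"
              Uri="https://calm-desert-05f99c503.azurestaticapps.net/ContosoApp.Package_1.0.0.0_x86_x64.msixbundle" /&amp;gt;
  &amp;lt;UpdateSettings&amp;gt;
    &amp;lt;!-- Check for a newer version of the package every time the app is launched --&amp;gt;
    &amp;lt;OnLaunch HoursBetweenUpdateChecks="0" /&amp;gt;
  &amp;lt;/UpdateSettings&amp;gt;
&amp;lt;/AppInstaller&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;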

&lt;p&gt;The sample project used in this blog post (including the GitHub workflow) is available &lt;a href="https://github.com/qmatteoq/ContosoApp"&gt;here&lt;/a&gt;. If, instead, you want to see a more advanced approach, you can take a look &lt;a href="https://github.com/qmatteoq/ContosoExpenses-GitHub"&gt;at another project of mine&lt;/a&gt;. This second repository contains a more complete workflow, which includes versioning management and signing using Azure Key Vault.&lt;/p&gt;

&lt;p&gt;A special thanks to &lt;a href="https://withinrafael.com/"&gt;Rafael Rivera&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/mitchell-webster-a01656a0/"&gt;Mitchell Webster&lt;/a&gt; for their help in troubleshooting and fixing &lt;a href="https://github.com/Azure/static-web-apps/issues/26"&gt;a series of issues&lt;/a&gt; that were preventing Azure Static Web Apps from working properly with MSIX deployment.&lt;/p&gt;

&lt;p&gt;Happy deployment!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create a Windows module for React Native with asynchronous code in C#</title>
      <dc:creator>Matteo Pagani</dc:creator>
      <pubDate>Mon, 08 Jun 2020 15:39:51 +0000</pubDate>
      <link>https://dev.to/qmatteoq/create-a-windows-module-for-react-native-with-asynchronous-code-in-c-jhk</link>
      <guid>https://dev.to/qmatteoq/create-a-windows-module-for-react-native-with-asynchronous-code-in-c-jhk</guid>
<description>&lt;p&gt;&lt;a href="https://techcommunity.microsoft.com/t5/windows-dev-appconsult/building-a-react-native-module-for-windows/ba-p/1067893"&gt;We have already explored on this blog&lt;/a&gt; the opportunity to create native Windows modules for React Native. Thanks to these modules, you are able to surface native Windows APIs to JavaScript, so that you can leverage them from a React Native application running on Windows. Native modules are one of the many ways supported by React Native to build truly native applications. Even though you're using web technologies, you are able to get access to native features of the underlying platform, like notifications, storage, geolocation, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techcommunity.microsoft.com/t5/windows-dev-appconsult/building-a-react-native-module-for-windows/ba-p/1067893"&gt;In the previous post&lt;/a&gt; we have learned the basics on how to create a module in C# and how to register it in the main application, so that the C# functions are exposed as JavaScript functions. As such, in this post I won't go again into the details on how to build a module and how to register it in the Windows implementation of the React Native app. However, recently, I came across a blocker while working on a customer's project related to this scenario. My post (like the &lt;a href="https://microsoft.github.io/react-native-windows/docs/native-modules"&gt;official documentation&lt;/a&gt;) was leveraging synchronous methods. In my scenario, instead, I needed to use the Geolocation APIs provided by Windows 10, which are asynchronous and based on the async / await pattern.&lt;/p&gt;

&lt;p&gt;As such, I started building my module using the standard async / await pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace GeolocationModule
{
    [ReactModule]
    class GeolocationModule
    {
        [ReactMethod("getCoordinates")]
        public async Task&amp;lt;string&amp;gt; GetCoordinates()
        {
            Geolocator geolocator = new Geolocator();
            var position = await geolocator.GetGeopositionAsync();

            string result = $"Latitude: {position.Coordinate.Point.Position.Latitude} - Longitude: {position.Coordinate.Point.Position.Longitude}";

            return result;
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the traditional implementation of this pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;GetCoordinates()&lt;/code&gt; method is marked with the &lt;code&gt;async&lt;/code&gt; keyword&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;GetCoordinates()&lt;/code&gt; method returns &lt;code&gt;Task&amp;lt;T&amp;gt;&lt;/code&gt;, where &lt;code&gt;T&lt;/code&gt; is the type of the result we want to return (in our case, it's a &lt;code&gt;string&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;When we call an asynchronous API in the body (the &lt;code&gt;GetGeopositionAsync()&lt;/code&gt; method exposed by the &lt;code&gt;Geolocator&lt;/code&gt; object), we add the &lt;code&gt;await&lt;/code&gt; prefix.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, in the React Native portion, I've created a wrapper for this module using the &lt;code&gt;NativeModules&lt;/code&gt; APIs, exactly like I did in my previous sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const getCoordinates = () =&amp;gt; {
  return new Promise((resolve, reject) =&amp;gt; {
    NativeModules.GeolocationModule.getCoordinates(function(result, error) {
      if (error) {
        reject(error);
      }
      else {
        resolve(result);
      }
    })
  })
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once I had connected all the dots and launched the React Native application, however, I was greeted with an unpleasant surprise. As soon as I hit the button which invoked the &lt;code&gt;getCoordinates()&lt;/code&gt; function, the application crashed.&lt;/p&gt;

&lt;p&gt;Thanks to a chat with &lt;a href="https://www.linkedin.com/in/vmoroz/"&gt;Vladimir Morozov&lt;/a&gt; from the React Native team, it turned out that React Native doesn't support methods which return a &lt;code&gt;Task&amp;lt;T&amp;gt;&lt;/code&gt;: to work properly, they must return &lt;code&gt;void&lt;/code&gt;. How can we achieve this goal and, at the same time, keep calling asynchronous APIs, like the ones exposed by the &lt;code&gt;Geolocator&lt;/code&gt; class? Thanks to Vladimir, who put me on the right track, the solution is easy. Let's explore the options we have.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use promises
&lt;/h3&gt;

&lt;p&gt;When it comes to JavaScript, I'm a big fan of &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise"&gt;promises&lt;/a&gt; since they enable a syntax which is very similar to the C# one. When an asynchronous method returns a Promise, you can simply mark the function with the &lt;code&gt;async&lt;/code&gt; keyword and add the &lt;code&gt;await&lt;/code&gt; prefix before calling the asynchronous function, like in the following sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getData = async () =&amp;gt; {
    var result = await fetch('this-is-a-url');
    //do something with the result
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's start by seeing how we can build our module so that it returns a promise. It's easy, thanks to the &lt;code&gt;IReactPromise&lt;/code&gt; interface included in the React Native implementation for Windows. Let's see some code first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.ReactNative.Managed;
using System;
using Windows.Devices.Geolocation;

namespace GeolocationModule
{
    [ReactModule]
    class GeolocationModule
    {
        [ReactMethod("getCoordinatesWithPromise")]
        public async void GetCoordinatesWithPromise(IReactPromise&amp;lt;string&amp;gt; promise)
        {
            try
            {
                Geolocator geolocator = new Geolocator();
                var position = await geolocator.GetGeopositionAsync();

                string result = $"Latitude: {position.Coordinate.Point.Position.Latitude} - Longitude: {position.Coordinate.Point.Position.Longitude}";

                promise.Resolve(result);
            }
            catch (Exception e)
            {
                promise.Reject(new ReactError { Exception = e });
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a first step, we need to declare the method as &lt;code&gt;async void&lt;/code&gt;, to comply with the React Native requirements. To handle the asynchronous nature of the method, the key component is the parameter of type &lt;code&gt;IReactPromise&amp;lt;T&amp;gt;&lt;/code&gt;, where &lt;code&gt;T&lt;/code&gt; is the type of the value we want to return. In my scenario I want to return a string with the full coordinates, so I'm using the &lt;code&gt;IReactPromise&amp;lt;string&amp;gt;&lt;/code&gt; type.&lt;/p&gt;

&lt;p&gt;Inside the method we can start writing our code like we would do in a traditional Windows application. Since we have marked the method with the &lt;code&gt;async&lt;/code&gt; keyword, we can just call any asynchronous API (like the &lt;code&gt;GetGeopositionAsync()&lt;/code&gt; one exposed by the &lt;code&gt;Geolocator&lt;/code&gt; class) by adding the &lt;code&gt;await&lt;/code&gt; prefix.&lt;/p&gt;

&lt;p&gt;Once we have completed the work and obtained the result we want to return, we pass it to the &lt;code&gt;Resolve()&lt;/code&gt; method exposed by the &lt;code&gt;IReactPromise&lt;/code&gt; parameter. The method expects a value whose type is equal to &lt;code&gt;T&lt;/code&gt;: in my case, for example, we would get an error if we tried to pass anything but a string.&lt;/p&gt;

&lt;p&gt;In case something goes wrong, instead, we can use the &lt;code&gt;Reject()&lt;/code&gt; method to surface the error to the React Native application, by creating a new &lt;code&gt;ReactError&lt;/code&gt; object. You can customize it with different properties, like &lt;code&gt;Code&lt;/code&gt;, &lt;code&gt;Message&lt;/code&gt; and &lt;code&gt;UserInfo&lt;/code&gt;. In my case, I just want to surface the whole exception, so I simply set the &lt;code&gt;Exception&lt;/code&gt; property to the exception captured by my &lt;code&gt;try / catch&lt;/code&gt; block.&lt;/p&gt;

&lt;p&gt;That's it! Now we can easily define a function in our React Native code that, by using the &lt;code&gt;NativeModules&lt;/code&gt; API and a Promise, can invoke the C# method we have just created, as in the following sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getCoordinatesWithPromise = async () =&amp;gt; {
  var coordinates = await NativeModules.GeolocationModule.getCoordinatesWithPromise();
  console.log(coordinates);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we're using a Promise, the syntax is the same we have seen in the previous sample. We mark the function with the &lt;code&gt;async&lt;/code&gt; keyword and call the &lt;code&gt;getCoordinatesWithPromise()&lt;/code&gt; method with the &lt;code&gt;await&lt;/code&gt; prefix. Thanks to the Promise we can be sure that, when we invoke the &lt;code&gt;console.log()&lt;/code&gt; function, the &lt;code&gt;coordinates&lt;/code&gt; variable has been properly set. The native module is exposed through the &lt;code&gt;NativeModules&lt;/code&gt; APIs as the &lt;code&gt;GeolocationModule&lt;/code&gt; object, which is the name of the C# class we have created (if you remember, we marked it with the &lt;code&gt;[ReactModule]&lt;/code&gt; attribute).&lt;/p&gt;
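
&lt;p&gt;One detail worth adding to the sample above: since the C# side can reject the promise via the &lt;code&gt;Reject()&lt;/code&gt; method, it's a good idea to wrap the &lt;code&gt;await&lt;/code&gt; call in a &lt;code&gt;try / catch&lt;/code&gt; block on the JavaScript side too. The following is just a minimal sketch of mine; the &lt;code&gt;getCoordinates&lt;/code&gt; parameter is a stand-in for &lt;code&gt;NativeModules.GeolocationModule.getCoordinatesWithPromise&lt;/code&gt;, passed in only to keep the sample self-contained:&lt;/p&gt;

```javascript
// Minimal sketch: consuming a promise-based native method with error
// handling. getCoordinates stands in for the native module method.
async function logCoordinates(getCoordinates) {
  try {
    const coordinates = await getCoordinates();
    console.log(coordinates);
    return coordinates;
  } catch (error) {
    // This branch runs when the C# side calls promise.Reject()
    console.log('Unable to get coordinates: ' + error.message);
    return null;
  }
}
```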

&lt;h3&gt;
  
  
  Use callbacks
&lt;/h3&gt;

&lt;p&gt;I much prefer the syntax offered by Promises but if, by any chance, you prefer to use callbacks, you're covered too! In fact, you can use a slightly different syntax, based on the &lt;code&gt;Action&amp;lt;T&amp;gt;&lt;/code&gt; delegate, to expose your native C# method through callbacks. Let's see the implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.ReactNative.Managed;
using System;
using Windows.Devices.Geolocation;

namespace GeolocationModule
{
    [ReactModule]
    class GeolocationModule
    {
        [ReactMethod("getCoordinatesWithCallback")]
        public async void GetCoordinatesWithCallback(Action&amp;lt;string&amp;gt; resolve, Action&amp;lt;string&amp;gt; reject)
        {
            try
            {
                Geolocator geolocator = new Geolocator();
                var position = await geolocator.GetGeopositionAsync();

                string result = $"Latitude: {position.Coordinate.Point.Position.Latitude} - Longitude: {position.Coordinate.Point.Position.Longitude}";

                resolve(result);
            }
            catch (Exception e)
            {
                reject(e.Message);
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, as parameters of the method, we are passing two objects of type &lt;code&gt;Action&amp;lt;T&amp;gt;&lt;/code&gt;. The first one is used when the method completes successfully, the second one when something goes wrong. Here, &lt;code&gt;T&lt;/code&gt; is a &lt;code&gt;string&lt;/code&gt; in both cases: in case of success, we want to return the usual string with the full coordinates; in case of failure, we want to return the message of the exception.&lt;/p&gt;

&lt;p&gt;The rest of the code is similar to the one we have written before. The only difference is that, when we have obtained our result, we pass it to the &lt;code&gt;resolve()&lt;/code&gt; function; when something goes wrong, instead, we call the &lt;code&gt;reject()&lt;/code&gt; function. The main difference compared to the previous approach is that &lt;code&gt;Action&amp;lt;T&amp;gt;&lt;/code&gt; isn't a structured object like the &lt;code&gt;IReactPromise&amp;lt;T&amp;gt;&lt;/code&gt; interface. As such, it doesn't support passing the full exception; in this case, we choose to pass only the &lt;code&gt;Message&lt;/code&gt; property of the exception.&lt;/p&gt;

&lt;p&gt;That's it from the C# side. Let's move to the JavaScript one. In this case, being a callback, we can no longer use the &lt;code&gt;async&lt;/code&gt; and &lt;code&gt;await&lt;/code&gt; keywords; instead, we need to pass two functions to the method: one is called when the operation is successful, the other when we get an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getCoordinatesWithCallback = () =&amp;gt; {
  NativeModules.GeolocationModule.getCoordinatesWithCallback( (result) =&amp;gt; { console.log(result); }, (error) =&amp;gt; console.log(error));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
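
&lt;p&gt;If you like the callback shape on the native side but still want &lt;code&gt;async&lt;/code&gt; / &lt;code&gt;await&lt;/code&gt; in JavaScript, nothing stops you from wrapping the callback-based method into a Promise yourself. The following helper is just a sketch of mine, not part of the React Native APIs:&lt;/p&gt;

```javascript
// Sketch: turn a native method that ends with (success, error) callbacks
// into a function that returns a Promise, so it can be awaited.
function promisify(callbackMethod) {
  return function () {
    const args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      callbackMethod.apply(null, args.concat([
        function (result) { resolve(result); },
        function (error) { reject(new Error(error)); }
      ]));
    });
  };
}

// Usage (assuming the module from this post):
// const getCoordinates = promisify(
//   NativeModules.GeolocationModule.getCoordinatesWithCallback);
// const coordinates = await getCoordinates();
```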



&lt;h3&gt;
  
  
  Passing parameters
&lt;/h3&gt;

&lt;p&gt;In both scenarios, if we need to pass any parameter from the JavaScript code to the C# one, we can just add it before the &lt;code&gt;IReactPromise&amp;lt;T&amp;gt;&lt;/code&gt; or the &lt;code&gt;Action&amp;lt;T&amp;gt;&lt;/code&gt; parameters. For example, let's say we want to set the desired accuracy of the &lt;code&gt;Geolocator&lt;/code&gt; object with a value in meters, passed from the React Native app. We can just define the method like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ReactMethod("getCoordinatesWithPromise")]
public async void GetCoordinatesWithPromise(uint meters, IReactPromise&amp;lt;string&amp;gt; promise)
{
    try
    {
        Geolocator geolocator = new Geolocator();
        geolocator.DesiredAccuracyInMeters = meters;

        var position = await geolocator.GetGeopositionAsync();

        string result = $"Latitude: {position.Coordinate.Point.Position.Latitude} - Longitude: {position.Coordinate.Point.Position.Longitude}";

        promise.Resolve(result);
    }
    catch (Exception e)
    {
        promise.Reject(new ReactError { Exception = e });
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, from JavaScript, we just need to pass the value of the parameter when we invoke the &lt;code&gt;getCoordinatesWithPromise()&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getCoordinatesWithPromise = async () =&amp;gt; {
  var coordinates = await NativeModules.GeolocationModule.getCoordinatesWithPromise(15);
  console.log(coordinates);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, you can use the same pattern if you prefer to leverage the callback approach.&lt;/p&gt;
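
&lt;p&gt;For completeness, here's how the callback variant with the extra parameter could look on the JavaScript side. This is a hypothetical sketch: the module reference is passed in as a parameter only to keep the sample self-contained, while in a real app it would be &lt;code&gt;NativeModules.GeolocationModule&lt;/code&gt;. As in the C# signature, the &lt;code&gt;meters&lt;/code&gt; value comes first, followed by the two callbacks:&lt;/p&gt;

```javascript
// Hypothetical sketch: invoking the callback variant with a parameter.
// geolocationModule stands in for NativeModules.GeolocationModule.
function getCoordinatesWithCallback(geolocationModule, meters) {
  geolocationModule.getCoordinatesWithCallback(
    meters,
    function (result) { console.log(result); },
    function (error) { console.log(error); });
}
```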

&lt;h3&gt;
  
  
  Wrapping up
&lt;/h3&gt;

&lt;p&gt;On this blog &lt;a href="https://techcommunity.microsoft.com/t5/windows-dev-appconsult/building-a-react-native-module-for-windows/ba-p/1067893"&gt;we have already learned&lt;/a&gt; how to build native C# modules for React Native so that, as a developer, you can access native Windows APIs from JavaScript. In that sample, however, we leveraged only synchronous APIs, while in reality most of the Windows 10 APIs are asynchronous. Unfortunately, as soon as you start using them, you will face errors and crashes, since React Native isn't able to properly handle the traditional async / await pattern based on the &lt;code&gt;Task&lt;/code&gt; object. Thanks to the React Native team, I was pointed in the right direction: leverage the &lt;code&gt;IReactPromise&lt;/code&gt; interface (in case you want to enable asynchronous APIs with promises) or the &lt;code&gt;Action&lt;/code&gt; delegate (in case you want to leverage the more traditional callback approach).&lt;/p&gt;

&lt;p&gt;Regardless of the approach you prefer, you'll be able to achieve your goal of leveraging asynchronous Windows 10 APIs from your React Native application running on Windows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2u4FWeQp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197301i6D55AC3AC86BD3EC/image-size/large%3Fv%3D1.0%26px%3D999" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2u4FWeQp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/197301i6D55AC3AC86BD3EC/image-size/large%3Fv%3D1.0%26px%3D999" alt="FinalApp.png" title="FinalApp.png" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
