<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arafat Tehsin</title>
    <description>The latest articles on DEV Community by Arafat Tehsin (@arafattehsin).</description>
    <link>https://dev.to/arafattehsin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F205155%2F41657837-c7b0-402c-9111-3c0614ed72c6.jpeg</url>
      <title>DEV Community: Arafat Tehsin</title>
      <link>https://dev.to/arafattehsin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arafattehsin"/>
    <language>en</language>
    <item>
      <title>ParkingGPT - Your AI Parking Copilot</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Sat, 13 Jan 2024 23:41:33 +0000</pubDate>
      <link>https://dev.to/arafattehsin/parkinggpt-your-ai-parking-copilot-2m6e</link>
      <guid>https://dev.to/arafattehsin/parkinggpt-your-ai-parking-copilot-2m6e</guid>
      <description>&lt;p&gt;About 10 months (March, 2023) ago, I got a mention in a &lt;a href="https://twitter.com/simonwaight/status/1630787390393704448" rel="noopener noreferrer"&gt;tweet&lt;/a&gt; from one of my good friends and our local community leader, &lt;a href="https://twitter.com/simonwaight" rel="noopener noreferrer"&gt;Simon Waight&lt;/a&gt;. It was about solving a complex parking problem with Computer Vision. I bookmarked this interesting challenge and made it as a part of my &lt;strong&gt;want-to-do&lt;/strong&gt; list. After a month, I picked this up and started to think as how I would like to solve this problem. I faced several problems which I have mentioned later in this blog post but finally managed to create this small yet useful cross-platform app (&lt;em&gt;or copilot or gpt or whatever you want to call it&lt;/em&gt;) for everyone living in Australia. While there's a big need of improvements and additional features, it is live and opensource. You may consider it as &lt;em&gt;a new year gift from me&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/simonwaight/status/1630787390393704448" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2024%2F01%2FTweet.png" alt="Tweet by Simon Waight"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Learning from my failures&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier, it was just a tweet that triggered this idea. However, there were many hiccups in its development. Someone rightly said that success never comes easy. As the world goes through a rapid change in how we develop, use and interact with technology, picking up such challenges can be daunting, time-consuming and hard. As soon as I picked it up, I opted for the latest and finest Computer Vision technology of the time: &lt;a href="https://azure.microsoft.com/blog/announcing-a-renaissance-in-computer-vision-ai-with-microsofts-florence-foundation-model/?WT.mc_id=AI-MVP-5003464" rel="noopener noreferrer"&gt;Project Florence&lt;/a&gt; by Microsoft, a large foundation model for vision. I chose it because I wanted to use its &lt;a href="https://portal.vision.cognitive.azure.com/demo/dense-captioning" rel="noopener noreferrer"&gt;dense captions&lt;/a&gt; capability.&lt;/p&gt;

&lt;p&gt;However, after trying out different techniques, I failed: it couldn't do proper OCR of the parking signs. Later on, one of the product team members suggested I use Form Recognizer (now called &lt;a href="https://azure.microsoft.com/products/ai-services/ai-document-intelligence?WT.mc_id=AI-MVP-5003464" rel="noopener noreferrer"&gt;Azure AI Document Intelligence&lt;/a&gt;). I tried that too, but didn't have any success either: the extracted text was not correct all the time and sometimes came out garbled. To combat this, I took a combined approach: Form Recognizer for OCR (with its occasionally gibberish results), followed by GPT-3 to correct the text. This worked for a few signs but failed miserably for others; the success rate was about 50%. I wanted something robust that would solve the problem without training on a custom dataset.&lt;/p&gt;

&lt;p&gt;Finally, the time came when I witnessed the launch of OpenAI's GPT-4 with Vision, followed by support for the same in the Azure OpenAI Service. It felt like my wish had come true, but then I had to wait because I had so many other things planned, especially the year-end tasks and the family holidays. 🎈&lt;/p&gt;

&lt;p&gt;Considering all of this, I set aside a dedicated slot in my personal time to work on it, as I really wanted to finish it off before work began in January 2024. So, here I am; I think I achieved what I planned. I created &lt;strong&gt;ParkingGPT&lt;/strong&gt; to help all of my fellow Aussies decide whether they can park or not. It's at a very early stage, and so far I have only tested it with parking signs within New South Wales. However, if you are based in another state, please feel free to give it a shot and let me know your feedback. 🙏&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2024%2F01%2FAI-Parking-Copilot.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2024%2F01%2FAI-Parking-Copilot.jpg" alt="AI Parking Copilot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;ParkingGPT&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/arafattehsin/ParkingGPT" rel="noopener noreferrer"&gt;ParkingGPT&lt;/a&gt; enables you to decide whether you want to park or not, all using the power of Vision AI &amp;amp; LLM. This uses the newly released &lt;a href="https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/gpt-4-turbo-with-vision-on-azure-openai-service/ba-p/3979933?WT.mc_id=AI-MVP-5003464" rel="noopener noreferrer"&gt;GPT-4 Turbo with Vision&lt;/a&gt;. With ParkingGPT, you can easily identify parking signs and make informed decisions about where to park. Whether you’re a driver looking for a convenient parking spot or a parking enforcement officer looking to enforce parking regulations, ParkingGPT is your AI parking copilot.&lt;/p&gt;

&lt;p&gt;The app has just two main features: it lets you either capture a photo of a parking sign with your camera or pick one from the photo gallery.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2024%2F01%2Fhomescreen.png" alt="Home Screen of ParkingGPT"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2024%2F01%2Fpick.png" alt="Pick a photo screen of ParkingGPT"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2024%2F01%2Fresult.png" alt="Parking Result of ParkingGPT"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;Technical Implementation&lt;/h3&gt;

&lt;p&gt;It is quite straightforward, as you can see from the repo. I have tried my best to follow the best practices for .NET MAUI and for app development in general. There may be certain negative cases I have missed, such as writing an effective prompt to ignore non-parking-sign images, but overall it works well.&lt;/p&gt;

&lt;p&gt;The app takes an image (either from the camera or from the gallery) and converts it into a base64-encoded string. This is then sent to the OpenAI APIs for processing, and the response is returned. The app works with both Azure OpenAI and OpenAI; all you need are the API endpoint and key for it to run. The key stays in your app's own storage, which means you do not have to enter it again and again.&lt;/p&gt;
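&lt;p&gt;The app itself is written in C# with .NET MAUI, but the flow above is easy to sketch. Below is a minimal, illustrative Python sketch (the endpoint, model name and prompt are my assumptions for illustration, not the app's actual values) of encoding the photo as base64 and wrapping it in a GPT-4 Vision chat payload.&lt;/p&gt;

```python
import base64

# Hypothetical values for illustration only; the real app reads the
# endpoint and key from its own storage.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_vision_request(image_bytes, question="Can I park here right now?"):
    """Encode the sign photo as base64 and wrap it in a vision chat payload."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
                    },
                ],
            }
        ],
        "max_tokens": 300,
    }

# The payload would then be POSTed with the stored key, e.g.:
# requests.post(API_URL, json=payload, headers={"Authorization": f"Bearer {key}"})
```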


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.youtube.com/shorts/s_H4MVvEfCs?feature=share" rel="noopener noreferrer"&gt;
      youtube.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;Some common FAQs&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Is it published on the Google Play Store?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No, but for now you can request the .apk package by commenting on this post and sideload it onto your Android device. If there's enough interest, I may publish it to the store.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is it published on the Apple App Store?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. I do not own a MacBook, which prevents me from creating a package (or .ipa) for this app. The developer program subscription is also too expensive for a hobbyist, but I will see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does it work seamlessly with multiple parking signs?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have tried it with two or three signs and it works fine. However, I am not sure how it would react to more than that. &lt;em&gt;I am still testing it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is it smart enough to make decisions based on both no-parking signs and parking signs?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not for now. I will look into this in the future.&lt;/p&gt;




&lt;p&gt;If you are a developer, the code is there for you as open source, so you can see how I did it. As part of my mission to empower every developer with the necessary applied AI skills, I have done this in my personal time. Please do not forget to star ⭐ the repo if you like it, and share it with your friends so others can get an idea of how to create small but useful utilities like ParkingGPT.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>llm</category>
      <category>azure</category>
      <category>ai</category>
    </item>
    <item>
      <title>Best practices for building collaborative UX with Human-AI partnership</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Thu, 30 Nov 2023 07:04:58 +0000</pubDate>
      <link>https://dev.to/arafattehsin/best-practices-for-building-collaborative-ux-with-human-ai-partnership-hfd</link>
      <guid>https://dev.to/arafattehsin/best-practices-for-building-collaborative-ux-with-human-ai-partnership-hfd</guid>
      <description>&lt;p&gt;In today’s digital age, it is not easy for any business to work without the application of artificial intelligence (AI). Especially for those in retail or consumer market, AI has now become the integral aspect of their growth. If it is rightly (read: responsibly) used then it ensures the improvement in overall customer experience and helps drive sales. I recently completed a &lt;a href="https://www.linkedin.com/learning/ux-for-ai-design-practices-for-ai-developers/designing-for-ai" rel="noopener noreferrer"&gt;course&lt;/a&gt; by &lt;a href="https://www.linkedin.com/in/johnmaeda" rel="noopener noreferrer"&gt;John Maeda&lt;/a&gt;, VP of Design and AI at Microsoft where he discusses a few essential principles for building remarkable user experience (UX) with the &lt;a href="https://news.microsoft.com/source/features/ai/microsoft-outlines-framework-for-building-ai-apps-and-copilots-expands-ai-plugin-ecosystem/" rel="noopener noreferrer"&gt;AI copilot stack&lt;/a&gt; in real-world applications. These principles focus on creating a collaborative user experience rather than a pretty UX.&lt;/p&gt;

&lt;h1&gt;Collaborative UX&lt;/h1&gt;

&lt;p&gt;Collaborative UX is a concept that recognises the importance of collaboration between humans and AI in the design and development process. When building AI copilots, it is crucial to understand that the user is the pilot and the AI is the copilot. The success of the copilot depends on the skills and expertise of the pilot (human). Collaborative UX aims to establish a strong partnership between humans and AI, ensuring that the AI copilot complements the human pilot rather than replacing them.&lt;/p&gt;

&lt;p&gt;There are several collaborative UX patterns that help establish and maintain the pilot-copilot relationship. I have summarised a few of them below:&lt;/p&gt;

&lt;h2&gt;AI Notice Pattern&lt;/h2&gt;

&lt;p&gt;One collaborative UX design pattern, which serves as a reminder to users that the information they are interacting with is generated by AI, is referred to as the AI Notice Pattern. This pattern encourages users to critically assess the AI’s output rather than blindly trusting it. It acknowledges the non-deterministic nature of AI, emphasising that while AI can offer valuable insights, it is not foolproof and may occasionally produce inaccurate or incomplete results. Highlighting the presence of AI in the user experience fosters a collaborative relationship between humans and AI and empowers users to make informed decisions.&lt;/p&gt;

&lt;p&gt;Implementing the AI Notice Pattern involves using visual cues or textual indicators that clearly state the information is AI-generated. This can be done through notifications, disclaimers or prompts which encourage users to question or verify the AI’s output. The pattern aligns with the broader concept of collaborative UX, stressing the importance of human-AI collaboration and active user involvement in the decision-making process. By incorporating this pattern into AI-powered applications, designers can create more responsible and user-centric experiences.&lt;/p&gt;
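&lt;p&gt;As a toy illustration (my own sketch, not from the course), the simplest form of the pattern is appending a standard notice to every copilot response:&lt;/p&gt;

```python
# A toy sketch of the AI Notice Pattern: every AI-generated response is
# delivered with a visible disclaimer so users know to verify it.
DEFAULT_NOTICE = "AI-generated content may be incorrect. Please verify before acting on it."

def with_ai_notice(response_text, notice=DEFAULT_NOTICE):
    """Attach the AI notice beneath a copilot response."""
    return f"{response_text}\n\n{notice}"
```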

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2023%2F10%2Fainotice.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2023%2F10%2Fainotice.png" alt="AI Notice Pattern"&gt;&lt;/a&gt; &lt;em&gt;AI Notice Pattern&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Suggested Prompts Pattern&lt;/h2&gt;

&lt;p&gt;Another collaborative UX pattern designed to improve the relationship between users and AI copilots is the Suggested Prompts Pattern. It presents users with suggested prompts or options powered by the copilot's capabilities. By offering specific prompts, the copilot can suggest actions or make recommendations based on its knowledge and abilities. These prompts can suggest alternative approaches to problem solving or specific actions to achieve a desired outcome.&lt;/p&gt;

&lt;p&gt;Implementing the Suggested Prompts Pattern comprises both understanding the business workflow and designing user interfaces that integrate the suggested prompts. This can be done through visual prompts, buttons or interactive elements that present users with the available options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2023%2F10%2Fsuggested-prompts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2023%2F10%2Fsuggested-prompts.png" alt="Suggested Prompts Pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is important for the prompts to be concise, relevant and contextually aligned with the user’s goal. This collaborative UX pattern can improve the efficiency and effectiveness of user interactions with the copilot and encourage users to explore different possibilities and consider alternative options. This way, users benefit from the copilot’s capabilities through suggested prompts while retaining control and decision-making authority.&lt;/p&gt;

&lt;blockquote&gt;If you want to learn about UX for Copilots in depth then you do not want to miss &lt;a href="https://build.microsoft.com/en-US/sessions/bac1bf0b-7c70-4366-bb86-0abd11b009bc?source=sessions" rel="noopener noreferrer"&gt;this session&lt;/a&gt; of Microsoft Build, 2023. All the images of this post are also taken from this session.&lt;/blockquote&gt;

&lt;h2&gt;Feedback Opportunities Pattern&lt;/h2&gt;

&lt;p&gt;The last UX pattern to discuss today is about feedback: feedback on the AI's performance, how well it serves the user’s goal, and the quality of its output. It gives users an opportunity to contribute to the improvement of the copilot’s overall capabilities, and is therefore known as the Feedback Opportunities Pattern. These contributions can vary from insights to corrections to suggestions and more. Thus, a feedback loop is created, facilitating continuous learning and improvement. Feedback mechanisms such as a review form, issue reporting or a rating system can be integrated within the user interface of the AI-powered application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2023%2F10%2Ffeedback.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Farafattehsin.com%2Fwp-content%2Fuploads%2F2023%2F10%2Ffeedback.png" alt="Feedback Pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Continuous feedback is valuable for understanding the copilot’s strengths and weaknesses. It is essential input for the underlying technology and algorithms, and can be used to address any shortcomings. Feedback opportunities not only empower users to contribute to the copilot’s development but also foster a sense of ownership and engagement. By incorporating user feedback into the development process, user experience designers (or developers, in my opinion) can enhance the copilot’s capabilities, resulting in a more refined and effective user experience.&lt;/p&gt;

&lt;h2&gt;The Emotional Iron Triangle&lt;/h2&gt;

&lt;p&gt;In addition to these patterns, it is also important that developers consider the "Iron Triangle" of quality, speed and cost when building AI-first apps. This triangle can also be topped up with Emotion as a fourth dimension, making it a tetrahedron of factors to manage. As developers create AI-powered applications, they need to be mindful of managing users' emotions, ensuring they don't over-humanise the AI, and maintaining a sense of responsibility in this era of AI.&lt;/p&gt;

&lt;blockquote&gt;Embrace the essence of humanity but beware of over-humanising. For in the quest to understand others, we must not lose sight of their unique individuality&lt;/blockquote&gt;

&lt;p&gt;The copilot stack, with its non-deterministic nature, brings new challenges and opportunities. Developers must learn to balance quality with speed and cost while also considering user emotions. One has to become a new kind of developer, one who understands both the technical aspects of AI and the emotional aspects of UX design.&lt;/p&gt;

&lt;p&gt;In a nutshell, creating successful AI-powered applications depends on creating a collaborative user experience that emphasises the partnership between humans and AI. In addition to implementing the design patterns discussed above, developers can consider the Iron Triangle together with the user's emotions to create a strong pilot-copilot relationship. With this approach, AI-first apps become more engaging, effective and responsible, driving growth in a rapidly changing digital landscape.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This blog post was originally published at the &lt;a href="https://learn.microsoft.com/en-us/community/content/best-practices-ai-ux" rel="noopener noreferrer"&gt;Microsoft Learn Community&lt;/a&gt; blog.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ux</category>
      <category>ai</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Build Sentiment Classifier, Translator and a Flight Tracker with the Generative AI</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Mon, 24 Jul 2023 00:07:49 +0000</pubDate>
      <link>https://dev.to/arafattehsin/build-sentiment-classifier-translator-and-a-flight-tracker-with-the-generative-ai-12g5</link>
      <guid>https://dev.to/arafattehsin/build-sentiment-classifier-translator-and-a-flight-tracker-with-the-generative-ai-12g5</guid>
      <description>&lt;p&gt;Last week I received an email from the Microsoft Reactor team informing me that I had 360 RSVPs on the day of &lt;a href="https://arafattehsin.com/thegem_news/generative-ai-for-developers/"&gt;my talk&lt;/a&gt;. While I was super excited, I was a bit nervous because I wanted to ensure that I deliver the content responsibly. If you value your audience's time, ensure your content is engaging, informative and well-organized. During my talk, I answered around 25 questions therefore I thought that I shouldn't just deliver the session and upload my slide deck and demos. Instead, I should add some more value to it and make it more inclusive for most of the developers. As a result, I created not just one, but two Polyglot Notebooks (fully interoperable with Jupyter Notebooks) and decided to write about it.&lt;/p&gt;

&lt;p&gt;At first, I considered developing simple console applications for the demos. Although these console apps would have sufficed, the getting started guide would have been somewhat cluttered, requiring multiple apps. Therefore, I decided to use a Polyglot Notebook, allowing us to consolidate everything into a single file.&lt;/p&gt;

&lt;blockquote&gt;The biggest benefit of Polyglot Notebooks is its interoperability with Jupyter Notebooks &amp;amp; multi-language .NET support.&lt;/blockquote&gt;

&lt;h1&gt;Introducing Generative AI Notebooks&lt;/h1&gt;

&lt;p&gt;As a long-time .NET developer, I always try my best to bring tech like &lt;a href="https://arafattehsin.com/reinforcement-learning-in-apps-bots-websites-with-azure-personalizer-part-2/"&gt;Reinforcement Learning&lt;/a&gt;, &lt;a href="https://arafattehsin.com/beyond-sentiment-analysis-object-detection-with-ml-net/"&gt;Computer Vision&lt;/a&gt; and Generative AI to the developer community using familiar languages and frameworks. Despite my personal interest in .NET, I did not want to ignore the fact that many developers are also (and rightly so) using Python within this space. Therefore, I decided to create &lt;a href="https://github.com/arafattehsin/generative-ai/tree/main/notebooks/azure%20openai"&gt;two notebooks&lt;/a&gt;, one for &lt;a href="https://github.com/arafattehsin/generative-ai/tree/main/notebooks/azure%20openai/dotnet"&gt;C# with .NET Interactive&lt;/a&gt; and the other for &lt;a href="https://github.com/arafattehsin/generative-ai/tree/main/notebooks/azure%20openai/python"&gt;Python&lt;/a&gt;. Both notebooks are identical in terms of the topics covered, so it's up to you which one you'd like to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ScdcLWNB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/genai-notebook.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ScdcLWNB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/genai-notebook.png" alt="Generative AI Notebook" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've created some boilerplate code for a few use-cases, which are the topic of this blog post. Let's dive into each of them.&lt;/p&gt;

&lt;h2&gt;Sentiment Classifier&lt;/h2&gt;

&lt;h4&gt;&lt;strong&gt;Potential Use-cases: Customer Feedback Analysis, Client Satisfaction Index, Call Centre Real-time Insights&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Traditionally, machine learning models are trained on large datasets for better accuracy, along with the right choice of algorithm and hyperparameters. For example, if you want to build an engine that classifies the sentiment of a sentence, you may want to train it on at least 1,000 records with a mixture of negative and positive sentiments. This way, the model may achieve around 70-80% accuracy. I've done it in the past with my &lt;a href="https://www.nuget.org/packages/SentimentAnalyzer/"&gt;SentimentAnalyzer&lt;/a&gt; package, which has already been used 24k times by developers. However, it was a bit of a task for me to train the model and then integrate it within my applications.&lt;/p&gt;

&lt;p&gt;Large language models have changed all this. Now, with the magic of &lt;a href="https://youtu.be/xVYAUwAOSQg?t=1709"&gt;few-shot learning&lt;/a&gt;, you can easily build a sentiment classifier. This can be done using the &lt;a href="https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/completions?WT.mc_id=AI-MVP-5003464"&gt;Completion API with GPT-3&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Below is the syntax on how I did it. You can try it using the notebook.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vscode.dev/github/arafattehsin/generative-ai/blob/main/notebooks/azure%20openai/dotnet/gen-ai-for-devs.ipynb#C11:L4"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sGw3KlGx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/sentiment-classifier.png" alt="Sentiment Classifier using Azure OpenAI Service" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Remember to initialise a .NET or Python kernel before you run the subsequent steps of the notebook(s)&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Summarisation and Translation&lt;/h2&gt;

&lt;h4&gt;&lt;strong&gt;Potential Use-cases: Summary of long emails, translating from non-English to English and vice versa&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Imagine you work in a hospitality services business and you ask your customers to fill in a survey at the end of their service. Your customers include both native English speakers and speakers of other languages. You often receive around 100-200 words of feedback per response and, due to the volume, it gets tricky to go through each of them. This is where you want to summarise the feedback with the appropriate sentiment and also get a proper translation of it.&lt;/p&gt;

&lt;p&gt;The example below is a glimpse of how you'd do it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vscode.dev/github/arafattehsin/generative-ai/blob/main/notebooks/azure%20openai/dotnet/gen-ai-for-devs.ipynb#C15:L4"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_dlG9WN---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/summary-translation.png" alt="Summary and Translation with GPT-3.5" width="800" height="723"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Flight Tracker&lt;/h2&gt;

&lt;h4&gt;&lt;strong&gt;Potential Use-cases: Flight tracking, postage tracking, food delivery &lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;In 2020, I wrote two posts about how to build a Flight Tracker using Language Understanding Intelligent Service (aka LUIS), which has since been upgraded to Language Understanding in Azure Language Studio. The reason I brought up a similar use-case here is to show you how effective the power of generative AI is for building a much better version of it with less code. 💯&lt;/p&gt;

&lt;p&gt;Before we get there, imagine how you'd build it yourself. You would need a source and destination, or a flight number. Without either the locations or the flight number, you may not be able to track down the flight. This is the &lt;strong&gt;minimum&lt;/strong&gt; required information for even the simplest hypothetical use-case.&lt;/p&gt;

&lt;p&gt;Having worked with Conversational AI since 2017 and pushed some useful solutions to production since then, I can confirm, and you will also appreciate, that this is not an easy problem. Your customer (the actual user of this bot) is a human, and humans are not naturally accustomed to following a pattern or sequence of questions all the time. That's why there's a good chance that they may provide only a flight number, a destination or a source.&lt;/p&gt;

&lt;p&gt;To solve this problem with Generative AI (Azure OpenAI Service), we just have to provide a good prompt and a few few-shot examples, and that's it. You will see that it works quite well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Protip: You may get a response returned in JSON or any format you'd like for your use-case. &lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is the syntax on how I did it. You can try it using the notebook.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vscode.dev/github/arafattehsin/generative-ai/blob/main/notebooks/azure%20openai/dotnet/gen-ai-for-devs.ipynb#C13:L3"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1kYRakbI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/flight-tracker.png" alt="Flight Tracker with the Generative AI" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;DALL-E (Bonus 🎁)&lt;/h2&gt;

&lt;p&gt;If you look at the last section of the notebook, you will see that I have also created boilerplate code for you to work with the DALL-E model, which is still in preview. It creates an image out of a natural-language utterance. Try it out yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rq2cjQQc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/batman-dall-e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rq2cjQQc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/07/batman-dall-e.png" alt="Batman working from home. Image generated by DALL-E" width="533" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The power of generative AI has revolutionised the way we approach problem-solving in various domains. By leveraging these Generative AI notebooks and advancements in AI technology such as the Azure OpenAI Service (GPT-3, GPT-4 and so on), developers can now create more efficient and effective solutions with less code and effort. This blog post highlights the potential of generative AI in use-cases such as sentiment classification, summarisation and translation, and flight tracking. As AI continues to evolve, we can expect to see even more innovative applications and streamlined development processes, making it easier for developers to deliver cutting-edge solutions to their users. Embracing these advancements will not only enhance our ability to solve complex problems but also foster a more inclusive and accessible environment for developers across various programming languages and platforms.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>azure</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Break out of your comfort zone, and start learning new skills</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Wed, 12 Jul 2023 05:08:12 +0000</pubDate>
      <link>https://dev.to/arafattehsin/break-out-of-your-comfort-zone-and-start-learning-new-skills-5cpl</link>
      <guid>https://dev.to/arafattehsin/break-out-of-your-comfort-zone-and-start-learning-new-skills-5cpl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DSJkSDlG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/109cbhnyz8z0mkudfsqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DSJkSDlG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/109cbhnyz8z0mkudfsqi.png" alt="Image description" width="677" height="480"&gt;&lt;/a&gt;When we’re kids, we explore our interests and abilities, adopting what we like and leaving what we don’t. In this way, we create our comfort zone, where we find less stress, increased efficiency, and improved confidence. But given today’s fast-paced world and ever-growing opportunities, staying in our comfort zone can limit our perspectives and lead to stagnation, ultimately resulting in a lack of personal and professional growth.&lt;/p&gt;

&lt;p&gt;It’s important to be willing to stretch and learn new skills, especially in the tech community. There are countless examples of people who stepped outside of their comfort zone and achieved great success, including Microsoft Chief Executive Officer (CEO) Satya Nadella. I remember reading his book, Hit Refresh, in which he discussed leading different businesses, and breaking out of his comfort zone before becoming Microsoft CEO.&lt;/p&gt;

&lt;p&gt;The question I want to address in this post is, How do you break out of your comfort zone? It can be challenging—but also rewarding. It’s definitely a process, so I recommend taking things one step at a time. In my career as a Solution Architect, I’ve discovered some useful tips on going beyond my comfort zone, and I’d like to share some of them with you.&lt;/p&gt;

&lt;h2&gt;Identify your fears&lt;/h2&gt;

&lt;p&gt;It’s absolutely natural to be afraid of something unknown to you. So, start by asking yourself: Are you scared of failure, rejection, or the unknown—and why?&lt;/p&gt;

&lt;p&gt;Once you’ve identified your fears and the reasons for them, you can work on overcoming those fears. This process may involve small steps outside of your comfort zone, and these don’t need to be perfect on the first try. For example, let’s say you’re a technology expert and you want to teach others about it, but you’re afraid of addressing a large audience. A great way to counter this is to start by teaching a small group of people. You can also join a Microsoft &lt;a href="https://techcommunity.microsoft.com/t5/custom/page/page-id/learn?WT.mc_id=AI-MVP-5003464"&gt;Learning Room&lt;/a&gt; and co-present there with other experts.&lt;/p&gt;

&lt;h2&gt;Set clear goals&lt;/h2&gt;

&lt;p&gt;Clear goals can provide you with a solid sense of purpose, motivating you to go outside of your comfort zone and take on new challenges. Although there are different frameworks for setting up goals, I recommend that you follow &lt;strong&gt;SMART&lt;/strong&gt;—that is, your goals should be:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Specific&lt;/strong&gt;. Clearly define what you want to achieve.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Measurable&lt;/strong&gt;. Set milestones to track your progress and to evaluate when you have achieved them. You can even divide individual goals into smaller milestones, making them less daunting.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Achievable&lt;/strong&gt;. Be sure your goals are realistic and attainable.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Relevant&lt;/strong&gt;. Build your goals on professional or personal aspirations to help maintain your motivation.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Time bound&lt;/strong&gt;. Set deadlines by which to achieve these goals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if you’re a Microsoft Power Platform expert who wants to get out of your comfort zone and stretch your abilities, your SMART goal could be, I want to learn AI and find out how to add value to my business applications with it. A logical next step would be to start with a &lt;a href="https://learn.microsoft.com/en-us/training/browse/?WT.mc_id=AI-MVP-5003464"&gt;learning path&lt;/a&gt; on Microsoft Learn, such as &lt;a href="https://learn.microsoft.com/training/paths/bring-ai/?wt.mc_id=community_expert_blog_wwl&amp;amp;WT.mc_id=AI-MVP-5003464"&gt;Bring AI to your business with AI Builder&lt;/a&gt; or &lt;a href="https://learn.microsoft.com/training/paths/improve-business-performance-ai-builder/?wt.mc_id=community_expert_blog_wwl&amp;amp;WT.mc_id=AI-MVP-5003464"&gt;Improve business performance with AI Builder&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Embrace failure&lt;/h2&gt;

&lt;p&gt;Everyone experiences failure at one time or another, and this is to be expected when you venture out of your comfort zone. This is because you’re taking risks and trying out new things. Instead of avoiding failure, embrace it as part of the learning process. In fact, failure can give you valuable feedback and help you identify areas for improvement. Use it as a steppingstone toward great success, as it helps you build resilience and develop a growth mindset.&lt;/p&gt;

&lt;h2&gt;Surround yourself with supportive people&lt;/h2&gt;

&lt;p&gt;As a human, it’s important to have a solid support system around you—family, friends, mentors, colleagues, and others. Moving out of your comfort zone can result in many challenges, but people in your support circle can help keep you motivated, boost your confidence, and cheer you on as you overcome obstacles.&lt;/p&gt;

&lt;p&gt;As you set your goals, look for people who can provide you with constructive feedback. Seek advice from those with experience in the area you’re pursuing. One place to connect with peers and experts who are exploring technology is the &lt;a href="https://learn.microsoft.com/training/learn-community?WT.mc_id=AI-MVP-5003464"&gt;Microsoft Learn Community&lt;/a&gt;, where you can engage with others and get guidance on your journey.&lt;/p&gt;

&lt;h2&gt;Break out of your comfort zone&lt;/h2&gt;

&lt;p&gt;Saying goodbye to your comfort zone can be scary, but it’s important to do so—for your personal and professional growth. By identifying your fears, setting clear goals, embracing failure, and surrounding yourself with supportive people, you can set yourself up to achieve great success. Ready to explore new tech skills? &lt;a href="https://learn.microsoft.com/?WT.mc_id=AI-MVP-5003464"&gt;Microsoft Learn&lt;/a&gt; provides you with an opportunity to learn new things every day. Invest your time trying out new technologies, building new skills, and breaking free from your comfort zone. To find out more about how we can learn together, join me in my Learning Room, &lt;a href="https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR-M5cgAqLI9PvkpbzfDADTVUNE9RMjNGRkxUR0xXUFVVWVhFNkdWMVNYUiQlQCN0PWcu&amp;amp;ctx=%7B%22roomname%22:%22AI%20for%20Everyone%20-%20Arafat%20Tehsin%22%7D"&gt;AI for Everyone&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This blog was &lt;a href="https://techcommunity.microsoft.com/t5/microsoft-learn-blog/break-out-of-your-comfort-zone-and-start-learning-new-skills/ba-p/3841176"&gt;originally published&lt;/a&gt; by Arafat Tehsin at the Microsoft Tech Community.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>motivation</category>
      <category>productivity</category>
      <category>career</category>
      <category>learning</category>
    </item>
    <item>
      <title>Unlock the power of OpenAI's GPT with No-code AI Builder</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Tue, 04 Jul 2023 12:00:52 +0000</pubDate>
      <link>https://dev.to/arafattehsin/unlock-the-power-of-openais-gpt-with-no-code-ai-builder-fk3</link>
      <guid>https://dev.to/arafattehsin/unlock-the-power-of-openais-gpt-with-no-code-ai-builder-fk3</guid>
      <description>&lt;p&gt;As you know, Microsoft's &lt;a href="https://learn.microsoft.com/en-us/ai-builder/overview?WT.mc_id=AI-MVP-5003464"&gt;AI Builder&lt;/a&gt; now &lt;a href="https://cloudblogs.microsoft.com/powerplatform/2023/03/06/build-solutions-faster-with-microsoft-power-platform-and-next-generation-ai/"&gt;supports prebuilt Azure OpenAI&lt;/a&gt; models, allowing users to easily integrate powerful AI capabilities into their applications and processes without the need for &lt;b&gt;any &lt;/b&gt;coding or data science expertise. In this post, we will explore the benefits and potential use cases of leveraging these prebuilt models with AI Builder.&lt;/p&gt;

&lt;p&gt;Microsoft's AI Builder simplifies the integration of &lt;a href="https://learn.microsoft.com/azure/cognitive-services/openai/overview?WT.mc_id=AI-MVP-5003464"&gt;Azure OpenAI&lt;/a&gt; models by providing a user-friendly interface for connecting to the OpenAI API. This enables users to access advanced AI capabilities, such as GPT and incorporate them into their applications and workflows with minimal effort. By utilizing prebuilt Azure OpenAI models, users can add powerful AI-driven features to their applications and processes, such as natural language processing, text generation, and more. This can lead to improved efficiency, better decision-making, and more personalized user experiences.&lt;/p&gt;

&lt;h2&gt;Helpful conversational experiences with Power Virtual Agents&lt;/h2&gt;

&lt;p&gt;These latest capabilities of Azure OpenAI in AI Builder are native to Power Automate and Power Apps. This basically means that anything in Power Platform can utilise the capabilities of the GPT models without going to external sources. As a result, I created a Power Virtual Agents (PVA) bot using the &lt;a href="https://arafattehsin.com/power-virtual-agents-get-the-biggest-update-ever/"&gt;unified authoring canvas&lt;/a&gt; which calls a Power Automate flow to get answers from Azure OpenAI's GPT model.&lt;/p&gt;

&lt;blockquote&gt;Recently, I got access to the prebuilt Azure OpenAI models in AI Builder. You may not see this in your environment yet, but you can always sign up for a &lt;a href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2LogRPRiTJDo1Rd8KnmcFRUMzlLTDZVQlJKSzNIWkVCMzE0VDFYVzk2QS4u"&gt;limited preview&lt;/a&gt;.&lt;/blockquote&gt;

&lt;p&gt;In this post, we'll pick a use-case that normally doesn't get covered, yet is both interesting and in demand. We will work on a &lt;strong&gt;Handyman Services&lt;/strong&gt; business and build a bot around its tasks and related information. I've named this bot &lt;strong&gt;HandyBot&lt;/strong&gt; and divided this blog post into two use-cases. The first one is fairly simple, while the other is a bit more governed. Both are useful in my opinion.&lt;/p&gt;

&lt;h3&gt;Power Automate with AI Builder's action&lt;/h3&gt;

&lt;p&gt;Let's create a simple flow that accepts a text input from PVA and then uses AI Builder's latest action to create text with GPT on Azure OpenAI Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XEdrBzx3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/AI-Builder-OpenAI-PowerAutomate.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XEdrBzx3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/AI-Builder-OpenAI-PowerAutomate.png" alt="Power Automate flow with AI Builder's OpenAI action" width="634" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Coming back to our bot, we utilise the power of Copilot (Preview) to create a topic and plug it into the flow we created above. Our topic is simple, but Copilot helped by writing trigger phrases that were on point. Below is a video example (&lt;em&gt;I try not to use heavy GIFs in my blog posts&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Check out the video &lt;a href="https://arafattehsin.com/wp-content/uploads/2023/03/Topic-Creation.webm"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In case you want to test it, just click on the Test button and write your query (just like I did).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bn_vnWj9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Test-OpenAI-PVA-e.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bn_vnWj9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Test-OpenAI-PVA-e.gif" alt="PVA test with OpenAI's GPT native integration with Power Automate" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's how simple it is to integrate GPT without writing a single line of code or connecting to external sources using Custom Connectors.&lt;/p&gt;

&lt;p&gt;One could now argue about this use-case, point at its potential for exploitation or vulnerability, since there's no check and balance on what gets created, and question its practicality. &lt;em&gt;I still think it is practical, but I will take that point for the sake of an argument.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Approvals for the social media posts in Teams&lt;/h2&gt;

&lt;p&gt;Let's suppose you run the marketing department of a business called &lt;strong&gt;Handyman Services.&lt;/strong&gt; Just like any other business, it needs a social presence, so we'll build a Power Virtual Agents bot that takes a topic from the internal staff in Microsoft Teams, generates text with GPT and sends it for approval before showing it to them. Once the department lead approves it, the internal staff member gets a private message from a Power Automate bot with the generated text. &lt;em&gt;Yes, it could have been improved but you get my point, right?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So first of all, we'd build a flow again. This time, it is going to be a bit different (not a lot) from the last one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FcSiGp06--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/SocialHandy-Flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FcSiGp06--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/SocialHandy-Flow.png" alt="Social Media Post Generation with AI Builder's GPT-3" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may notice one obvious hack here: we are using a fixed (or hardcoded) Recipient. Ideally, we should get the email address from the Power Virtual Agents bot and use it here. Although my video walkthrough below shows that I am passing an email address to the flow, there's a &lt;a href="https://powerusers.microsoft.com/t5/General/If-I-add-user-system-variable-in-PVA-Chatbot-is-not-working-in/m-p/2019583"&gt;known bug&lt;/a&gt; in the authentication part which prevents us from performing that operation. That is why we go with the hardcoded email address.&lt;/p&gt;

&lt;p&gt;Now that the flow is ready, let's see how we build the Power Virtual Agents bot. We utilise the Copilot feature again as it helps us set up some trigger phrases quickly; then we just have to tweak a few things to make it work. You can check the video below for everything I did in PVA.&lt;/p&gt;

&lt;p&gt;Check out the video &lt;a href="https://arafattehsin.com/wp-content/uploads/2023/03/SocialMediaPostCreation.webm"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the bot is created, we may have to go to &lt;strong&gt;Settings ➡️ Security, &lt;/strong&gt;choose Authentication and select &lt;strong&gt;Only for Teams. &lt;/strong&gt;Now publish the bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0-HH3Gh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Security-Teams.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0-HH3Gh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Security-Teams.png" alt="Authentication for Microsoft Teams in PVA" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;Internal staff request for social media post&lt;/h3&gt;

&lt;p&gt;You have an internal staff member named Adele, who works in the marketing department. Now that we have the capability to use the GPT model for post generation, Adele asks this question to the &lt;strong&gt;HandyBot.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NJPsngWR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Post-generation-AI.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NJPsngWR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Post-generation-AI.gif" alt="Social media post generation by AI" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the post is requested, your marketing lead receives an approval request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G1DwczWm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Teams-Approval.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G1DwczWm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/Teams-Approval.png" alt="Teams approval for the social media post generated by AI" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the post is approved, the Power Automate bot sends a message back to &lt;strong&gt;Adele&lt;/strong&gt; with the content generated by AI Builder's GPT action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dWiYqZpP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/AI-Generated-Content.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dWiYqZpP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://arafattehsin.com/wp-content/uploads/2023/03/AI-Generated-Content.png" alt="AI Generated content posted by Power Automate bot" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The integration of Power Virtual Agents with AI Builder (using Azure OpenAI's GPT) marks a significant step forward in the world of chatbots and AI-driven customer support. By leveraging the advanced natural language processing capabilities of GPT, businesses can create chatbots that provide more accurate, context-aware, and human-like responses to user queries. This leads to improved customer satisfaction, reduced burden on support staff, and more efficient operations. As we continue to explore the potential of AI and machine learning, it's exciting to see how innovative solutions like this will shape the future of customer support and user experience.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>promptengineering</category>
      <category>powerfuldevs</category>
      <category>azure</category>
    </item>
    <item>
      <title>Create personalized experiences for your apps, bots and websites with Azure Personalizer - Part 3</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Tue, 10 Jan 2023 12:51:14 +0000</pubDate>
      <link>https://dev.to/arafattehsin/create-personalized-experiences-for-your-apps-bots-and-websites-with-azure-personalizer-part-3-i7h</link>
      <guid>https://dev.to/arafattehsin/create-personalized-experiences-for-your-apps-bots-and-websites-with-azure-personalizer-part-3-i7h</guid>
      <description>&lt;p&gt;Personalization in customer service is the practice of tailoring interactions with customers to their individual needs and preferences. This can be achieved through the use of customer data and technology such as machine learning and artificial intelligence.&lt;/p&gt;

&lt;p&gt;One way that personalization can be used in customer service is to provide tailored responses to customer inquiries. For example, a customer service agent can use information about a customer's previous interactions and purchases to provide more personalized and relevant answers to their questions. This can help to improve the customer's overall experience and make them feel valued and respected.&lt;/p&gt;

&lt;p&gt;Another way that personalization can be used in customer service is to provide personalized recommendations and suggestions to customers. For example, a customer service agent can use information about a customer's interests and preferences to recommend products or services that they might be interested in. This can help to increase customer satisfaction and loyalty, as well as boost sales and revenue.&lt;/p&gt;

&lt;p&gt;Personalization can also be used to improve the efficiency of customer service operations. By using customer data and technology, customer service agents can more easily and quickly access the information they need to resolve customer issues. This can help to reduce response times and improve overall customer satisfaction.&lt;/p&gt;

&lt;p&gt;So far, I've written two parts on Azure Personalizer. The &lt;a href="https://www.arafattehsin.com/reinforcement-learning-in-apps-bots-websites-with-azure-personalizer-part-1/"&gt;first one&lt;/a&gt; talks about building your own library. The &lt;a href="https://www.arafattehsin.com/reinforcement-learning-in-apps-bots-websites-with-azure-personalizer-part-2/"&gt;second one&lt;/a&gt; talks about building a simulator for your learning loop, which helps you train your model ahead of a production deployment. This is going to be my last post on this topic. In this post, we're going to see how we can use Azure Personalizer in a chat bot.&lt;/p&gt;

&lt;h3&gt;What do we need?&lt;/h3&gt;

&lt;p&gt;As we're going to use Azure Personalizer in a chat bot, we have to choose the right platform for it. Whilst there are many options in the market, I decided to go with the Microsoft Power Platform. There were two reasons to choose this platform over others.&lt;/p&gt;

&lt;p&gt;One is that Power Platform allows you to easily create a chat bot using Power Virtual Agents (PVA) and integrate the Azure Personalizer APIs using Power Automate. Both without any code, even with the custom connector 🤘. The second reason is the &lt;a href="https://www.arafattehsin.com/power-virtual-agents-get-the-biggest-update-ever/"&gt;recent update&lt;/a&gt; we witnessed to Power Virtual Agents around the unified authoring canvas. I was quite excited about it, so I thought I'd use it in one of our bots.&lt;/p&gt;

&lt;p&gt;It's totally up to you how you get a Power Platform environment. It can be your sandbox, a trial instance or a Developer Plan. I have an M365 Developer Plan and I will be using that. My complete solution (Custom Connector + Power Virtual Agents) is available on my usual &lt;a href="https://github.com/arafattehsin/CognitiveRocket/tree/master/AI-For-Every-Developer/Demos/Personalizer"&gt;GitHub repo&lt;/a&gt; for the &lt;a href="https://www.arafattehsin.com/tag/ai-for-every-developer/" rel="noopener"&gt;AI for Every Developer&lt;/a&gt; series. You can play with it.&lt;/p&gt;

&lt;h2&gt;The YUM Bot&lt;/h2&gt;

&lt;p&gt;The overall idea is to create a Power Virtual Agents bot that suggests the best dessert to customers based upon the &lt;code&gt;occasion&lt;/code&gt; and &lt;code&gt;variety&lt;/code&gt;. This best action is delivered by the Personalizer API via Power Automate and displayed as an Adaptive Card back to the customer / user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IMK83QuK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/Highlevel-architecture-1024x393.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IMK83QuK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/Highlevel-architecture-1024x393.png" alt="High Level Architecture for Personalised Chatbot" width="880" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Custom Connector&lt;/h2&gt;

&lt;p&gt;In our previous posts, we worked on the library as well as the simulator based on .NET, and saw how easily you can use the Personalizer SDK to train your learning loop. This time, we're going to use a Personalizer endpoint to interact with the service. Initially, I looked at Power Automate to find out if there was any connector available for Azure Personalizer. Unfortunately, I did not find one, so I decided to create a custom connector with the name &lt;strong&gt;Azure Personalizer (Community)&lt;/strong&gt;. If that's something new to you, you may want to check out some of our awesome Microsoft Business Applications MVPs on YouTube. They've explained this topic quite well.&lt;/p&gt;

&lt;p&gt;The Azure Personalizer (Community) connector has two actions, &lt;code&gt;Rank&lt;/code&gt; and &lt;code&gt;Reward&lt;/code&gt;. It needs the API key of your Personalizer resource in Azure to create a valid connection.&lt;/p&gt;

&lt;h3&gt;Rank Action&lt;/h3&gt;

&lt;p&gt;This action expects the same inputs as our .NET library receives, i.e. &lt;code&gt;Context Features&lt;/code&gt;, &lt;code&gt;Actions&lt;/code&gt;, &lt;code&gt;EventId&lt;/code&gt; and more. I've kept it as generalised as possible; that's why all you have to do is switch the layout of the input to an entire array and paste your JSON.&lt;/p&gt;
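&lt;p&gt;If you'd rather not hand-craft that JSON, here is a small Python sketch of the body the Rank action expects, mirroring the Personalizer Rank API. The dessert ids and their features are made-up examples for this bot, not part of the connector:&lt;/p&gt;

```python
import json
import uuid

# Sketch of a Rank request body; the action ids and features are
# illustrative values for the YUM Bot scenario.
def build_rank_request(occasion, variety):
    return {
        "contextFeatures": [{"occasion": occasion}, {"variety": variety}],
        "actions": [
            {"id": "cheesecake",
             "features": [{"taste": "sweet", "texture": "creamy"}]},
            {"id": "brownie",
             "features": [{"taste": "chocolatey", "texture": "fudgy"}]},
        ],
        "excludedActions": [],
        # A unique eventId ties the later Reward call back to this Rank call.
        "eventId": str(uuid.uuid4()),
    }

print(json.dumps(build_rank_request("birthday", "chocolate"), indent=2))
```

&lt;p&gt;The printed JSON is exactly the kind of payload you would paste into the action's array input.&lt;/p&gt;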

&lt;blockquote&gt;Our &lt;a href="https://www.arafattehsin.com/reinforcement-learning-in-apps-bots-websites-with-azure-personalizer-part-1/"&gt;first post&lt;/a&gt; about Azure Personalizer has all the details about Rank and Reward API. Both of the actions are dependent upon the same API.&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RgicLvKJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/rank-action.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RgicLvKJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/rank-action.png" alt="Rank Action Request" width="662" height="708"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;It sends back the &lt;code&gt;eventId&lt;/code&gt; and the best action in the form of &lt;code&gt;rewardActionId&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MPQBJkz3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/rank-action-response.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MPQBJkz3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/rank-action-response.png" alt="Rank Action Response" width="639" height="280"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;Reward Action&lt;/h3&gt;

&lt;p&gt;This action expects the &lt;code&gt;eventId&lt;/code&gt; and a &lt;code&gt;reward&lt;/code&gt; value. The reward can be anything between 0 and 1. For simplicity, we will send 0 (bad) or 1 (good).&lt;/p&gt;
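&lt;p&gt;The Reward call is even simpler. Here's a hedged Python sketch of its shape; the URL path mirrors the Personalizer Reward API, and the event id below is just a placeholder:&lt;/p&gt;

```python
# Sketch of the Reward call: the URL path embeds the eventId returned by
# Rank, and the body carries a score between 0 and 1 (binary here).
def build_reward_request(event_id, liked):
    return {
        "path": f"/personalizer/v1.0/events/{event_id}/reward",
        "body": {"value": 1.0 if liked else 0.0},
    }

req = build_reward_request("abc-123", liked=True)
print(req["path"], req["body"])
```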

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X5dgD3nU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/reward-action.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X5dgD3nU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/reward-action.png" alt="Reward Action Request" width="691" height="358"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;blockquote&gt;As the independent publisher certification process requires considerable time and effort for the submission, it's not my focus for now, but I will surely revisit it later.&lt;/blockquote&gt;

&lt;h2&gt;Chat bot&lt;/h2&gt;

&lt;p&gt;Now, let's talk about Power Virtual Agents. We are going to use the public preview of the latest unified authoring canvas. It has a &lt;a href="https://www.arafattehsin.com/power-virtual-agents-get-the-biggest-update-ever/"&gt;bunch of new features&lt;/a&gt; and I've already covered &lt;a href="https://www.arafattehsin.com/give-your-power-virtual-agents-a-voice-with-speech-and-telephony/"&gt;some of them&lt;/a&gt; earlier. The idea here is to enquire about the two pieces of information required by the Rank action, &lt;code&gt;Occasion&lt;/code&gt; and &lt;code&gt;Variety&lt;/code&gt;. To keep this use case simple, we'll create a basic bot and switch off all the custom topics except Greeting. It is not mandatory, but just saying!&lt;/p&gt;

&lt;h3&gt;Entities&lt;/h3&gt;

&lt;p&gt;After this, let's create two &lt;code&gt;ClosedList&lt;/code&gt; entities. Start with &lt;code&gt;Occasion&lt;/code&gt; and add synonyms for smart matching.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rXRnJlTz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/occasion.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rXRnJlTz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/occasion.png" alt="Occasion Entity in Power Virtual Agents" width="854" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do the same for Variety as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JmH07SmX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/variety.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JmH07SmX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/variety.png" alt="Variety Entity for Power Virtual Agents" width="871" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Topic (Book Order)&lt;/h3&gt;

&lt;p&gt;Just add at least 5 phrases to train the &lt;code&gt;BookOrder&lt;/code&gt; topic. The more phrases you add the better, but 5 is also OK. As an example, you can add &lt;em&gt;I want to place an order&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Before asking a question about the &lt;code&gt;Occasion&lt;/code&gt; and &lt;code&gt;Variety&lt;/code&gt;, let's add a variable and set its value as a table of items. You can do this by using the latest capabilities of PowerFx. This will be used later in our bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lBoDQMmE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/PowerFx-variables-PVA.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lBoDQMmE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/PowerFx-variables-PVA.gif" alt="Usage of variables with PowerFx in Power Virtual Agents" width="880" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Later on, just add two questions with multiple choice options to ask about the &lt;code&gt;Occasion&lt;/code&gt; and &lt;code&gt;Variety&lt;/code&gt;, and store each answer in a variable. Once we have these two values, we call the custom connector we created above, specifically the Rank action.&lt;/p&gt;

&lt;p&gt;After its processing, we get a response containing &lt;code&gt;eventId&lt;/code&gt; and &lt;code&gt;rewardActionId&lt;/code&gt;. We care mostly about &lt;code&gt;rewardActionId&lt;/code&gt;, as this is the action (the choice of a dessert) suggested by Personalizer. So we create another variable and apply &lt;code&gt;LookUp&lt;/code&gt; on the table of items we created above to fetch the details of the item matching that specific &lt;code&gt;rewardActionId&lt;/code&gt;.&lt;/p&gt;
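&lt;p&gt;To make the &lt;code&gt;LookUp&lt;/code&gt; idea concrete, here is a minimal Python sketch. In the bot this happens in Power Fx, not Python, and the item IDs, names and fields below are made up for illustration:&lt;/p&gt;

```python
# Hypothetical items table, mirroring the Power Fx table of desserts
# (IDs, names and prices are illustrative only).
items = [
    {"id": "vanilla_cupcakes", "name": "Vanilla Cupcakes", "price": 12.5},
    {"id": "nutella_cupcakes", "name": "Nutella Cupcakes", "price": 14.0},
    {"id": "tresleches_cake", "name": "Tres Leches Cake", "price": 30.0},
]

def lookup(table, reward_action_id):
    """Return the first row whose id matches rewardActionId, like Power Fx LookUp."""
    return next((row for row in table if row["id"] == reward_action_id), None)

suggested = lookup(items, "nutella_cupcakes")
print(suggested["name"])  # Nutella Cupcakes
```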

&lt;p&gt;The reason we do this &lt;code&gt;LookUp&lt;/code&gt; and store the result in a separate variable is that we want to populate the Adaptive Card dynamically right within the Power Virtual Agents canvas, &lt;em&gt;&lt;strong&gt;not in Power Automate&lt;/strong&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VuwOo4xq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/AdaptiveCards-PowerVirtualAgents.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VuwOo4xq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/AdaptiveCards-PowerVirtualAgents.gif" alt="Adaptive Cards in Power Virtual Agents" width="880" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The customer or user can then choose yes or no, depending upon their preference. This submission is also handled by Adaptive Cards &lt;em&gt;(I know it's not common yet).&lt;/em&gt; However, for now, the data property inside Action.Submit can only contain a string to work properly. Otherwise, it won't send an event back to PVA and you may wonder what you did wrong. 🤷‍♂️&lt;/p&gt;
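&lt;p&gt;To illustrate the string-only constraint on Action.Submit, here is a rough Python sketch that builds the action with its payload pre-serialised. The field names are hypothetical, not the exact ones from my card:&lt;/p&gt;

```python
import json

# Sketch of the Action.Submit button for the confirmation card.
# The nested payload is serialised to a string first because, for now,
# PVA only sends the event back when `data` is a plain string.
payload = {"choice": "yes", "item": "nutella_cupcakes"}  # illustrative fields

submit_action = {
    "type": "Action.Submit",
    "title": "Yes",
    "data": json.dumps(payload),  # a string, not a nested object
}

print(type(submit_action["data"]).__name__)  # str
```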

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TyJrQL9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/Call-Reward-Action-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TyJrQL9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/Call-Reward-Action-1.png" alt="Call Reward Action" width="828" height="760"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So that's a wrap on the bot. If you run it now, it should behave similarly to the demo below. A bonus takeaway for you is proactive slot filling in Power Virtual Agents. This means that if your question already contains an entity value (&lt;code&gt;birthday&lt;/code&gt;, &lt;code&gt;meal&lt;/code&gt; or &lt;code&gt;cake&lt;/code&gt;), the bot automatically skips that entity's question to speed up the process. Super cool, isn't it?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uk8u_PPA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/PVA-demo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uk8u_PPA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/12/PVA-demo.gif" alt="Power Virtual Agents Demo" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, personalization in customer service has many potential applications. It can be used to improve customer experiences, increase customer satisfaction, and improve the efficiency of customer service operations. By using customer data and technologies &lt;span&gt;like Azure Personalizer&lt;/span&gt;, you can provide more personalised and relevant interactions for your customers, which can help to build stronger and more lasting relationships.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;P.S: The three introductory paragraphs and the conclusion of this blog post, except for the highlighted text, were purely written by &lt;a href="https://chat.openai.com/"&gt;ChatGPT&lt;/a&gt;. I could have made it better or added more of my thoughts, but word on the street compelled me to keep it the way it is. &lt;/em&gt;🙂&lt;/p&gt;

</description>
      <category>azure</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Create personalized experiences for your apps, bots and websites with Azure Personalizer - Part 2</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Sun, 11 Dec 2022 08:27:25 +0000</pubDate>
      <link>https://dev.to/arafattehsin/create-personalized-experiences-for-your-apps-bots-and-websites-with-azure-personalizer-part-2-31dn</link>
      <guid>https://dev.to/arafattehsin/create-personalized-experiences-for-your-apps-bots-and-websites-with-azure-personalizer-part-2-31dn</guid>
      <description>&lt;p&gt;In today's digital age, it is not easy for any business to work without personalization. Especially for those in the retail or consumer market, personalization has now become the fundamental aspect of their branding. It improves the customer’s experience and helps drive sales. Personalization increases the customer loyalty when you suggest them something of their interests. Let's start building the model for the similar experience for our customers.&lt;/p&gt;

&lt;p&gt;In our &lt;a href="https://www.arafattehsin.com/reinforcement-learning-in-apps-bots-websites-with-azure-personalizer-part-1/" rel="noopener noreferrer"&gt;last post&lt;/a&gt;, we talked about the basic concepts of Azure Personalizer and how you can build your own library (or use mine). In this post, we will try to build a simulator which is going to train our Personalizer model so that we can then use it in our actual apps, bots or the websites.&lt;/p&gt;

&lt;h2&gt;What do I need?&lt;/h2&gt;

&lt;p&gt;If you're not using the existing library I have shared, or you just want to build your own, then you'll need the .NET SDK for Personalizer.&lt;/p&gt;

&lt;blockquote&gt;In my &lt;a href="https://github.com/arafattehsin/CognitiveRocket/tree/master/AI-For-Every-Developer/Demos/Personalizer" rel="noopener noreferrer"&gt;Personalizer Library&lt;/a&gt;, I am using the preview version of the .NET SDK&lt;/blockquote&gt;

&lt;h2&gt;Simulator&lt;/h2&gt;

&lt;p&gt;Let's say you own an app that lets people buy paintings of their choice. The app contains thousands of paintings of all sorts, ranging from pets to landscapes to streets and so on. You integrate Personalizer into this app so that your customers start seeing the most relevant paintings, for example by fetching the right category of paintings. Every time a painting is shown to the user, Rank and Reward calls are made to Personalizer so it can learn and improve.&lt;/p&gt;

&lt;p&gt;However, the process discussed above is quite tedious. It can take weeks, if not months, before Personalizer starts suggesting the right action. Therefore, we need a way to train Personalizer faster, and a constructive way to achieve this is by creating a simulator.&lt;/p&gt;

&lt;blockquote&gt;Just to highlight: the actual actions are already defined in the PersonalizerLibrary code.&lt;/blockquote&gt;

&lt;h3&gt;Personalized Desserts&lt;/h3&gt;

&lt;p&gt;Let's understand the basic goal or objective we're trying to achieve in this simulator. Our business problem revolves around personalized desserts. For example, if someone is going to a &lt;code&gt;birthday&lt;/code&gt; party and wants to take &lt;code&gt;cupcakes&lt;/code&gt; for their friends, then it'd be &lt;code&gt;vanilla cupcakes&lt;/code&gt;. However, if someone is going just for a &lt;code&gt;catch-up&lt;/code&gt; or a &lt;code&gt;meal&lt;/code&gt; and wants to take &lt;code&gt;cupcakes&lt;/code&gt; with them, then it'd be &lt;code&gt;nutella cupcakes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This means that based upon the different occasions and varieties, a specific dessert will be suggested as the top action. Below is a simple depiction of what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F10%2FDessert-Best-Action.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F10%2FDessert-Best-Action.gif" alt="Dessert Best Action by Personalizer" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You could argue that this example is too simple and doesn't have many variations in the suggestions. I agree; the reason is to keep it simple so that people trying this for the first time can follow along easily.&lt;/p&gt;

&lt;p&gt;The simulator randomly picks context features and sends them to the Rank API. Once it receives a response, our code determines the expected action for that particular occasion and variety. If the expected action is the same as the action returned by the Rank API, then a reward of 1 is sent to the Reward API along with the event ID.&lt;/p&gt;

&lt;blockquote&gt;In our previous post, we have discussed briefly that your reward can be anything between 0 and 1.&lt;/blockquote&gt;

&lt;h3&gt;UserSimulator.cs&lt;/h3&gt;

&lt;p&gt;In this class, we will just set up the possible combinations defined above. These will be stored in a dictionary with keys in the format &lt;code&gt;occasion_variety&lt;/code&gt; and the expected &lt;code&gt;action ID&lt;/code&gt; as the value. This is what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F10%2Fsimulated-responses-personalizer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F10%2Fsimulated-responses-personalizer.png" alt="Simulated Responses for Personalizer" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also contains the code to get the response for the simulated action. Instead of explaining the basic code line by line, here's the whole class:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h3&gt;Simulator.cs&lt;/h3&gt;

&lt;p&gt;In this class, we're going to set up our &lt;code&gt;occasion&lt;/code&gt; and &lt;code&gt;variety&lt;/code&gt; values so that we can simulate the context randomly. Once we get a randomized combination of the context, we pass it to the Rank API. The response of the Rank API is then stored in a custom class called &lt;code&gt;PersonalizerResponse&lt;/code&gt;, which is part of &lt;code&gt;PersonalizerLibrary&lt;/code&gt;. If the returned response matches the expected response for the simulated context, then a reward of &lt;code&gt;1&lt;/code&gt; is sent back to the Reward API.&lt;/p&gt;

&lt;p&gt;Note that the &lt;code&gt;SimulateEvents&lt;/code&gt; function is called from outside (such as Program.cs) so that it can run N times to train the model.&lt;/p&gt;
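&lt;p&gt;As a rough Python sketch of the same training loop, with a stubbed Rank call standing in for the real Personalizer client (the context values and action IDs are illustrative, and no network calls are made):&lt;/p&gt;

```python
import random

# Stand-ins for the real Rank/Reward client calls (network omitted).
def rank_stub(occasion, variety):
    # The real Rank API returns a rewardActionId chosen by the model.
    return random.choice(["vanilla_cupcakes", "nutella_cupcakes", "tresleches_cake"])

def reward_stub(event_id, score):
    pass  # the real call posts the score back to the Reward API for this event

# Expected action per simulated context, keyed as "occasion_variety".
expected = {"birthday_cupcakes": "vanilla_cupcakes",
            "catchup_cupcakes": "nutella_cupcakes"}

def simulate_events(n, seed=42):
    """Run n simulated rank/reward cycles; return how many earned a reward of 1."""
    random.seed(seed)
    rewards = 0
    for event_id in range(n):
        occasion = random.choice(["birthday", "catchup"])
        variety = "cupcakes"
        ranked = rank_stub(occasion, variety)
        # Reward of 1 only when the ranked action matches the expected one.
        score = 1 if expected[f"{occasion}_{variety}"] == ranked else 0
        reward_stub(event_id, score)
        rewards += score
    return rewards

print(simulate_events(1000))
```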

&lt;blockquote&gt;In our previous post, we set our Model Update Frequency in Azure to 5 minutes&lt;/blockquote&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Once you start running the simulator, you may see that it returns random responses. This goes back to the explore vs. exploit topic I discussed in my previous post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F11%2Fstarting-personalizer-training-01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F11%2Fstarting-personalizer-training-01.png" alt="Simulated Responses for Personalizer" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the start, you may experience very random responses, but you will notice your model improving later on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F11%2Fend-personalizer-training-01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2022%2F11%2Fend-personalizer-training-01.png" alt="Simulated Responses for Personalizer" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I noticed that whenever you run around 500-1000 requests, you may need to wait a little longer (not just the 5 minutes defined above) to see the improvement in the trained model. You will also see an increase in the reward count at the later stages. So, that's one way to create a simulator to test your application's behaviour. This is clearly not something to push to production and train on that data later, as it will produce a bunch of false results all the time, unless you're using Apprentice mode. &lt;em&gt;That's a topic for another day &lt;/em&gt;🙂&lt;/p&gt;

&lt;p&gt;Personalised experiences add more value than bolting half-baked features onto your solutions. In our next post, we'll be looking at the applications of Personalizer. Primarily, our focus will be on integrating Personalizer with a web app, a mobile app or a chatbot.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>womenintech</category>
    </item>
    <item>
      <title>Create personalized experiences for your apps, bots and websites with Azure Personalizer - Part 1</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Mon, 31 Oct 2022 13:00:00 +0000</pubDate>
      <link>https://dev.to/arafattehsin/create-personalized-experiences-for-your-apps-bots-and-websites-with-azure-personalizer-part-1-2nlm</link>
      <guid>https://dev.to/arafattehsin/create-personalized-experiences-for-your-apps-bots-and-websites-with-azure-personalizer-part-1-2nlm</guid>
      <description>&lt;p&gt;In the last couple of years, Artificial Intelligence (AI) has shown tremendous growth opportunities. It is rapidly improving and changing the way we interact, shop and work. AI has become so ubiquitous in our daily lives that it has eliminated a lot of laborious tasks. Many organisations ranging from large enterprises to small businesses &lt;span&gt;are using AI to create productive and smart solutions. These solutions do not just boost the human creativity but also optimise the overall costs. However, the results of these AI based solutions are tailored for masses rather than a specific set of people. They clearly lack the relevance or personalisation aspect. As a result of that, in this post (and the next ones), we'll learn how you can easily create personalised experiences for your web or mobile apps, bots and much more just through Azure.&lt;/span&gt;&lt;/p&gt;

&lt;h2&gt;Why do we need this?&lt;/h2&gt;

&lt;p&gt;In the machine learning (or, broadly, AI solutions) world, the baseline approach has always been supervised learning. It has had great success in many use-cases, but it is not super useful everywhere.&lt;/p&gt;

&lt;p&gt;Let's take yourself as an example. You use a shopping app to buy some clothes. The app shows lists of products which are usually featured or on sale. Now imagine that the list you see doesn't show any item of your interest: it shows different colours, sizes or styles. You may click on something, or maybe you don't. So almost every time, you have to search or filter your favourites out of the list, and that is not always an easy process. Supervised learning fails here because it's too much work to identify what's right for a specific set of contexts (or people). You may also want to avoid the self-fulfilling prophecy problem. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0Lx8BTKr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/08/supervised-learning.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0Lx8BTKr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/08/supervised-learning.gif" alt="Supervised Learning" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how would you feel if, instead, you saw a list of clothing based on your own interests? I am sure you'd want that. Everyone wants that. So, the question is: can we really optimise for the best outcome, with real-time learning, amongst a given set of choices? The answer is yes, with &lt;em&gt;&lt;strong&gt;Reinforcement Learning&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;Reinforcement Learning is an approach to machine learning that learns behaviors by getting feedback from its use.&lt;/blockquote&gt;

&lt;p&gt;Personalisation is the hallmark of a customer-centric focus. It makes customers feel valued and helps businesses provide a CX that is tailored to their specific needs and wants.&lt;/p&gt;

&lt;p&gt;Based upon some of the &lt;a href="https://us.epsilon.com/power-of-me?hsCtaTracking=0aefe51b-e6f0-4ce5-954b-ecafd7fb9087%7Cf845de6b-3c40-4eab-b934-a78cf8cd8a49"&gt;latest trends&lt;/a&gt; and our example above, 80% of customers are more likely to buy from businesses that provide a personalised experience. In addition, 72% of customers are more interested in engaging with &lt;a href="https://www.wunderkind.co/blog/article/smarterhq-wunderkind-audiences/"&gt;personalised messaging&lt;/a&gt; from bots. Lastly, just to reiterate its importance, 50% of Australians say they would switch to a competitor after one bad experience, which means it is critical for businesses to start thinking about this.&lt;/p&gt;

&lt;h2&gt;Azure Personalizer&lt;/h2&gt;

&lt;p&gt;Whilst there can be many ways, we, as a matter of our interest, will use &lt;a href="https://azure.microsoft.com/en-us/products/cognitive-services/personalizer/#overview"&gt;Azure Personalizer&lt;/a&gt; to build personalised experiences for customers. Personalizer uses reinforcement learning to make smart decisions for your applications. We will also see how you can boost conversion and engagement as well as add real-time relevance to your product recommendations. &lt;span&gt;Unlike the usual recommendation engines, which offer a few options from a large dataset, Personalizer gives the single best outcome for the user every time. It gives customers relevant experiences that improve over time based upon their behaviour, improves click-through rates, and keeps up with changing trends. &lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Personalizer can get you the best outcome for a variety of business use-cases. Some of them are:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;News article based upon the interests of the reader&lt;/li&gt;
    &lt;li&gt;Food order on the basis of occasions and dietary requirements&lt;/li&gt;
    &lt;li&gt;Perfect gift for the Mother's Day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Personalizer service is not as simple as Vision or Language; it requires a good understanding of the concepts. I will try my best to ensure clarity, both in the writing and in covering the technical bits with the .NET SDK. &lt;em&gt;Please feel free to continue reading even if you're not going to use Personalizer with C# .NET - it is &lt;span&gt;&lt;strong&gt;definitely going to help you&lt;/strong&gt;&lt;/span&gt;, promise! &lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;In the rest of this post, I've taken a personalized dessert example, just to explain the concepts in an easier way&lt;/blockquote&gt;

&lt;h2&gt;Learning Loop&lt;/h2&gt;

&lt;p&gt;Each Personalizer resource you create from the Azure portal is considered one learning loop, which serves a single personalization use-case. If your app has more than one use-case, you may want to create more than one loop. In our case, we will create just one. It's as simple as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cRhnUBSM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/09/azure-setup-personalizer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cRhnUBSM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/09/azure-setup-personalizer.png" alt="Creating Azure Personalizer resource" width="577" height="604"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;Rank&lt;/h2&gt;

&lt;p&gt;Personalizer learns over time what (&lt;em&gt;actions&lt;/em&gt;) you should show your users in a given situation (or a &lt;em&gt;context&lt;/em&gt;). Rank is basically the answer: the single best decision Personalizer determines and returns. It is one of the two primary APIs of Personalizer. Every time your app has to make a decision, the Rank API is called. This API expects &lt;strong&gt;actions&lt;/strong&gt; (&lt;em&gt;with an ID&lt;/em&gt;),&lt;strong&gt; features&lt;/strong&gt; associated with&lt;strong&gt; each action&lt;/strong&gt; and &lt;strong&gt;features&lt;/strong&gt; that describe the&lt;strong&gt; current context. &lt;/strong&gt;Each API call is known as an event and is distinguished by an &lt;code&gt;event ID&lt;/code&gt; (&lt;em&gt;you can think of it as a &lt;code&gt;guid&lt;/code&gt;)&lt;/em&gt;. The Personalizer service then returns the ID of the best action as determined by its model. &lt;em&gt;Now, I have introduced quite a few new terminologies here; let's learn about each of them. &lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;string eventId = Guid.NewGuid().ToString();
var request = new PersonalizerRankOptions(actionList, currentContextFeatures, null, eventId);
PersonalizerRankResult result = client.Rank(request);&lt;/pre&gt;

&lt;h3&gt;Actions&lt;/h3&gt;

&lt;p&gt;Actions are the unique set of items which are meant to be chosen or selected. They can be anything from products to promotions to news articles to ad placements on a website and much more. Actions have a list of features which describe them; basically, metadata about the content. For example, if the action is a dessert, then its features might include sweetness level, cuisine and category.&lt;/p&gt;

&lt;pre&gt;...
new PersonalizerRankableAction(
    id: "banoffee_pie",
    features:
    new List&amp;lt;object&amp;gt;() { new { variety = "dessert", sweetlevel = "medium" }, new { nutritionalLevel = 5 } }
),
new PersonalizerRankableAction(
    id: "mango_delight",
    features:
    new List&amp;lt;object&amp;gt;() { new { variety = "shake", sweetlevel = "medium" }, new { nutritionLevel = 2 }, new { drink = true } }
),
new PersonalizerRankableAction(
    id: "tresleches_cake",
    features:
    new List&amp;lt;object&amp;gt;() { new { variety = "cake", sweetlevel = "high" }, new { nutritionLevel = 3, cuisine = "mexican" } }
),
...&lt;/pre&gt;

&lt;p&gt;There's a limit of 50 actions, but no limit on the number of features one action can have. It is advised that features be meaningful and effective for the specific action. It is also not necessary for all actions to have the same set of features.&lt;/p&gt;

&lt;h3&gt;Context&lt;/h3&gt;

&lt;p&gt;Context or Context Features are quite interchangeable (hence confusing). It is basically the metadata about the context in which the content (&lt;em&gt;e.g., a personalised product&lt;/em&gt;) is presented. Context can literally be anything such as user, demographic information, weather, time, historical data, device and so on.&lt;/p&gt;

&lt;p&gt;The recommendation is to use UTF-8 based names for context features that start with different letters, such as &lt;code&gt;variety&lt;/code&gt;, &lt;code&gt;occasion&lt;/code&gt; and &lt;code&gt;device&lt;/code&gt;. Having similar names (or namespaces) for context features can result in collisions in the indexes used for machine learning.&lt;/p&gt;

&lt;p&gt;The Rank API expects four parameters:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;List of all the Actions&lt;/li&gt;
    &lt;li&gt;Current context features&lt;/li&gt;
    &lt;li&gt;Features to exclude&lt;/li&gt;
    &lt;li&gt;Event ID (&lt;code&gt;guid&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;{
    "contextFeatures": [
        {
          "occasion": "birthday",
          "variety": "cake",
          "weather": "sunny"
        }
        ...
    ]
}&lt;/pre&gt;

&lt;blockquote&gt;For both actions and context, it is always recommended to have dense features rather than sparse ones. For example, many bakers' confectioneries can be classified as cakes and cupcakes; that's considered a &lt;em&gt;dense feature.&lt;/em&gt; Contrary to this, the same items could have a &lt;code&gt;Name&lt;/code&gt; attribute which is unique for every item. This is considered a &lt;em&gt;sparse feature.&lt;/em&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the request is sent to the Rank API, the API decides to either &lt;strong&gt;exploit &lt;/strong&gt;or &lt;strong&gt;explore &lt;/strong&gt;to return the top action (&lt;code&gt;RewardActionId&lt;/code&gt;). Now what does that mean? Let's learn about both!&lt;/p&gt;

&lt;h3&gt;Exploit&lt;/h3&gt;

&lt;p&gt;Exploitation is when the Personalizer service chooses the action with the highest probability in the rank. This happens &lt;strong&gt;(100 - exploration) %&lt;/strong&gt; of the time.&lt;/p&gt;

&lt;h3&gt;Explore&lt;/h3&gt;

&lt;p&gt;Exploration is when the Personalizer service chooses an action other than the one with the highest probability in the rank. The percentage of time spent exploring is based upon the configuration in Azure. Currently, Personalizer uses an &lt;strong&gt;epsilon greedy&lt;/strong&gt; algorithm to explore. &lt;em&gt;This is different from the behaviour of some A/B frameworks that lock a treatment to specific user IDs. &lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DILuzJ8b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/Exploration.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DILuzJ8b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/Exploration.png" alt="Exploration Settings in Azure Personalizer" width="828" height="614"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;It is always recommended to set the exploration percentage between 10% and 20%, and:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;Not to set it to 0%, as this forgoes the benefits of Personalizer and leads to model stagnation, drift and, eventually, lower performance.&lt;/li&gt;
    &lt;li&gt;Not to set it to 100%, as this negates the benefits of learning from user behaviour; the service will always return a random response rather than the top action.&lt;/li&gt;
&lt;/ol&gt;
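&lt;p&gt;For intuition, here is a small Python sketch of an epsilon-greedy choice. This is an illustration of the general algorithm only, not the service's internal implementation, and the scores are made up:&lt;/p&gt;

```python
import random

def choose_action(scored_actions, epsilon=0.2, rng=random):
    """Epsilon-greedy: exploit the top-scored action (1 - epsilon) of the time,
    otherwise explore a random alternative."""
    best = max(scored_actions, key=scored_actions.get)
    if rng.random() < epsilon:
        others = [a for a in scored_actions if a != best] or [best]
        return rng.choice(others)  # explore
    return best  # exploit

# Illustrative model scores for three dessert actions.
scores = {"vanilla_cupcakes": 0.7, "nutella_cupcakes": 0.2, "tresleches_cake": 0.1}
print(choose_action(scores, epsilon=0.0))  # exploration off: vanilla_cupcakes
```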

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tgfxsv5n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/personalizer-explore.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tgfxsv5n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/personalizer-explore.gif" alt="Personalizer with Exploit and Explore" width="511" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to understand the top action item (&lt;code&gt;RewardActionId&lt;/code&gt;) returned by the Rank API, it is important for us to learn about the Rewards.&lt;/p&gt;

&lt;h2&gt;Reward&lt;/h2&gt;

&lt;p&gt;Reward is a score that your business logic determines as a measure of success, to validate whether the decision taken by the Rank API (or, broadly, Personalizer) was good (&lt;em&gt;leaning towards 1&lt;/em&gt;) or bad (&lt;em&gt;leaning towards 0&lt;/em&gt;). For example, if the Rank API suggests a news article or a promotional offer and the user clicks on it, then the score can be as high as 1 because the user chose it.&lt;/p&gt;
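&lt;p&gt;As an illustration, the business logic that maps user behaviour to a reward might look like the sketch below. The tiers here are made up; your own logic defines what counts as success:&lt;/p&gt;

```python
def score_interaction(clicked, purchased=False):
    """Hypothetical business logic mapping user behaviour to a reward in [0, 1]."""
    if purchased:
        return 1.0   # strongest signal: the suggestion led to a sale
    if clicked:
        return 0.5   # weaker signal: the user showed interest
    return 0.0       # no engagement with the suggested action

print(score_interaction(clicked=True, purchased=True))  # 1.0
```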

&lt;p&gt;Reward is the second of the two primary Personalizer APIs. The Reward API expects two parameters:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;Event ID (&lt;em&gt;the same &lt;code&gt;guid&lt;/code&gt; you sent to the Rank API&lt;/em&gt;). &lt;em&gt;This is an optional parameter; if it's not sent, Personalizer will automatically generate one.&lt;/em&gt;
&lt;/li&gt;
    &lt;li&gt;Reward (between 0 and 1, depending upon the business logic)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Reward API needs to be called within the timeframe specified in the configuration screenshot above (&lt;strong&gt;Reward Wait Time&lt;/strong&gt;) after the Rank API call is made for a given event. If the Reward API is not called in time, then the default reward defined in Azure is allocated instead. This data is then used to train the model (according to the learning policy, shown below) by recording the features and reward scores of each Rank call.&lt;/p&gt;

&lt;pre&gt;public void SubmitReward(string eventId, float reward)
{
    client.Reward(eventId, reward);
}&lt;/pre&gt;

&lt;h3&gt;Model&lt;/h3&gt;

&lt;p&gt;Every learning loop (defined above) has a model that learns about user behaviour. It gets training data from the combination of the results you send to the Rank and Reward APIs. This training happens as per the model update frequency defined in the configuration. In my case, I was training it using the simulator I built to test my model, so I set it to update every 5 minutes. Ideally, it can be hours or even days, based upon your needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cxxl36ic--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/model-update-frequency.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cxxl36ic--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/model-update-frequency.png" alt="Model Update Frequency" width="435" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Learning Policy&lt;/h3&gt;

&lt;p&gt;Personalizer trains its model on the basis of the learning behavior defined in the configuration, which determines how the machine learning works. There are two basic modes, and you can see that I've chosen the first one, as my use-case of suggesting the best dessert was fine with it.&lt;/p&gt;

&lt;blockquote&gt;It takes anywhere from 100 to 10,000 calls for Personalizer to start behaving well, so you need to be mindful of your tests&lt;/blockquote&gt;

&lt;h4&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0RYrBkhy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2022/10/Learning-Policy.png" alt="Learning Policy for Azure Personalizer" width="694" height="689"&gt;&lt;/h4&gt;


&lt;h4&gt;&lt;strong&gt;Online Mode&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;This is the default mode (or behavior) of Personalizer, where it uses machine learning to build the model that predicts the top action (or content) for your use-case. The model improves over time as Rank and Reward calls are made.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Apprentice Mode&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Just like in real life, where an apprentice learns from experience, it's the same in the case of Personalizer. This differs from Online mode, which tries to predict values right from the start and can hit the application with the cold-start problem. Rank calls in Apprentice mode always return the application's default action, i.e., the action your app would have taken without using Personalizer. Once Personalizer is trained well, the learning behavior can be switched to Online mode for better results.&lt;/p&gt;

&lt;blockquote&gt;If you want to create your own library, then you may need to download the preview SDK for Personalizer from NuGet package manager&lt;/blockquote&gt;

&lt;p&gt;I have now created a &lt;a href="https://github.com/arafattehsin/CognitiveRocket/tree/master/AI-For-Every-Developer/Demos/Personalizer"&gt;complete library&lt;/a&gt; which can be used easily in your projects beyond my dessert use-case. This has been one of the longest posts I have written so far on this topic, but I truly think it is valuable for you to understand the basic concepts before you get your hands on it. Personalizer is a complex service and requires some patience too (as it does not train in seconds or even minutes). In my next post I will try to discuss the simulation aspect of it as well as the application of this service. If you've got any questions, please write them down in the comments section.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>azure</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Object Detection Simplified with AI Builder</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Tue, 08 Feb 2022 12:13:05 +0000</pubDate>
      <link>https://dev.to/arafattehsin/computer-vision-simplified-with-ai-builder-ljj</link>
      <guid>https://dev.to/arafattehsin/computer-vision-simplified-with-ai-builder-ljj</guid>
      <description>&lt;p&gt;A few months back, I posted a &lt;a href="https://www.linkedin.com/posts/arafattehsin_from-custom-machine-learning-model-to-its-activity-6818494208110821376--3aF/"&gt;small video clip&lt;/a&gt; on my LinkedIn to show how I used ML.NET (w/ Azure), WinML &amp;amp; UWP to build a custom real-time object detection application. I received a few messages from folks asking how I actually did that; therefore, I decided to submit my talk and speak at &lt;a href="https://www.arafattehsin.com/speaking/real-time-object-detection-with-your-custom-models/"&gt;NDC Sydney 2021&lt;/a&gt; to showcase how it works.&lt;/p&gt;

&lt;p&gt;After its implementation, one of my former colleagues asked if I could achieve this using Power Platform, i.e., AI Builder (&lt;em&gt;because of its object detection capability&lt;/em&gt;) and Canvas Apps. I responded in the negative because I remembered this limitation from when I wrote about using &lt;a href="https://www.arafattehsin.com/beginners-lets-build-covid-safe-solutions-with-ai-power-platform/"&gt;AI Builder with Camera Stream&lt;/a&gt; in one of my previous posts.&lt;/p&gt;

&lt;p&gt;The limitation was that we could not pass the image (or its base64 encoding) to the &lt;strong&gt;AI Model&lt;/strong&gt; in Canvas Apps, because you had just 4-5 built-in controls which were not super helpful. Even if you had a requirement to pass an image (via camera stream) to your object detection model, you had to send it to Power Automate and process its response there. So whenever I got a chance, I spoke about this limitation with the awesome team at Microsoft, and to my fortune (along with many others), we didn't have to wait long. 🙂&lt;/p&gt;

&lt;p&gt;On November 8, at Microsoft Ignite 2021, the Lobe &amp;amp; AI Builder teams &lt;a href="https://powerapps.microsoft.com/en-us/blog/new-ways-to-add-intelligence-to-your-business-with-ai-builder/"&gt;announced&lt;/a&gt; fantastic features to add intelligence to your Power Apps. Now, not only can you easily add your model as a data source to your Canvas Apps, but you can also use it with Power Fx. This gave me a reason to revisit the problem I couldn't solve earlier because of the discussed limitation.&lt;/p&gt;

&lt;blockquote&gt;The purpose of this post is to show how you can achieve near real-time object detection using AI Builder models with Power Apps. In case you're interested in building the AI Builder Object Detection model, you may want to check out this &lt;a href="https://www.arafattehsin.com/beginners-lets-build-covid-safe-solutions-with-ai-power-platform/"&gt;post&lt;/a&gt; where I have discussed its process step by step.&lt;/blockquote&gt;

&lt;h1&gt;Object detection using Camera Stream&lt;/h1&gt;

&lt;p&gt;The ability to use AI in your apps with Power Fx allows you to access your AI models from any control, be it a Gallery, a Data Table and so on. Therefore, we will try to build (or mimic) the same experience as I did for my .NET app. Not because I couldn't think of any other idea, but because I want to show the limitless capabilities of Power Platform. &lt;em&gt;Yes, I know one works offline while the other does not, but I think you got my point, didn't you?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I already have a template app ready, which I will be using to add the necessary controls to achieve the above. 👆&lt;/p&gt;

&lt;h2&gt;Camera&lt;/h2&gt;

&lt;p&gt;Let's start by adding a &lt;strong&gt;Camera&lt;/strong&gt;: set its &lt;code&gt;StreamRate&lt;/code&gt;, and set its &lt;code&gt;OnStream&lt;/code&gt; property with a variable that is going to capture the image from the stream. The variable we use is &lt;code&gt;captureTrigger&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/wp-content/uploads/2022/01/add-camera-to-canvasapps.webm"&gt;Video Walkthrough&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Timer&lt;/h2&gt;

&lt;p&gt;Add the &lt;strong&gt;Timer&lt;/strong&gt; control to the screen and hide it. Set its &lt;code&gt;Duration&lt;/code&gt; to &lt;code&gt;2000&lt;/code&gt;, and &lt;code&gt;OnTimerEnd&lt;/code&gt; should have &lt;code&gt;Set(captureTrigger, 100);&lt;/code&gt; which sets the trigger's value to 100 every 2 seconds to take a picture from the camera.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/wp-content/uploads/2022/01/timer.webm"&gt;Video Walkthrough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Do not forget to toggle &lt;code&gt;Repeat&lt;/code&gt; and &lt;code&gt;Auto start&lt;/code&gt; on. I missed this in the video.&lt;/p&gt;

&lt;h2&gt;Gallery&lt;/h2&gt;

&lt;p&gt;Now, at the same position (X and Y axes), add the &lt;strong&gt;Vertical Gallery&lt;/strong&gt; and align its &lt;code&gt;height&lt;/code&gt; and &lt;code&gt;width&lt;/code&gt; with the Camera. The reason for this is that the Gallery will be used to draw &lt;code&gt;Bounding Boxes&lt;/code&gt; on top of the Camera control. Change the &lt;code&gt;TemplateSize&lt;/code&gt; to &lt;code&gt;0&lt;/code&gt;, otherwise it is going to stack the items vertically one by one (if more than one object is identified in an image). Delete all the items inside your Gallery, as we're going to add some new controls here.&lt;/p&gt;

&lt;p&gt;Lastly, go to Add data and add your AI Model, then set the &lt;code&gt;Items&lt;/code&gt; property of the Gallery to &lt;code&gt;'Your Model Name'.Predict(YourCameraControl.Stream).results&lt;/code&gt; - that's the beauty of using your AI Model along with Power Fx in any control.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/wp-content/uploads/2022/01/add-gallery-and-data-source.webm"&gt;Video Walkthrough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;code&gt;Container&lt;/code&gt; with &lt;code&gt;Text Label&lt;/code&gt; and &lt;code&gt;Rectangle&lt;/code&gt; inside your gallery control.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Container&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
    &lt;li&gt;Change its &lt;code&gt;X&lt;/code&gt; value to &lt;code&gt;ThisItem.boundingBox.left * YourCameraControl.Width&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Change its &lt;code&gt;Y&lt;/code&gt; value to &lt;code&gt;ThisItem.boundingBox.top * YourCameraControl.Height&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Change its &lt;code&gt;Width&lt;/code&gt; value to &lt;code&gt;ThisItem.boundingBox.width * YourCameraControl.Width&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Change its &lt;code&gt;Height&lt;/code&gt; value to &lt;code&gt;ThisItem.boundingBox.height * YourCameraControl.Height&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
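&lt;p&gt;The four formulas above simply convert the model's normalized (0-1) bounding box into pixel coordinates relative to the Camera control. The same math, sketched in C# with hypothetical types for illustration:&lt;/p&gt;

```csharp
// AI Builder returns bounding boxes normalized to 0..1; scaling each value
// by the camera control's size yields the on-screen position in pixels.
public record BoundingBox(double Left, double Top, double Width, double Height);
public record PixelRect(double X, double Y, double Width, double Height);

public static class BoxMath
{
    public static PixelRect ToPixels(BoundingBox box, double cameraWidth, double cameraHeight) =>
        new PixelRect(
            box.Left * cameraWidth,     // Container.X
            box.Top * cameraHeight,     // Container.Y
            box.Width * cameraWidth,    // Container.Width
            box.Height * cameraHeight); // Container.Height
}
```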

&lt;h4&gt;&lt;strong&gt;Text Label&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
    &lt;li&gt;Choose your preferred &lt;code&gt;Color&lt;/code&gt; for the &lt;code&gt;Border&lt;/code&gt; and set its thickness to &lt;code&gt;1&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Change its &lt;code&gt;PaddingTop&lt;/code&gt; to &lt;code&gt;1&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Change its &lt;code&gt;Height&lt;/code&gt; to something of your own choice, I kept it as &lt;code&gt;28&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Keep the &lt;code&gt;Width&lt;/code&gt; as &lt;code&gt;Parent.Width&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Also, please ensure you set &lt;code&gt;X&lt;/code&gt; and &lt;code&gt;Y&lt;/code&gt; both to &lt;code&gt;0&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Set its &lt;code&gt;Text&lt;/code&gt; property to &lt;code&gt;ThisItem.tagName&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;&lt;strong&gt;Rectangle&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
    &lt;li&gt;Choose your preferred &lt;code&gt;BorderColor&lt;/code&gt; for the &lt;code&gt;Border&lt;/code&gt; and set its thickness to &lt;code&gt;8&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Keep the &lt;code&gt;Height&lt;/code&gt; as &lt;code&gt;Parent.Height&lt;/code&gt;
&lt;/li&gt;
    &lt;li&gt;Keep the &lt;code&gt;Width&lt;/code&gt; as &lt;code&gt;Parent.Width&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: Please do not set the &lt;code&gt;Color&lt;/code&gt; property, just set the color of the &lt;code&gt;Border&lt;/code&gt; instead.&lt;/p&gt;

&lt;blockquote&gt;You may notice a bit of a lag; that's because, unlike .NET (or any other framework which runs offline), this runs online and it takes a second or two to get the response back.&lt;/blockquote&gt;

&lt;h3&gt;Bounding Boxes (Bonus Control)&lt;/h3&gt;

&lt;p&gt;There's a bonus custom control for you to use instead of the Gallery discussed above, and that is &lt;code&gt;Bounding Boxes&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Motivation&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;A few days back, I had a chat with Jake Cohen and Bill Barnes from the AI Builder &amp;amp; Lobe team at Microsoft on the above topic, and we discussed a couple of options to achieve it. During our discussion, Bill went ahead and, instead of the Gallery approach, came up with a smart idea of using an &lt;code&gt;HtmlText&lt;/code&gt; control. He then converted the &lt;code&gt;HtmlText&lt;/code&gt; into a Power Apps component called &lt;strong&gt;Bounding Boxes&lt;/strong&gt;. As the name suggests, this custom component exposes a property called Boxes, which takes an AI Model's bounding boxes as an argument and shows them on the screen. The best part of this control is its weight: it is not as heavy as the Gallery and is simple for anyone to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/wp-content/uploads/2022/01/bounding-box.webm"&gt;Video Walkthrough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The good news is, Bill has just shared his &lt;a href="https://github.com/billba/PowerApps-AI-Components"&gt;repo&lt;/a&gt; with me so you can have this (and several other controls) added to your Canvas Apps easily. Here's how you add the Bounding Boxes to your application. 👆&lt;/p&gt;

&lt;p&gt;Add the component to your app and then set its Boxes property to &lt;code&gt;'Your Model Name'.Predict(StreamCamera.Stream).results&lt;/code&gt; so that, after every 2 seconds, it will show the bounding boxes as they are processed by the model.&lt;/p&gt;

&lt;p&gt;As discussed above, the purpose of this post is to show you the limitless capabilities of AI Builder. It is now up to your business needs where you really want to use this; for example, as the owner of a store, you may like to keep a periodic (~10 minutes) check on people wearing masks within a location, count the number of people allowed in a café, and so on. In case you get stuck with whatever I have shown you here - please feel free to comment below and I'll be happy to assist you.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>powerfuldevs</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Hacktoberfest for .NET Developers (C# Edition)</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Sun, 03 Oct 2021 05:23:11 +0000</pubDate>
      <link>https://dev.to/arafattehsin/hacktoberfest-for-net-developers-c-edition-3bha</link>
      <guid>https://dev.to/arafattehsin/hacktoberfest-for-net-developers-c-edition-3bha</guid>
      <description>&lt;p&gt;Hacktoberfest is celebrated during the month of October, when open-source software enthusiasts, beginners, and the developer community participate by contributing to open-source projects. This is its 8th year, and it has been growing every year since. Thousands of developers, creators and open-source maintainers have been doing a commendable job of educating beginners, guiding fellow developers and learning from others on a day-to-day basis. Open source is truly amazing!&lt;/p&gt;

&lt;p&gt;Although I created the &lt;a href="https://github.com/arafattehsin/CognitiveRocket" rel="noopener noreferrer"&gt;CognitiveRocket&lt;/a&gt; repo for the purpose of sharing my demos, slide decks etc., I still remember submitting my &lt;a href="https://github.com/jamesemann/JamesMann.BotFramework/pull/1" rel="noopener noreferrer"&gt;first ever pull request&lt;/a&gt; on my friend's (&lt;a href="https://twitter.com/jamesemann" rel="noopener noreferrer"&gt;James Mann&lt;/a&gt;) repo, which later became &lt;a href="https://www.arafattehsin.com/bot-builder-community-project/" rel="noopener noreferrer"&gt;one of the reasons&lt;/a&gt; to start the &lt;a href="https://github.com/BotBuilderCommunity" rel="noopener noreferrer"&gt;Bot Builder Community Project&lt;/a&gt; and to enable all the Microsoft Bot Framework enthusiasts and developers to join hands to build fantastic tools for everyone. 🛠&lt;/p&gt;

&lt;p&gt;As I recently registered for Hacktoberfest, I thought to look for projects matching my interests and skillset (&lt;em&gt;as a C# developer&lt;/em&gt;) where I can contribute my time and effort. Hence, I thought to share them with you as well, in case you also want to participate in similar projects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Please note that this list is not exhaustive, so there's a high chance that you may find a better project of your interest than something from this list. Just click on a project name and it will take you directly to the appropriate issues (with the exception of the first one).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F10%2Fhacktoberfest2021.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F10%2Fhacktoberfest2021.png" alt="Hacktoberfest for .NET 2021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3 id="-sentimentanalyzer-https-github-com-arafattehsin-sentimentanalyzer-"&gt;&lt;a href="https://github.com/arafattehsin/SentimentAnalyzer" rel="noopener noreferrer"&gt;SentimentAnalyzer&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/sentimentanalyzer-ondevice-machine-learning/" rel="noopener noreferrer"&gt;SentimentAnalyzer&lt;/a&gt; is an on-device (offline) .NET Standard library to find out what customers think of your brand or topic by analyzing raw text for clues about positive or negative sentiment. I created this library 2 years back using ML.NET and I try to keep it updated enough so that it does not break for anyone who is using it. This library can be used by any of your .NET apps be it your bot, mobile, web or even an IoT app.&lt;/p&gt;

&lt;blockquote&gt;As it hit around 10,000 downloads from NuGet package manager, I moved it to a separate project for better maintainability.&lt;/blockquote&gt;

&lt;p&gt;There's already an &lt;a href="https://github.com/arafattehsin/SentimentAnalyzer/issues?q=is%3Aopen+is%3Aissue+label%3A%22up+for+grabs%22" rel="noopener noreferrer"&gt;issue opened&lt;/a&gt; as an enhancement with the appropriate tags, please feel free to contribute.&lt;/p&gt;

&lt;h3 id="-grok-net-https-github-com-marusyk-grok-net-labels-good-20first-20issue-"&gt;&lt;a href="https://github.com/Marusyk/grok.net/labels/good%20first%20issue" rel="noopener noreferrer"&gt;grok.NET&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;Cross platform .NET grok implementation as a NuGet package.&lt;/p&gt;

&lt;h3 id="-bot-builder-community-https-github-com-botbuildercommunity-"&gt;&lt;a href="https://github.com/BotBuilderCommunity" rel="noopener noreferrer"&gt;Bot Builder Community&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;A collection of repos led by the community, containing extensions, including middleware, dialogs, recognizers and more for the Microsoft Bot Framework SDK.&lt;/p&gt;

&lt;h3 id="-uno-platform-https-github-com-unoplatform-uno-labels-good-20first-20issue-"&gt;&lt;a href="https://github.com/unoplatform/uno/labels/good%20first%20issue" rel="noopener noreferrer"&gt;Uno Platform&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;The only platform for building native mobile, desktop and WebAssembly with C#, XAML from a single codebase. Open source and professionally supported.&lt;/p&gt;

&lt;h3 id="-azure-functions-openapi-extensions-https-github-com-azure-azure-functions-openapi-extension-"&gt;&lt;a href="https://github.com/Azure/azure-functions-openapi-extension" rel="noopener noreferrer"&gt;Azure Functions OpenAPI Extensions&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This extension provides an Azure Functions app with Open API capability for better discoverability to consuming parties.&lt;/p&gt;

&lt;h3 id="-dotnet-rosyln-https-github-com-dotnet-roslyn-"&gt;&lt;a href="https://github.com/dotnet/roslyn" rel="noopener noreferrer"&gt;dotnet/Rosyln&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;The Roslyn .NET compiler provides C# and Visual Basic languages with rich code analysis APIs.&lt;/p&gt;

&lt;h3 id="-dotnet-maui-https-github-com-dotnet-maui-issues-"&gt;&lt;a href="https://github.com/dotnet/maui/issues" rel="noopener noreferrer"&gt;dotnet/Maui&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;.NET MAUI is the .NET Multi-platform App UI, a framework for building native device applications spanning mobile, tablet, and desktop.&lt;/p&gt;

&lt;h3 id="-dotnet-docs-https-github-com-dotnet-docs-labels-up-for-grabs-"&gt;&lt;a href="https://github.com/dotnet/docs/labels/up-for-grabs" rel="noopener noreferrer"&gt;dotnet/Docs&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;The official Microsoft .NET repo that contains documentation. While it's not directly related to your development stack, I think it is still good (and active for Hacktoberfest).&lt;/p&gt;

&lt;h3 id="-mudblazor-https-github-com-mudblazor-mudblazor-issues-q-is-3aissue-is-3aopen-label-3ahacktoberfest-"&gt;&lt;a href="https://github.com/MudBlazor/MudBlazor/issues?q=is%3Aissue+is%3Aopen+label%3Ahacktoberfest" rel="noopener noreferrer"&gt;MudBlazor&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;Blazor Component Library based on Material design. The goal is to do more with Blazor, utilizing CSS and keeping Javascript to a bare minimum.&lt;/p&gt;

&lt;h3 id="-materialdesigninxamltoolkit-https-github-com-materialdesigninxaml-materialdesigninxamltoolkit-issues-q-is-3aissue-is-3aopen-label-3ahacktoberfest-"&gt;&lt;a href="https://github.com/MaterialDesignInXAML/MaterialDesignInXamlToolkit/issues?q=is%3Aissue+is%3Aopen+label%3AHacktoberfest" rel="noopener noreferrer"&gt;MaterialDesignInXamlToolkit&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;Comprehensive and easy to use Material Design theme and control library for the Windows desktop. Built with XAML &amp;amp; WPF, for C# &amp;amp; &lt;a href="http://vb.net/" rel="noopener noreferrer"&gt;VB.NET&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="-silk-net-https-github-com-dotnet-silk-net-issues-q-is-3aissue-is-3aopen-label-3ahacktoberfest-"&gt;&lt;a href="https://github.com/dotnet/Silk.NET/issues?q=is%3Aissue+is%3Aopen+label%3Ahacktoberfest" rel="noopener noreferrer"&gt;Silk.NET&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;The high-speed OpenGL, OpenCL, OpenAL, OpenXR, GLFW, SDL, Vulkan, Assimp, and DirectX bindings library your mother warned you about.&lt;/p&gt;

&lt;h3 id="-fluentassertions-https-github-com-fluentassertions-fluentassertions-issues-q-is-3aopen-is-3aissue-label-3a-22good-first-issue-22-"&gt;&lt;a href="https://github.com/fluentassertions/fluentassertions/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22" rel="noopener noreferrer"&gt;FluentAssertions&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;A very extensive set of extension methods that allow you to more naturally specify the expected outcome of TDD or BDD-style unit tests. Targets .NET Framework 4.7, .NET Core 2.1 and 3.0, as well as .NET Standard 2.0 and 2.1. Supports the unit test frameworks MSTest2, NUnit3, XUnit2, MSpec, and NSpec3.&lt;/p&gt;




&lt;p&gt;As I wrote earlier, this is not an exhaustive list; it's just a set of projects I recently came across and found willing to accept contributions and mark them as &lt;code&gt;hacktoberfest-accepted&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you think I should include your repo or something which you have found interesting, please let me know and I will add it to the list.&lt;/p&gt;

&lt;p&gt;Keep hacking.&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>dotnet</category>
      <category>azure</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Chatterbox with OpenAI’s GPT-3 and Bot Framework Composer</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Fri, 06 Aug 2021 08:35:41 +0000</pubDate>
      <link>https://dev.to/arafattehsin/chatterbox-with-openai-s-gpt-3-and-bot-framework-composer-4o35</link>
      <guid>https://dev.to/arafattehsin/chatterbox-with-openai-s-gpt-3-and-bot-framework-composer-4o35</guid>
      <description>&lt;p&gt;Last week, my level of excitement went up to another level when I got access to OpenAI's API. The API features a powerful general-purpose language model, GPT-3, and has received tens of thousands of applications to date. The main reason for my excitement was to explore what it has to offer in regard to conversational AI capabilities.&lt;/p&gt;

&lt;blockquote&gt;We're still in lockdown and have already run out of all kinds of games to play with our little one, so I thought to build something for her; at least we can now get to-the-point, one-liner answers from Google Assistant.&lt;/blockquote&gt;

&lt;p&gt;Unlike many other AI systems which are designed for one use-case, OpenAI’s API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. GPT-3 is the most powerful model behind the API today, with 175 billion parameters. There are several other models available via the API today, as well as other technologies and filters that allow developers to customize GPT-3 and other language models for their own use.&lt;/p&gt;

&lt;blockquote&gt;As of today, the API remains in a limited beta as OpenAI and academic partners test and assess the capabilities and limitations of these powerful language models.&lt;/blockquote&gt;

&lt;p&gt;In this post, rather than going into the details of the API and its offerings, I'll take you through how easily you can connect a bot built using Microsoft Bot Framework Composer and surface OpenAI's capabilities to your customers. While there can be many use-cases, ranging from classification to QnA to key-point extraction, I'll keep it quite simple and just add a chitchat capability using the questions-and-answers functionality of OpenAI's famous model, Davinci.&lt;/p&gt;

&lt;blockquote&gt;Davinci is the most capable engine and can perform any task the other models can perform and often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results.&lt;/blockquote&gt;

&lt;p&gt;The API works in a unique way, and building a whole conversation around it needs a few tricks with the Composer, but we'll talk about that after the bot's part.&lt;/p&gt;

&lt;h2&gt;The real part, Chatterbox&lt;/h2&gt;

&lt;p&gt;Now, let's focus on the main motive of this post: to integrate OpenAI's API with a Bot Framework Composer bot, which we call &lt;strong&gt;Chatterbox&lt;/strong&gt;. It will try to answer any question you put in your chat window or speak to your favorite smart assistant (&lt;em&gt;Google or Alexa&lt;/em&gt;). In order to achieve this, you need to have an API key. Hold on - if you do not have one, it is still worth continuing to read this post. Why? Because you'll learn about integrating any authenticated API as well as connecting your bot with Google Assistant. &lt;em&gt;Yes, you read that right, we'll also integrate it with Google Assistant (Alexa is &lt;a href="https://docs.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-alexa?view=azure-bot-service-4.0" rel="noopener noreferrer"&gt;easier&lt;/a&gt; though)&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;It's assumed that you're &lt;a href="https://www.arafattehsin.com/build-a-flight-tracker-using-bot-framework-composer-part-1/" rel="noopener noreferrer"&gt;familiar with Bot Framework Composer&lt;/a&gt; and the basic concepts of creating a good experience with chatbots.&lt;/blockquote&gt;

&lt;p&gt;Once you have the API key, first go to the &lt;strong&gt;Guides&lt;/strong&gt; section of OpenAI's documentation, skim it, and move to the &lt;b&gt;Completion&lt;/b&gt; section of the API reference. There, you will learn about the API specification, such as the endpoint, verb, request body and much more. &lt;em&gt;Protip: copy the example request and tailor it according to your needs later.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After going through the docs, open up Bot Framework Composer and create a new empty bot. Remove the &lt;strong&gt;Send a response&lt;/strong&gt; action and add a &lt;strong&gt;Begin a new dialog&lt;/strong&gt; action instead. This will help you create a new dialog (you can name it &lt;strong&gt;TalkingDialog&lt;/strong&gt;) and then you can start working in that particular dialog. Please note that this is not the best way to create your bots, and you may want to add some natural language capabilities to yours. It is also possible that your bot has some sophisticated or complex flow, such as ordering food, tracking a parcel and so on. The main reason I did it this way is to demonstrate the working of the API rather than connecting it with any NLP services or building sophisticated capabilities into this bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FCreatingDialog.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FCreatingDialog.gif" alt="Creating Dialog in Bot Framework Composer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, in &lt;strong&gt;TalkingDialog&lt;/strong&gt;, you first have to create an experience that keeps asking one question after another. As Bot Framework Composer lacks while-loop functionality for now, we'll try to achieve this using some basic properties (such as &lt;strong&gt;dialog.isLaunched&lt;/strong&gt;) and by adding two question (text) prompts. Note that the &lt;strong&gt;isLaunched&lt;/strong&gt; property needs to be passed as an option from the Greeting trigger.&lt;/p&gt;

&lt;p&gt;Then, you just have to add a text prompt. In the text prompt, you write a simple introductory message; I am writing the exact same message as I'd pass to the API. In the input section, you can see that I'm setting up a property, &lt;strong&gt;user.question&lt;/strong&gt;, which means that whatever the user writes in response to the introductory message (&lt;em&gt;which is kind of a question, but not really&lt;/em&gt;) will be stored in that property. This response from the user is considered the &lt;strong&gt;question&lt;/strong&gt; for OpenAI's API. After receiving a response from the API (which is an &lt;strong&gt;answer&lt;/strong&gt; in our case), that response will be set as a text prompt again. As soon as the user asks another question, the &lt;strong&gt;Repeat this Dialog&lt;/strong&gt; action will be called and it will follow a different path rather than going into the introduction one.&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://www.arafattehsin.com/wp-content/uploads/2021/08/Dialogflow_1.webm" rel="noopener noreferrer"&gt;video&lt;/a&gt; clip may help. 🙂&lt;/p&gt;

&lt;h3&gt;What's in the API?&lt;/h3&gt;

&lt;p&gt;The API is super simple and is authenticated with a bearer key; the docs are really well-written and contain many examples. However, it gets a little tricky when it comes to sending prompts. The reason is that it expects a newline (\n) escape sequence, as well as prefixes such as &lt;strong&gt;Q&lt;/strong&gt; and &lt;strong&gt;A&lt;/strong&gt;. These differ per use-case; for lengthy, contextual chats, you may have to set the prefixes to &lt;strong&gt;Human&lt;/strong&gt; and &lt;strong&gt;AI&lt;/strong&gt;. This is all good when you're sending a request using clients like Postman (example below).&lt;/p&gt;
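&lt;p&gt;To make the expected request shape concrete, here is a hedged sketch of how those prompts could be assembled before the HTTP call - the helper names are illustrative, not part of any SDK:&lt;/p&gt;

```csharp
// Builds completion prompts in the Q/A and Human/AI styles described above.
public static class PromptBuilder
{
    // Single-turn Q&A: the engine completes the text after the trailing "A:".
    public static string ForQuestion(string question) =>
        $"Q: {question}\nA:";

    // Contextual chat: previous turns are kept and the new message is appended.
    public static string ForChat(string history, string userMessage) =>
        $"{history}\nHuman: {userMessage}\nAI:";
}
```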

&lt;p&gt;However, it gets really tricky when you have to keep the last prompt, concatenate it with the new one, send it again and repeat. This is valid (&lt;em&gt;and working&lt;/em&gt;) for the case when you want to build a contextual conversation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FPostman.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FPostman.png" alt="Postman Request and Response for OpenAI API for Completions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our case, we're not doing that, because we're just &lt;span&gt;sending a question, receiving an answer and showing it to the user. Now, let's see what's going on inside the &lt;strong&gt;Send an HTTP Request&lt;/strong&gt; action. This is the place where we fill in the request and send it to OpenAI's API. I have purposefully avoided variables, just to show you the raw request rather than keeping it in an object. You can see that I am passing &lt;strong&gt;user.question&lt;/strong&gt; dynamically into the prompt. Also, in the headers, I am filling in the Content-Type and Authorization keys. This is how you pass an authenticated request. &lt;/span&gt;&lt;/p&gt;
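&lt;p&gt;Outside of Composer, the same kind of authenticated request could be built like this (the endpoint URL and body fields are assumptions for illustration; check the OpenAI docs for the current values):&lt;/p&gt;

```python
import json
import urllib.request

API_KEY = "YOUR-API-KEY"  # never commit a real key

def build_request(question: str) -> urllib.request.Request:
    # URL and body fields are illustrative assumptions.
    url = "https://api.openai.com/v1/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # bearer authentication
    }
    body = json.dumps({"prompt": f"Q: {question}\nA:", "max_tokens": 64})
    return urllib.request.Request(url, data=body.encode(), headers=headers)

req = build_request("Where should I park in Sydney?")
# urllib.request.urlopen(req) would actually send it; omitted here.
```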

&lt;p&gt;&lt;em&gt;Note: I've removed the authentication key on purpose. &lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can check out &lt;a href="https://www.arafattehsin.com/wp-content/uploads/2021/08/HttpAction.webm" rel="noopener noreferrer"&gt;this&lt;/a&gt; video to see what the HTTP Action looks like. &lt;/p&gt;

&lt;p&gt;The response is stored in &lt;strong&gt;dialog.result&lt;/strong&gt; and, with a little string manipulation, I am able to send it back as a prompt to the user.&lt;/p&gt;

&lt;pre&gt;${substring(dialog.result.content.choices[0].text, 2, length(dialog.result.content.choices[0].text) - 2)}&lt;/pre&gt;
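&lt;p&gt;That adaptive expression starts at index 2 and takes length - 2 characters, i.e. it drops the first two characters of the completion (the leading \n\n the Completions API tends to prepend). A rough Python equivalent, using stripping as a slightly more robust assumption:&lt;/p&gt;

```python
def clean_completion(text: str) -> str:
    # Equivalent of substring(text, 2, length(text) - 2): drop the
    # leading "\n\n" that the Completions API prepends. Stripping
    # all leading newlines is a slightly more robust variant.
    return text.lstrip("\n")

print(clean_completion("\n\nCanberra is the capital of Australia."))
```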

&lt;p&gt;This is how it is going to look in the &lt;a href="https://www.arafattehsin.com/wp-content/uploads/2021/08/Webchat-Emulator.webm" rel="noopener noreferrer"&gt;Webchat / Emulator&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="google"&gt;Surface your Bot to Google Assistant&lt;/h3&gt;

&lt;p&gt;In my last post, I &lt;a href="https://www.arafattehsin.com/bot-framework-component-model/" rel="noopener noreferrer"&gt;talked about the Bot Framework Component Model&lt;/a&gt;. It has added a whole set of new capabilities to the Composer. You can now add NuGet or npm packages to your bots without writing any code or moving away from the Composer. In your Composer client, you can go to the Packages area to add first-party and community (third-party) packages. You can also build packages locally and add them to your bot (&lt;em&gt;this is out of scope for this post, so I will skip it&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FPackageManager.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FPackageManager.png" alt="Package Manager for Bot Framework Composer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;So, in this post, I'm going to talk about adding an opensource community &lt;a href="https://github.com/BotBuilderCommunity/botbuilder-community-dotnet/tree/develop/libraries/Bot.Builder.Community.Components.Adapters.ActionsSDK" rel="noopener noreferrer"&gt;component for the Google Actions SDK&lt;/a&gt;. This component is powered by the Actions SDK adapter, primarily built by &lt;a href="https://twitter.com/GaryPretty" rel="noopener noreferrer"&gt;Gary Pretty&lt;/a&gt; (big thanks to him). Once you're in the &lt;strong&gt;Package Manager&lt;/strong&gt; section, find Bot.Builder.Community.Components.Adapters.ActionsSDK and click Install on the right.&lt;/p&gt;

&lt;blockquote&gt;
&lt;a href="https://www.arafattehsin.com/bot-builder-community-project/" rel="noopener noreferrer"&gt;Bot Builder Community&lt;/a&gt; is a collection of repos led by the community, containing extensions, including middleware, dialogs, recognizers and more for the Microsoft Bot Framework SDK.&lt;/blockquote&gt;

&lt;p&gt;Before moving any further with the package or the bot, let's look at &lt;a href="https://console.actions.google.com/" rel="noopener noreferrer"&gt;Google Actions&lt;/a&gt;, which will be used to communicate with your Google Assistant. Rather than reinventing another guide: if you do not know what Google Actions are or how to use them with your Bot Framework Composer bot, I'd recommend you follow &lt;a href="https://github.com/BotBuilderCommunity/botbuilder-community-dotnet/blob/develop/libraries/Bot.Builder.Community.Components.Adapters.ActionsSDK/README.md" rel="noopener noreferrer"&gt;this comprehensive guide&lt;/a&gt;. Follow it up to the 8th point of &lt;strong&gt;Configure your Actions SDK Project&lt;/strong&gt;. Once you're done configuring your project, you should see all the options (left-hand side) in your Action, like below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FGoogle-Action-Chatterbox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FGoogle-Action-Chatterbox.png" alt="Chatterbox Google Actions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;If you still get stuck at any point while creating Google Actions for your bot, please let me know in the comments and I'll be happy to help you out.&lt;/blockquote&gt;

&lt;p&gt;Let's go back to our Bot Framework Composer bot and add the configurations to your Action. 🔧&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FConfigureAction.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FConfigureAction.gif" alt="Configure Google Actions in Bot Framework Composer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're like me, running this bot locally for testing purposes (or you haven't really made up your mind to publish this Action), then you may want to use &lt;strong&gt;ngrok&lt;/strong&gt;, as it gives you the ability to test the complete functionality of your bot while running it entirely locally. &lt;a href="https://docs.microsoft.com/en-us/azure/bot-service/bot-service-debug-channel-ngrok?view=azure-bot-service-4.0&amp;amp;WT.mc_id=AI-MVP-5003464" rel="noopener noreferrer"&gt;This doc&lt;/a&gt; covers everything around it. Once you've set up the tunnel to your local port with ngrok, you can copy your bot's link and set it as the webhook URL of your Google Action.&lt;/p&gt;
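&lt;p&gt;The tunnel itself is a one-liner (port 3978 is the usual Bot Framework default, so adjust it if your Composer bot runs on a different port; the exact endpoint path to append comes from the adapter's configuration):&lt;/p&gt;

```shell
# Tunnel the locally running bot over HTTPS. 3978 is the common
# Bot Framework default port; use whatever port Composer shows you.
ngrok http 3978
# Copy the https forwarding URL that ngrok prints and paste it
# (plus the adapter's endpoint path) into the webhook URL field
# of your Google Action.
```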

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FWebhook.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.arafattehsin.com%2Fwp-content%2Fuploads%2F2021%2F08%2FWebhook.png" alt="Google Action Webhook"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Now you're all set to test your bot with your Action. Click Start (or Restart) bot in your Composer and move over to your Google Action.&lt;/p&gt;

&lt;h3&gt;Actions Simulator 👎 - Google Assistant 👍&lt;/h3&gt;

&lt;p&gt;The experience with the Simulator is quite limited (read: bad), which is why I never recommend testing your bot with it. Once you're past the first or second response of your bot, you should test it with the actual Assistant.&lt;/p&gt;

&lt;p&gt;However, this is how the &lt;a href="https://www.arafattehsin.com/wp-content/uploads/2021/08/GoogleAction-Test.mp4" rel="noopener noreferrer"&gt;simulator experience&lt;/a&gt; should look.&lt;/p&gt;

&lt;p&gt;You do not necessarily need a Google Nest Hub or Home to work with this; it's just that I have one, so I love using it. If you own an Android device (or even an iPhone), you can still use Google Assistant.&lt;/p&gt;

&lt;blockquote&gt;My use case was more of an educational one but, as you can see, it is good enough to work in any industry and at any scale. Therefore, solving customer service problems, extracting useful information or tailoring the whole experience for your business shouldn't be a big problem.&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/9uXiW-ZSqnc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;OpenAI has been really fruitful for the scenarios I have used here and offline, and when it is combined with tools like Bot Framework Composer, it can surely bring game-changing experiences to different business verticals. I'd be happy to see your comments on your thoughts, and any challenges you have come across with this one so far.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>botframework</category>
      <category>gpt3</category>
      <category>openai</category>
    </item>
    <item>
      <title>Introducing Bot Framework Component Model for better sharing and extensibility</title>
      <dc:creator>Arafat Tehsin</dc:creator>
      <pubDate>Tue, 08 Jun 2021 07:53:48 +0000</pubDate>
      <link>https://dev.to/arafattehsin/introducing-bot-framework-component-model-for-better-sharing-and-extensibility-27b2</link>
      <guid>https://dev.to/arafattehsin/introducing-bot-framework-component-model-for-better-sharing-and-extensibility-27b2</guid>
      <description>&lt;p&gt;Hey there! 👋 I am back with the bot post again.&lt;/p&gt;

&lt;p&gt;Conversational AI has witnessed tremendous growth in just the past 2-3 years. According to the latest research, Gartner &lt;a href="https://www.gartner.com/smarterwithgartner/top-cx-trends-for-cios-to-watch/#:~:text=Chatbots%2C%20virtual%20assistants%20and%20robots,up%20from%2015%25%20in%202018"&gt;predicts&lt;/a&gt; that by 2022, 70% of customer interactions will involve bots, an increase of 15% from 2018. This does not come as a surprise, considering a &lt;a href="https://www.businessinsider.com/chatbot-market-stats-trends"&gt;good amount&lt;/a&gt; of consumers say they prefer chatbots to other virtual agents.&lt;/p&gt;

&lt;p&gt;Last couple of months must have been super overwhelming for the overall Conversational AI team at Microsoft. Whether it's the &lt;a href="https://docs.microsoft.com/en-us/azure/bot-service/what-is-new?view=azure-bot-service-4.0&amp;amp;WT.mc_id=AI-MVP-5003464"&gt;Bot Framework SDK&lt;/a&gt; or &lt;a href="https://docs.microsoft.com/en-us/composer/what-is-new"&gt;Composer&lt;/a&gt; with tons of new features, Power Virtual Agents &lt;a href="https://docs.microsoft.com/en-us/power-virtual-agents/advanced-bot-framework-composer?WT.mc_id=AI-MVP-5003464"&gt;seamless integration&lt;/a&gt; with Composer or the &lt;a href="https://docs.microsoft.com/en-us/azure/bot-service/bot-service-manage-channels?view=azure-bot-service-4.0&amp;amp;WT.mc_id=AI-MVP-5003464"&gt;new channels&lt;/a&gt; and improvements at the Azure Bot Service; there have been significant updates for businesses. Cherry on the top is &lt;a href="https://techcommunity.microsoft.com/t5/azure-ai/build-2021-conversational-ai-update/ba-p/2375203"&gt;Build 2021 goodness&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;As I said in my previous &lt;a href="https://www.arafattehsin.com/generate-dialogs-automatically-using-form-dialogs-for-bot-framework-composer/"&gt;blog post&lt;/a&gt;, where I discussed the automatic generation of Form Dialogs, every new feature deserves a whole blog post because of its explorative nature and productive use case.&lt;/p&gt;

&lt;p&gt;Therefore, today I am going to write about something freshly baked, super hot from the oven: a way of creating packages for your bots for better sharing, re-usability and cost-effective solutions. Before we get into the details of packages and their implementation, let us first understand why this is so important that a dedicated team worked on this new model for months to bring it to you.&lt;/p&gt;

&lt;blockquote&gt;
If you're relatively new to the Bot Framework (or Composer), I'd suggest you have a look &lt;a href="https://docs.microsoft.com/en-us/composer/tutorial/tutorial-introduction"&gt;here&lt;/a&gt;, as this post assumes that you're familiar with the concepts and implementation of the Bot Framework.&lt;/blockquote&gt;

&lt;h3&gt;Challenges with the Current State&lt;/h3&gt;

&lt;p&gt;If you've built a sophisticated bot using Bot Framework Composer (&amp;lt; v2.0.0) that is responsible for booking and tracking food orders from your customers, and you want to extend your bot to incorporate some middleware, custom adapters and other actions, you may face a lot of hurdles sorting that out, and it won't be straightforward at all.&lt;/p&gt;

&lt;p&gt;Even if you succeed in addressing all of these challenges, you may have to go through the same issues again for another bot. &lt;strong&gt;It lacks re-usability.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, if you create your bot using (Enterprise) Virtual Assistant Template, QnA Maker or just simple Echo bot using Visual Studio, you may end up having different experiences with each of them and building a common functionality around them may lead to a dead end.&lt;/p&gt;

&lt;p&gt;The Bot Framework team realized this, and it became the reason for the evolution of the Bot Framework Component Model.&lt;/p&gt;

&lt;h3&gt;Meet Bot Framework Component Model&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/BotBuilderCommunity/botbuilder-community-dotnet/releases/tag/v4.13.0"&gt;Bot Framework Component&lt;/a&gt; Model is the new framework for sharing and re-using a bot's functionality in the form of bot templates (&lt;em&gt;Empty bot, Core bot with Language&lt;/em&gt;), packages (&lt;em&gt;Custom Adapters, Middleware&lt;/em&gt;) or complete bots as skills. You can now create bots with confidence, as you won't find yourself in a situation where you can't customize, or find it difficult to share, portions of your bot with others (&lt;em&gt;unless it's poorly designed&lt;/em&gt; 🤐).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m8JvGThq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/component-model.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m8JvGThq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/component-model.png" alt="Bot Framework Component Model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;Bot Framework Composer is now considered the standard go-to approach for all sorts of bots.&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;This model comprises a few essential pieces:&lt;/p&gt;

&lt;h3&gt;Adaptive Runtime&lt;/h3&gt;

&lt;p&gt;Adaptive Runtime is the core of this model which wraps up the Bot Framework SDK and provides the extensibility points to add any functionality by importing packages, adding your custom coded extensions or even connecting to skills. This means that you just have to take care of your business logic rather than worrying about the runtime code.&lt;/p&gt;

&lt;h3&gt;Templates&lt;/h3&gt;

&lt;p&gt;Templates such as Empty Bot, Core Bot with QnA Maker etc. are created on top of the Component Model. Since they're built by composing packages (&lt;em&gt;discussed below&lt;/em&gt;), you can always edit or extend them according to your needs. For example, if you create a bot using Core Bot with QnA Maker, you can then add other packages such as Custom Adapters (Google, Zoom etc.) or Declarative Assets (LG) such as Welcome, Help &amp;amp; Cancel. &lt;strong&gt;Flexibility is the key.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oorQhMqR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/templates.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oorQhMqR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/templates.png" alt="Bot Framework Composer Templates"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Skills&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/add-skills-to-your-bot/"&gt;Skills&lt;/a&gt; are just regular bots with the plug and play functionality. These bots can be connected to your existing bots to process the messages for you. This is done through the Skill manifest which establishes the contract that bots can follow and communicate with each other.&lt;/p&gt;

&lt;h3&gt;Packages&lt;/h3&gt;

&lt;p&gt;Packages are the components (or &lt;em&gt;bits&lt;/em&gt;, as per the product team) of a bot that you want to re-use and share. Packages can be in the form of Declarative Dialogs (LG templates such as Welcome, Help etc.), Custom Adapters (Google, RingCentral, WhatsApp, Zoom etc.), Custom Actions (Adaptive Cards, Microsoft Teams etc.), Middleware and more. They can also be built in combination with one another, such as Custom Actions with Declarative Assets, and so on.&lt;/p&gt;

&lt;blockquote&gt;Packages are just regular NuGet / npm packages depending upon the code language of your bot. On NuGet / npm, these packages are tagged with &lt;em&gt;msbot-component&lt;/em&gt; to be displayed in Bot Framework Composer.&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;With the Bot Framework Composer v2.0.0 release, the creation experience is now based upon the Adaptive Runtime, which is why you can see a new little icon on the left-hand side of the Composer indicating a whole section for Packages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d28DxT9B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/package-manager.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d28DxT9B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/package-manager.png" alt="Package Manager for Bot Framework Composer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;These packages can easily be added to your bot by just clicking the Install button. If they're custom actions, you'll be able to see them right away in the menu. Other types of packages, such as Middleware, Custom Adapters, Skills etc., can be used as described by their creators.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/wp-content/uploads/2021/05/BotFramework-Packages.gif.mp4"&gt;Check out the video on this link to see how you can add the packages&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Did you notice that the team at Microsoft has done a great job of bringing upfront all of those packages essential for bot development? For example, Adaptive Cards, Teams, Telephony and much more.&lt;/p&gt;

&lt;p&gt;But is it just Microsoft doing this? What if I want to connect to some external middleware (as we discussed at the start) or surface our bot to Google Assistant? Do we have something for that? The short answer is &lt;strong&gt;yes&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;Community Packages&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/bot-builder-community-project/"&gt;Bot Builder Community&lt;/a&gt;, with the awesome collaboration and contributions from Microsoft, has done a great job in developing all of these fantastic packages which includes several Custom Adapters, Recognizers and Hand-off packages. You can easily have a look at the community packages by selecting the &lt;strong&gt;community &lt;/strong&gt;options from the list. &lt;em&gt;Right now, they are not filtered properly but you can always identify the Community packages with &lt;strong&gt;BotBuilder.Community.Components.&amp;lt;type&amp;gt;.&amp;lt;name&amp;gt; &lt;/strong&gt;pattern. &lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;Community Packages are 100% opensource, and you're most welcome to get some inspiration from the existing implementations or provide your feedback by opening an issue.&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5PoC3uF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/community-packages.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5PoC3uF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.arafattehsin.com/wp-content/uploads/2021/05/community-packages.gif" alt="Community Packages for Bot Framework Component Model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Local Packages&lt;/h4&gt;

&lt;p&gt;Now, if you are like me and you'd like to create your own packages, or use packages developed by your team internally, you'll want to test them before actually publishing them to either npm or NuGet. To address this, we set up a local feed (&lt;em&gt;same as we do for other .NET / JS packages&lt;/em&gt;) by clicking the &lt;strong&gt;Edit feeds&lt;/strong&gt; button inside the Package Manager and adding a new feed. Once you've set up the local feed, your packages will start showing up and you can install and use them in the same way as shown above.&lt;/p&gt;
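&lt;p&gt;If you prefer the command line, a local folder can also be registered as a NuGet source for testing outside Composer (the folder path and feed name below are illustrative assumptions):&lt;/p&gt;

```shell
# Register a local folder as a NuGet feed; Composer's Edit feeds
# dialog accepts the same folder path directly.
dotnet nuget add source "C:\packages\local" --name LocalComponents
```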

&lt;p&gt;&lt;a href="https://www.arafattehsin.com/wp-content/uploads/2021/05/2021-05-25_19-55-12.mp4"&gt;Check out the video on how you can add the locally developed packages to your bots&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Packages are a game changer, especially for Composer-first bots. They definitely eliminate the need to re-write code and increase extensibility with better sharing capabilities. Not only that, they also allow you to connect with Skills seamlessly. In my upcoming post, I'll walk you through how you can create different types of packages using the Bot Framework Component Model.&lt;/p&gt;

&lt;p&gt;Until next time.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>botframework</category>
      <category>chatbots</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
