<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ruth Yakubu</title>
    <description>The latest articles on DEV Community by Ruth Yakubu (@ruthieyakubu).</description>
    <link>https://dev.to/ruthieyakubu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F422555%2F45ab8f53-7096-4724-a1b0-e3af60808f0f.jpg</url>
      <title>DEV Community: Ruth Yakubu</title>
      <link>https://dev.to/ruthieyakubu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ruthieyakubu"/>
    <language>en</language>
    <item>
      <title>How to debug AI image models to identify societal risks and harms</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Thu, 28 Mar 2024 23:22:41 +0000</pubDate>
      <link>https://dev.to/azure/how-to-debug-ai-image-models-to-identify-societal-risks-and-harms-3pdj</link>
      <guid>https://dev.to/azure/how-to-debug-ai-image-models-to-identify-societal-risks-and-harms-3pdj</guid>
      <description>&lt;p&gt;Did you know AI generated images can have unintended biases, stereotypes, and harms? If you prompt an AI image generator for images of tech developers, you are likely going to get male dominated images. You're not going to see images of females or diverse ethnicities. Scenarios like this reveal challenges where image models do not always reflect every aspect of reality. The problem is multimodal AI systems such as DALL-E that can generate text-to-image, and other data types are paving the way for exciting applications in areas such as productivity, health care, creativity, and automation. That's why identifying and mitigating responsible AI risks are essential.&lt;/p&gt;

&lt;p&gt;✨ Join the #MarchResponsibly challenge by learning responsible AI tools and services available to you.&lt;/p&gt;

&lt;p&gt;In this article, Besmira Nushi, an AI researcher at Microsoft, explores several techniques and challenges in ensuring multimodal models are responsibly developed, evaluated, and deployed. She discusses how to unmask hidden societal biases across modalities. She shares strategies for evaluating, mitigating, and improving AI image models on issues such as spurious correlations, where a model incorrectly predicts an object based on other objects associated with it, or the way text-to-image models often misinterpret surrounding cues in an image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe21g0pbntni75tvxy4f2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe21g0pbntni75tvxy4f2.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉🏽 Check out Besmira Nushi's article: &lt;a href="https://aka.ms/march-rai/evaluate-multimodal-models"&gt;https://aka.ms/march-rai/evaluate-multimodal-models&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎉Happy Learning :)&lt;/p&gt;

</description>
      <category>responsibleai</category>
      <category>beginners</category>
      <category>developer</category>
      <category>azure</category>
    </item>
    <item>
      <title>Challenges of Evaluating and Understanding Foundation models</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Thu, 28 Mar 2024 17:20:35 +0000</pubDate>
      <link>https://dev.to/azure/challenges-of-evaluating-and-understanding-foundation-models-320g</link>
      <guid>https://dev.to/azure/challenges-of-evaluating-and-understanding-foundation-models-320g</guid>
      <description>&lt;p&gt;The process of evaluating and understanding foundation model such as LLMs or SLMs is complex.  This involves finding Benchmarks needed to established performance thresholds as well as accelerating model improvement. It is vital to keep responsible AI practices in mind throughout the analysis cycle.&lt;/p&gt;

&lt;p&gt;✨ Join the &lt;strong&gt;#MarchResponsibly&lt;/strong&gt; challenge by learning responsible AI tools and services available to you.&lt;/p&gt;

&lt;p&gt;In this video, Besmira Nushi, an AI researcher at Microsoft, discusses critical factors to consider when understanding or evaluating foundation models. In her talk, she addresses risks such as ensuring the data represents the real world and the model produces factual information as well as non-toxic content. In addition, she illustrates some performance issues that can arise from long, open-ended generative outputs. When building generative AI apps, different prompt variants are sometimes needed. She shares the negative effects on those variants that can occur when a model is updated.&lt;/p&gt;

&lt;p&gt;💡Learn the responsible AI harms to consider when working with foundation models and some practices AI researchers use to understand, evaluate, and improve them:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/p3QNY9AvNAM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;👉🏽 Check out Besmira Nushi's video: &lt;a href="https://aka.ms/march-rai/evaluate-foundation-models"&gt;https://aka.ms/march-rai/evaluate-foundation-models&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎉Happy Learning :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Expert guide on advancing responsible AI Maturity</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Mon, 25 Mar 2024 22:27:17 +0000</pubDate>
      <link>https://dev.to/azure/expert-guide-on-advancing-responsible-ai-maturity-21an</link>
      <guid>https://dev.to/azure/expert-guide-on-advancing-responsible-ai-maturity-21an</guid>
      <description>&lt;p&gt;Responsible AI is top of mind for every organization and the AI practitioners that work for them, whether it is large or small. One challenge is assessing and identifying what responsible AI risks or harms the AI system can inflict on people or society. Even when the AI risks are discovered, the next dilemma is how to mitigate them.  The solutions are not always straightforward and sometimes tradeoffs need to be made.&lt;/p&gt;

&lt;p&gt;✨ Join the &lt;strong&gt;#MarchResponsibly&lt;/strong&gt; challenge by learning responsible AI tools and services available to you.&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://aka.ms/march-rai/guide-to-advance-ai"&gt;best-practice guide&lt;/a&gt; from Mihaela Vorvoreanu, a responsible AI researcher at Microsoft, on the tips, tools, and practices organizations and developers can use to reach responsible AI maturity. The article discusses a responsible AI red-teaming guide on how to identify potential adversarial attacks when testing for security vulnerabilities. It then explains why, once harms are identified, it is important to examine their frequency, put mitigations in place, and continuously monitor for the emergence of new harms. Since all of this requires team collaboration and leadership buy-in, the article recommends interactive tools and frameworks teams can leverage to reach higher responsible AI maturity stages.&lt;/p&gt;

&lt;p&gt;The article covers the following practical responsible AI tools &amp;amp; guides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Red teaming guide&lt;/li&gt;
&lt;li&gt;Making complex AI harm trade-offs&lt;/li&gt;
&lt;li&gt;Human-AI experience (HAX) toolkit&lt;/li&gt;
&lt;li&gt;Human-AI Interaction guide&lt;/li&gt;
&lt;li&gt;Reaching maturity levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉🏽 Check out Mihaela Vorvoreanu's article: &lt;a href="https://aka.ms/march-rai/guide-to-advance-ai"&gt;https://aka.ms/march-rai/guide-to-advance-ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎉Happy Learning :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to protect PII information from your AI apps?</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Mon, 25 Mar 2024 20:45:59 +0000</pubDate>
      <link>https://dev.to/azure/how-to-protect-pii-information-from-your-ai-apps-5eb3</link>
      <guid>https://dev.to/azure/how-to-protect-pii-information-from-your-ai-apps-5eb3</guid>
      <description>&lt;p&gt;Is your Generative AI app exposing PII information?  With the rise in popularity with creating generative AI apps or building AI solutions, security/privacy tends to be overlooked.  There are many harms that an AI system can cause if PII data is not extracted or masked.  For example, information such as name, race, gender or age can lead to bias in employment decisions-making based on someone's name, race gender or age.  In addition, anyone getting access to people's social security and credit card information can lead to privacy violations or security breach issues.&lt;/p&gt;

&lt;p&gt;✨ Join the &lt;strong&gt;#MarchResponsibly&lt;/strong&gt; challenge and try out the hands-on tutorial.&lt;/p&gt;

&lt;p&gt;Check out my colleague &lt;a class="mentioned-user" href="https://dev.to/dikosomeleze"&gt;@dikosomeleze&lt;/a&gt;'s &lt;a href="https://aka.ms/march-rai/pii-extraction"&gt;blog tutorial&lt;/a&gt; on how to use AI language services and Microsoft Fabric Lakehouse to protect your PII. Data is an important part of AI. The blog covers a lot of valuable material, from using data-cleansing pipelines to handle PII to identifying, extracting, and masking PII data.&lt;/p&gt;

&lt;p&gt;Whether you are using LLMs or training AI models, the tutorial discusses the responsible AI impacts and some best practices for protecting PII.&lt;/p&gt;
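
&lt;p&gt;To make the masking idea concrete, here is a minimal, hypothetical sketch using regular expressions for two common PII shapes. It only illustrates the identify-and-mask step; the tutorial itself relies on AI language services, which recognize far more entity types and don't depend on fixed patterns:&lt;/p&gt;

```python
import re

# Hypothetical patterns for two easy-to-spot PII shapes.
# A real pipeline would use an AI language service instead of regexes.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace recognized PII spans with a category placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("SSN 123-45-6789, card 4111 1111 1111 1111"))
# → SSN [SSN], card [CREDIT_CARD]
```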

&lt;p&gt;👉🏽 See &lt;a class="mentioned-user" href="https://dev.to/dikosomeleze"&gt;@dikosomeleze&lt;/a&gt;'s blog: &lt;a href="https://aka.ms/march-rai/pii-extraction"&gt;https://aka.ms/march-rai/pii-extraction&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎉Happy Learning :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to build trustworthy AI Systems with Responsible AI tools</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Mon, 25 Mar 2024 20:00:23 +0000</pubDate>
      <link>https://dev.to/azure/how-to-build-trustworthy-ai-systems-with-responsible-ai-tools-31p1</link>
      <guid>https://dev.to/azure/how-to-build-trustworthy-ai-systems-with-responsible-ai-tools-31p1</guid>
      <description>&lt;p&gt;Responsible AI has been a hot topic in the recent months due to the rapid AI innovations and breakthroughs. From daily news headlines to grandma, people are excited and concern about the use or misuse of AI.  Governments are responding by implementing regulations on AI.  As a result, responsible AI is now a competitive skill that is in demand for the AI developer community to have.&lt;/p&gt;

&lt;p&gt;✨ Join the &lt;strong&gt;#MarchResponsibly&lt;/strong&gt; challenge by completing hands-on workshop and share your earned badges.&lt;/p&gt;

&lt;p&gt;Check out my colleague Lee Scott's &lt;a href="https://aka.ms/march-rai/build-trustworthy-ai"&gt;blog post&lt;/a&gt; on the 3-part Azure Responsible AI hands-on workshop that we deliver to startups. Large tech corporations are no longer the only ones that need to worry about violating responsible AI standards. Startups also need to address AI risks or harms from their AI solutions when working with investors, partners, and customers.&lt;/p&gt;

&lt;p&gt;The blog includes labs on the following practical responsible AI tools &amp;amp; services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Content Safety&lt;/li&gt;
&lt;li&gt;Azure Prompt Flow&lt;/li&gt;
&lt;li&gt;Azure Responsible AI dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉🏽 Check out Lee Scott's blog with the videos &amp;amp; workshop links: &lt;a href="https://aka.ms/march-rai/build-trustworthy-ai"&gt;https://aka.ms/march-rai/build-trustworthy-ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎉Happy Learning :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Responsible AI dashboard hands-on MS Learn module is now available!</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Mon, 25 Mar 2024 19:12:58 +0000</pubDate>
      <link>https://dev.to/azure/responsible-ai-dashboard-hands-on-ms-learn-module-is-now-available-2g6n</link>
      <guid>https://dev.to/azure/responsible-ai-dashboard-hands-on-ms-learn-module-is-now-available-2g6n</guid>
      <description>&lt;p&gt;Did you know that AI models can be biased? Data scientists, AI developers and organizations understand the importance of responsible AI, however the challenges they face are finding the right tools to help them identify, debug, and mitigate erroneous behavior from AI models. &lt;/p&gt;

&lt;p&gt;Researchers, organizations, the open-source community, and Microsoft have been instrumental in developing tools and services to empower AI developers. Traditional machine learning performance metrics are based on aggregate calculations, which are not sufficient for pinpointing AI issues that are human-centric. For example, unfairness in AI systems used for employment, loan approvals, or medical diagnoses can have a negative impact on people, society, or human rights.&lt;/p&gt;

&lt;p&gt;👩🏽‍💻Join the &lt;strong&gt;#MarchResponsibly&lt;/strong&gt; challenge and enhance your AI debugging skills with #MSLearn's new FREE hands-on module on the Responsible AI dashboard!&lt;/p&gt;

&lt;p&gt;💡Check out the mind-blowing features and capabilities:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/G-nBfBNvtg4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;👉🏽 Get started and share your completion badge, everyone: &lt;a href="https://aka.ms/march-rai/lresponsible-rai-dashboard"&gt;https://aka.ms/march-rai/lresponsible-rai-dashboard&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎉Happy Learning! :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introducing new MS Learn module on how to detect anomalies in time-series data from IoT devices.</title>
      <dc:creator>Ruth Yakubu</dc:creator>
      <pubDate>Mon, 07 Mar 2022 22:59:27 +0000</pubDate>
      <link>https://dev.to/ruthieyakubu/introducing-new-ms-learn-module-on-how-to-detect-anomalies-in-from-iot-devices-3e63</link>
      <guid>https://dev.to/ruthieyakubu/introducing-new-ms-learn-module-on-how-to-detect-anomalies-in-from-iot-devices-3e63</guid>
      <description>&lt;p&gt;We are excited to announce a new AI module on MS Learn: &lt;a href="https://aka.ms/learnAnomalyDetector"&gt;Identify abnormal time-series data with Anomaly Detector&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;Introduction&lt;/h1&gt;

&lt;p&gt;Training a machine learning model for anomaly detection is challenging. There are multiple types of data patterns: spikes/dips, periodic wavy patterns, inclining patterns, declining patterns, and so on. In real life, patterns in time-series data are inconsistent and unpredictable, and no single algorithm fits them all. As a result, developers or data scientists have to continuously train new models to accurately detect issues. In this Learn module, you will explore how Azure Anomaly Detector automatically selects an algorithm or model based on the data pattern it observes in real time.&lt;/p&gt;
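
&lt;p&gt;As a toy sketch of the underlying idea, here is a fixed trailing-window z-score rule in plain Python. This is only an illustration of why one rule doesn't fit all patterns (a wavy or trending series would trip it constantly), not the adaptive model selection Anomaly Detector performs:&lt;/p&gt;

```python
from statistics import mean, stdev

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points that deviate strongly from a trailing window's mean.

    Returns the indices of suspected anomalies. A single fixed rule like
    this only works for roughly flat series; that limitation is exactly
    what motivates automatic algorithm selection.
    """
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A flat smart-meter-style reading with one sudden spike at index 7.
readings = [10, 10.2, 9.9, 10.1, 10.0, 10.2, 9.8, 25.0, 10.1, 10.0]
print(detect_anomalies(readings))  # → [7]
```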

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05a6kemleohodt9jyi6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05a6kemleohodt9jyi6o.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Hands-on Exercises&lt;/h1&gt;

&lt;p&gt;The module provides hands-on exercises to read time-series data from a smart meter device, send the data to the cloud using IoT Hub, and use the Azure Anomaly Detector service to spot issues in the real-time data. The module provides a FREE sandbox environment for all the hands-on exercises.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kqg9oadmy2svwdawuk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kqg9oadmy2svwdawuk7.png" alt="Image description" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Benefits of Azure Anomaly Detector&lt;/h1&gt;

&lt;p&gt;The following are some of the Anomaly Detector features that will be explored in the module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic selection of the best algorithm in real time based on the data pattern&lt;/li&gt;
&lt;li&gt;Root-cause detection when an anomaly has multiple contributors&lt;/li&gt;
&lt;li&gt;Custom training &amp;amp; deployment of AI models&lt;/li&gt;
&lt;li&gt;REST API &amp;amp; SDK support&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Let’s get started&lt;/h1&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/tf5K4AGsjx4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;After completing this lesson, you will be able to use Azure Anomaly Detector to build reliable AI solutions and ensure that your processes run smoothly. You will understand how the Azure Anomaly Detector APIs work and how to apply the AI service to your time-series data analytics. &lt;a href="https://aka.ms/learnAnomalyDetector"&gt;Get started here&lt;/a&gt;. Happy Learning!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>azure</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
