<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: TrustyCore</title>
    <description>The latest articles on DEV Community by TrustyCore (@trustycore).</description>
    <link>https://dev.to/trustycore</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F7777%2Fe22f6f48-b47f-4d20-add5-0effc48cfe40.jpg</url>
      <title>DEV Community: TrustyCore</title>
      <link>https://dev.to/trustycore</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/trustycore"/>
    <language>en</language>
    <item>
      <title>Interpreting Loan Predictions with TrustyAI: Part 3</title>
      <dc:creator>Sohini Pattanayak</dc:creator>
      <pubDate>Wed, 08 Nov 2023 13:56:07 +0000</pubDate>
      <link>https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-3-2h4a</link>
      <guid>https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-3-2h4a</guid>
      <description>&lt;h2&gt;
  
  
  Implementing Visualizations
&lt;/h2&gt;

&lt;p&gt;Hello again, dear readers! In our previous sessions, we understood LIME explanations in TrustyAI and implemented a simple linear model to explain loan approvals. While seeing the saliency values of each feature is insightful, a graphical representation can offer a clearer understanding of model decisions. Let's delve deeper!&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a LIME Feature Impact Visualization
&lt;/h3&gt;

&lt;p&gt;Before we begin, ensure you've gone through &lt;a href="https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-2-3o6b"&gt;Part 2&lt;/a&gt; of our series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Setting Up:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure you've imported the necessary libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import matplotlib.pyplot as plt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Generating LIME Explanations:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using the TrustyAI library, generate the LIME explanations for your model as shown in the previous tutorial.&lt;/p&gt;
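&lt;p&gt;For convenience, here's a condensed, self-contained recap of that step from Part 2 (the weights and applicant data are random, so your numbers will differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

# Same toy setup as Part 2: a linear model with random weights
weights = np.random.uniform(low=-5, high=5, size=5)

def linear_model(x):
    return np.dot(x, weights)

model = Model(linear_model)

# One random applicant and their predicted score
applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)

# Generate the LIME explanation
lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;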

&lt;p&gt;&lt;strong&gt;3. Visualizing Feature Impact:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each feature's impact can be visualized as a bar in a bar chart. The bar's height reflects the magnitude of the influence, its direction (above or below zero) reflects the sign, and the color signifies the nature of the impact: blue for positive, red for negative, and green for the most influential feature.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Transform the explanation to a dataframe and sort by saliency
    exp_df = lime_explanation['output-0'].sort_values(by="Saliency")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Extract feature names and their saliencies&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; y_axis = list(exp_df['Saliency'])
    x_axis = ["Annual Income", "Number of Open Accounts", "Late Payments", "Debt-to-Income Ratio", "Credit Inquiries"]

    # Color-coding bars
    colors = ["green" if value == max(y_axis) else "blue" if value &amp;gt; 0 else "red" for value in y_axis]

    # Plotting
    fig, ax = plt.subplots()
    ax.set_facecolor("#f2f2f2")
    ax.bar(x_axis, y_axis, color=colors)
    plt.title('LIME: Feature Impact on Loan Approval Decision')
    plt.xticks(rotation=45, ha='right')
    plt.axhline(0, color="black")  # x-axis line
    plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you go!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw370qo8wfxz0owwtkp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw370qo8wfxz0owwtkp4.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like the blog? Do hit a Like, send me some unicorns, and don't forget to share it with your friends!&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>explainableai</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>Interpreting Loan Predictions with TrustyAI: Part 2</title>
      <dc:creator>Sohini Pattanayak</dc:creator>
      <pubDate>Sat, 28 Oct 2023 10:22:04 +0000</pubDate>
      <link>https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-2-3o6b</link>
      <guid>https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-2-3o6b</guid>
      <description>&lt;h2&gt;
  
  
  A Developer’s Guide
&lt;/h2&gt;

&lt;p&gt;In the previous blog, we gained an overview of the TrustyAI use case and the goal of today's tutorial. If you haven't read it yet, you can catch up here - &lt;a href="https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-1-1m5n"&gt;Part 1: An Overview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let’s get started now!&lt;/em&gt; 🚀&lt;/p&gt;

&lt;p&gt;Once we have our environment ready with our &lt;code&gt;demo.py&lt;/code&gt; file open, we’ll first import all the necessary libraries for this tutorial -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first three lines, we're importing the necessary libraries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;numpy: A library in Python used for numerical computations. Here, it will help us create and manipulate arrays for our linear model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model: This class from TrustyAI wraps our linear model, allowing it to be used with various explainers. The TrustyAI library supports any type of model; it is enough to specify the predict function to invoke.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LimeExplainer: The main attraction! &lt;strong&gt;LIME (Local Interpretable Model-Agnostic Explanations)&lt;/strong&gt; is a technique to explain predictions of machine learning models.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can learn more about LIME from here - &lt;a href="https://www.trustycore.com/post/how-does-the-lime-method-for-explainable-ai-work" rel="noopener noreferrer"&gt;How does the LIME Method for Explainable AI work?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we'll define a set of weights for our linear model using the numpy function &lt;code&gt;np.random.uniform()&lt;/code&gt;. These weights are randomly chosen between &lt;code&gt;-5&lt;/code&gt; and &lt;code&gt;5&lt;/code&gt; for our five features. These weights determine the &lt;strong&gt;importance of each feature in the creditworthiness decision&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;weights = np.random.uniform(low=-5, high=5, size=5)
print(f"Weights for Features: {weights}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we’ll build the linear model that serves as our predictive model. It calculates the dot product between the input features x and the weights; this dot product yields a score representing the creditworthiness of an applicant.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def linear_model(x):
    return np.dot(x, weights)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s time to wrap our linear function using TrustyAI's Model class, preparing it for explanation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = Model(linear_model)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us create a random sample of data for an applicant. The data is an array of five random numbers (each representing a feature like annual income, number of open accounts, etc.). We then feed this data to our model to get &lt;code&gt;predicted_credit_score&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we reach the crucial part: initializing the LimeExplainer with specific parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then use this explainer to explain our model's prediction on the applicant's data. The lime_explanation object holds the results.&lt;/p&gt;

&lt;p&gt;And then we display the explanation -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(lime_explanation.as_dataframe())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Based on the predicted_credit_score, we provide a summary. If the score is positive, the applicant is likely to be approved; if it is negative, they are not.&lt;/p&gt;
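&lt;p&gt;Inline, that summary step looks like this (&lt;code&gt;predicted_credit_score&lt;/code&gt; is hard-coded here purely for illustration; in the actual script it comes from &lt;code&gt;model(applicant_data)&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative value; in the real script this comes from the model
predicted_credit_score = 1.37

print("Summary of the explanation:")
if predicted_credit_score &amp;gt; 0:
  print("The applicant is likely to be approved for a loan.")
else:
  print("The applicant is unlikely to be approved for a loan.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;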

&lt;p&gt;And finally, we loop through our features and their respective weights, printing them out for clarity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("Feature weights:")
for feature, weight in zip(["Annual Income", "Number of Open Accounts", "Number of times Late Payment in the past", "Debt-to-Income Ratio", "Number of Credit Inquiries in the last 6 months"], weights):
  print(f"{feature}: {weight:.2f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that is it! You can now find the complete code below!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

# Define weights for the linear model.

weights = np.random.uniform(low=-5, high=5, size=5)
print(f"Weights for Features: {weights}")

# Simple linear model

def linear_model(x):
    return np.dot(x, weights)

model = Model(linear_model)

# Sample data for an applicant

applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)

lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model)

print(lime_explanation.as_dataframe())

# Interpretation

print("Summary of the explanation:")
if predicted_credit_score &amp;gt; 0:
  print("The applicant is likely to be approved for a loan.")
else:
  print("The applicant is unlikely to be approved for a loan.")

# Display weights

print("Feature weights:")
features = ["Annual Income", "Number of Open Accounts", "Number of times Late Payment in the past", "Debt-to-Income Ratio", "Number of Credit Inquiries in the last 6 months"]
for feature, weight in zip(features, weights):
  print(f"{feature}: {weight:.2f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Interpretation of the Output:
&lt;/h3&gt;

&lt;p&gt;Running the code gives us the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfcdphfmig175i8gjxci.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfcdphfmig175i8gjxci.jpg" alt="Image description" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These weights are influential in shaping the model's decision. For instance, "Annual Income" has a weight of -2.56, suggesting that an increase in annual income might negatively impact creditworthiness in this model – a rather unexpected observation. Since the weights were drawn at random for this demo, counterintuitive signs like this can occur; it is also exactly the kind of behaviour Jane would want to reassess in a real model.&lt;/p&gt;

&lt;p&gt;Additionally, with the help of the &lt;em&gt;LimeExplainer, we obtain the saliency of each feature&lt;/em&gt;. A higher absolute value of saliency indicates a stronger influence of that feature on the decision.&lt;/p&gt;
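&lt;p&gt;To make that concrete, here's a tiny sketch (with hypothetical saliency values, not the ones from the run above) of ranking features by the absolute value of their saliency:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical saliencies, for illustration only
saliencies = {
    "Annual Income": -2.1,
    "Number of Open Accounts": 0.4,
    "Late Payments": 1.3,
}

# Rank features by absolute saliency, strongest influence first
ranked = sorted(saliencies, key=lambda f: abs(saliencies[f]), reverse=True)
print(ranked)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here "Annual Income" ranks first: even though its saliency is negative, its absolute value is the largest, so it influences the decision the most.&lt;/p&gt;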

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;Through TrustyAI, Jane not only developed a predictive model but also successfully interpreted its decisions, ensuring compliance with financial regulations. This tutorial underscores the importance of interpretability in machine learning models and showcases how developers can harness TrustyAI to bring transparency to their solutions.&lt;/p&gt;

&lt;p&gt;Developers keen on adopting TrustyAI should consider its vast range of capabilities that go beyond LIME, offering a comprehensive suite of tools to make AI/ML models trustworthy. As data-driven decisions become ubiquitous, tools like TrustyAI will become indispensable, ensuring a balance between model accuracy and transparency.&lt;/p&gt;

&lt;p&gt;Like the blog? Do hit a Like, send me some unicorns, and don't forget to share it with your friends! &lt;/p&gt;

</description>
      <category>explainableai</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Interpreting Loan Predictions with TrustyAI: Part 1</title>
      <dc:creator>Sohini Pattanayak</dc:creator>
      <pubDate>Fri, 27 Oct 2023 09:32:22 +0000</pubDate>
      <link>https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-1-1m5n</link>
      <guid>https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-1-1m5n</guid>
      <description>&lt;h2&gt;
  
  
  An Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;AI/ML predictive models are essential for decision-making, but they need to be both accurate and interpretable, especially in regulated industries. TrustyAI provides the tools to understand and explain model decisions. &lt;/p&gt;

&lt;p&gt;Imagine a bank using a machine learning model as part of the logic to approve loans. The model is accurate and has helped the bank make more profitable decisions. However, the bank needs to be able to explain why the model approved or denied a loan to a particular applicant. This is where TrustyAI can help!&lt;/p&gt;

&lt;p&gt;TrustyAI can provide the bank with insights into the model's decision-making process, helping to ensure transparency and fairness.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;Jane, a data scientist at a bank, is building a model to predict an applicant’s creditworthiness for a loan based on specific features. But there's a catch. Regulatory mandates stipulate that any loan decisions made by the bank must be interpretable. Hence, simply declining an application isn't enough!&lt;/p&gt;

&lt;p&gt;There has to be an explanation behind this decision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Predictive Model:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The features Jane considers for her model are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Annual Income&lt;/li&gt;
&lt;li&gt;Number of Open Accounts&lt;/li&gt;
&lt;li&gt;Number of Late Payments in the past&lt;/li&gt;
&lt;li&gt;Debt-to-Income ratio&lt;/li&gt;
&lt;li&gt;Number of Credit Inquiries in the last 6 months&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Jane's initial model is a &lt;em&gt;straightforward linear one&lt;/em&gt;, with weights assigned to each feature based on their importance.&lt;/p&gt;
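&lt;p&gt;Conceptually, such a model just computes a weighted sum of the feature values. A minimal sketch (the weights and feature values here are made up purely for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np

# Hypothetical weights, one per feature (illustration only)
weights = np.array([2.0, -0.5, -3.0, -1.5, -1.0])

# One applicant's normalised feature values, in the same order
applicant = np.array([0.8, 0.3, 0.1, 0.4, 0.2])

# Weighted sum = creditworthiness score
score = np.dot(applicant, weights)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this setup, a higher score simply means the model views the applicant more favourably; we'll build this model properly in the next part.&lt;/p&gt;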

&lt;p&gt;Before we get started on the actual code in the next part of this blog, make sure to &lt;strong&gt;follow the prerequisites&lt;/strong&gt; before you plan on executing the code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Python&lt;/strong&gt;: Ensure you have Python version 3.8 or higher. If not, download and install it from the official Python website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pip&lt;/strong&gt;: Pip, the package installer for Python, should be installed by default with Python &amp;gt;=3.8. Alternatively, you can use an online platform like &lt;a href="https://mybinder.org/" rel="noopener noreferrer"&gt;Binder&lt;/a&gt; or Google &lt;a href="https://colab.google/" rel="noopener noreferrer"&gt;Colab&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IDE&lt;/strong&gt;: An Integrated Development Environment (IDE) makes it easier to write and run Python code. Popular options include &lt;a href="https://www.jetbrains.com/pycharm/" rel="noopener noreferrer"&gt;PyCharm&lt;/a&gt; and &lt;a href="https://code.visualstudio.com/docs/datascience/jupyter-notebooks" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A virtual environment is a self-contained directory holding a Python installation for a particular version of Python, plus any additional packages you install, so please create one before continuing.&lt;/p&gt;
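&lt;p&gt;On most systems you can create and activate one like this (a typical sketch; adjust for your shell and operating system):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a virtual environment in the .venv directory
python3 -m venv .venv

# Activate it (Linux/macOS; on Windows run ".venv\Scripts\activate")
source .venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;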

&lt;p&gt;With your virtual environment activated, it's time to install the &lt;code&gt;trustyai&lt;/code&gt; package: &lt;code&gt;pip install trustyai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You're now ready to proceed with the tutorial.&lt;/p&gt;

&lt;p&gt;For this tutorial, I have taken all references from the TrustyAI Python Documentation website - &lt;a href="https://trustyai-explainability-python.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;https://trustyai-explainability-python.readthedocs.io/en/latest/&lt;/a&gt;, you can follow this to build similar examples.&lt;/p&gt;

&lt;p&gt;Follow &lt;a href="https://dev.to/trustycore/interpreting-loan-predictions-with-trustyai-part-2-3o6b"&gt;the next blog&lt;/a&gt; to get started with the technical tutorial for this example! &lt;/p&gt;

</description>
      <category>explainableai</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
