<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aleksander Obuchowski</title>
    <description>The latest articles on DEV Community by Aleksander Obuchowski (@aleksanderobuchowski).</description>
    <link>https://dev.to/aleksanderobuchowski</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F366102%2Fca9b538d-5234-4c44-91e3-0f160d1db965.jpeg</url>
      <title>DEV Community: Aleksander Obuchowski</title>
      <link>https://dev.to/aleksanderobuchowski</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aleksanderobuchowski"/>
    <language>en</language>
    <item>
      <title>MedImageInsight: Open-Source Medical Image Embedding Model - now on HuggingFace</title>
      <dc:creator>Aleksander Obuchowski</dc:creator>
      <pubDate>Mon, 04 Nov 2024 19:13:50 +0000</pubDate>
      <link>https://dev.to/aleksanderobuchowski/medimageinsight-open-source-medical-image-embedding-model-now-on-huggingface-3b63</link>
      <guid>https://dev.to/aleksanderobuchowski/medimageinsight-open-source-medical-image-embedding-model-now-on-huggingface-3b63</guid>
      <description>&lt;p&gt;TLDR: check out the model at &lt;a href="https://huggingface.co/lion-ai/MedImageInsights" rel="noopener noreferrer"&gt;https://huggingface.co/lion-ai/MedImageInsights&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Making Medical Image AI Actually Accessible
&lt;/h2&gt;

&lt;p&gt;If you've ever tried implementing a medical imaging model, you know the drill: promising papers, complicated setup processes, and documentation that assumes you are a certified Azure developer. When I came across the MedImageInsight model, it was the same story: great potential buried under layers of enterprise infrastructure.&lt;/p&gt;

&lt;p&gt;It took me six hours just to access the model: first I had to register on Azure, set up a payment organization, and so on. Then, although the code repository was there, there was no option to clone or download it, so I had to manually copy the contents of each file! Not to mention that the model was shared as an MLflow artifact, so it came with a ton of unnecessary code. But since the model is released under the MIT license, I decided to share my custom implementation on Hugging Face so you don't have to go through the same hell I did.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's MedImageInsight Anyway?
&lt;/h2&gt;

&lt;p&gt;At its core, MedImageInsight is a dual-encoder model (think CLIP, but for medical images) that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Convert medical images into meaningful embeddings&lt;/li&gt;
&lt;li&gt;Match images with text descriptions&lt;/li&gt;
&lt;li&gt;Perform zero-shot classification on medical conditions&lt;/li&gt;
&lt;li&gt;Handle multiple medical imaging modalities (X-rays, CT scans, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model was trained on a massive dataset of medical images and their descriptions, learning to create a shared embedding space for both images and text. This means you can throw new medical conditions at it without retraining, and it'll do a decent job at identifying them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Another Implementation?
&lt;/h2&gt;

&lt;p&gt;The original implementation required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure account&lt;/li&gt;
&lt;li&gt;MLflow setup&lt;/li&gt;
&lt;li&gt;Multiple enterprise-level configurations&lt;/li&gt;
&lt;li&gt;Dealing with undocumented dependencies&lt;/li&gt;
&lt;li&gt;Coffee. Lots of coffee.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After spending way too much time setting it up, I decided to strip it down to its essentials. No shade to the original authors - they created an amazing model. But not everyone needs enterprise-grade MLflow pipelines to run a few predictions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Actually Works
&lt;/h2&gt;

&lt;p&gt;At its heart, MedImageInsight uses a technique called contrastive learning to create a shared understanding between medical images and their descriptions. Think of it as teaching the model to speak two languages fluently: the language of images and the language of medical terminology.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Power of Zero-Shot Learning
&lt;/h3&gt;

&lt;p&gt;Traditional machine learning models are like students who can only answer questions they've seen before. Zero-shot learning models, on the other hand, are like students who can apply their knowledge to entirely new situations.&lt;/p&gt;

&lt;p&gt;MedImageInsight achieves this through a clever architectural design:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One part of the model learns to understand medical images&lt;/li&gt;
&lt;li&gt;Another part learns to understand medical terminology&lt;/li&gt;
&lt;li&gt;Both parts are trained to translate their understanding into the same "language" (a shared vector space)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This means if you show the model a chest X-ray and ask "Is there pneumonia?", it doesn't need to have seen pneumonia examples during training. Instead, it understands both what pneumonia means textually and what to look for in the image.&lt;/p&gt;
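&lt;p&gt;As a sketch of how that zero-shot step works (toy vectors below, not real model outputs; the actual encoders live in the Hugging Face repo linked above): embed the image once, embed each candidate label's text, and pick the label whose embedding is closest.&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Pretend outputs of the image and text encoders (toy 3-d vectors;
# the real model produces much higher-dimensional embeddings).
image_embedding = [0.9, 0.1, 0.2]
label_embeddings = {
    "pneumonia": [0.88, 0.12, 0.18],
    "normal chest x-ray": [0.1, 0.9, 0.3],
}

# Zero-shot classification: the label whose text embedding is closest wins.
best = max(label_embeddings, key=lambda lab: cosine(image_embedding, label_embeddings[lab]))
print(best)  # pneumonia
```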

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://huggingface.co/lion-ai/MedImageInsights
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install dependencies (we use uv because it's fast and deterministic):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run example.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No Azure setup, no MLflow, no enterprise infrastructure required.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;We're working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better explainability (what is the model actually looking at?)&lt;/li&gt;
&lt;li&gt;HuggingFace's transformers library compatibility&lt;/li&gt;
&lt;li&gt;More example notebooks for specific use cases&lt;/li&gt;
&lt;li&gt;Performance optimizations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Contributing
&lt;/h2&gt;

&lt;p&gt;Found a bug? Have an improvement in mind? The repository is actually open source (imagine that!), and we welcome contributions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Original paper: arXiv:2410.06542&lt;/li&gt;
&lt;li&gt;Code: &lt;a href="https://huggingface.co/lion-ai/MedImageInsights" rel="noopener noreferrer"&gt;https://huggingface.co/lion-ai/MedImageInsights&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Example notebooks: In the &lt;code&gt;examples&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;FastAPI service: In &lt;code&gt;fastapi_app.py&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>computervision</category>
      <category>healthcare</category>
    </item>
    <item>
      <title>How to make your first chat bot in 50 lines of code — theory and practice</title>
      <dc:creator>Aleksander Obuchowski</dc:creator>
      <pubDate>Sun, 19 Apr 2020 18:15:13 +0000</pubDate>
      <link>https://dev.to/aleksanderobuchowski/how-to-make-your-first-chat-bot-in-50-lines-of-code-theory-and-practice-1ilh</link>
      <guid>https://dev.to/aleksanderobuchowski/how-to-make-your-first-chat-bot-in-50-lines-of-code-theory-and-practice-1ilh</guid>
      <description>&lt;h3&gt;
  
  
  How to make your first &lt;strong&gt;chat bot&lt;/strong&gt; in 50 lines of code — theory and practice
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. How do chat bots work?
&lt;/h3&gt;

&lt;p&gt;Most modern chat bot platforms consist of three main&lt;br&gt;
components: &lt;strong&gt;intent recognition&lt;/strong&gt;, &lt;strong&gt;slot filling&lt;/strong&gt;, and a &lt;strong&gt;dialog graph&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.1 Intent recognition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Intent recognition is a &lt;strong&gt;text classification task&lt;/strong&gt; whose goal is to&lt;br&gt;
capture the specific intent behind a user query. This is motivated by the fact that&lt;br&gt;
users tend to formulate their requests in many different ways, so we need&lt;br&gt;
a system that can tell whether those messages relate to the same thing or&lt;br&gt;
not. Let’s illustrate this with an example of a bank chat bot, where users can&lt;br&gt;
ask the bot to withdraw money:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oky8vci1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AXoYTzJpDtyQPyDbeeA5xsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oky8vci1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AXoYTzJpDtyQPyDbeeA5xsg.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;span class="figcaption_hack"&gt;Visualization of intent detection system&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;You can see that although the requests are formulated in many different ways&lt;br&gt;
and in different styles, they all mean basically the same thing, and the chat bot&lt;br&gt;
should react in the same way. Therefore we need a &lt;strong&gt;text classification&lt;/strong&gt; model&lt;br&gt;
that captures the semantics behind user sentences and assigns them to a&lt;br&gt;
specific predefined class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.2 Slot filling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we know what action the user wants to take, we need to capture the specific&lt;br&gt;
parameters of that action. For example, if you want Alexa to play your&lt;br&gt;
favorite song, you want her to play that specific song, not just any song. So&lt;br&gt;
besides detecting intent, &lt;a href="https://chatbotslife.com/"&gt;chat bots&lt;/a&gt; also need to&lt;br&gt;
perform a task called slot filling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jXEkMwFu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AFvVQRyzpXnL05sDEzCAbCw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jXEkMwFu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AFvVQRyzpXnL05sDEzCAbCw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.3 Dialog graph&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_14TBNw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AoSwU5XlxuzRsZLt-zVeAbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_14TBNw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2AoSwU5XlxuzRsZLt-zVeAbg.png" alt="Visualization of dialog graph"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another requirement for chat bot functionality is a dialog graph. Its goal is to steer the conversation in the right direction. For example, when you say “Check the weather”, the chat bot could then ask “What day should I check the weather for?” and next look for intents like ‘tomorrow’ or ‘today’. The important part here is that there would be no point in asking the second question without the first one, so we need a system that stores where we are in the conversation and what the possible next states are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.4 Our chat bot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial our goal is to create a simple chat bot, so we are going to focus only on the intent detection task and a simple dialog graph model. This is enough to make a chat bot that can answer FAQs and conduct a simple conversation.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Word and sentence embedding
&lt;/h3&gt;

&lt;p&gt;Our goal in designing an intent detection system is to create a system that, given a few examples for each intent, can detect that a sentence given by the user is similar to those examples and therefore should have the same intent.&lt;/p&gt;

&lt;p&gt;The problem behind this system is that we have to design a way of checking whether two sentences are similar. This could be achieved by, for example, counting how many overlapping words the new sentence shares with the sentences in the training dataset. This is, however, a naive approach, because a user can use a word that has a similar meaning but is different from the ones in the training examples.&lt;/p&gt;
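&lt;p&gt;The word-overlap idea, and its weakness, can be seen in a few lines (a toy sketch, not part of the original tutorial):&lt;/p&gt;

```python
# Naive similarity: count the distinct words two sentences have in common.
def word_overlap(a: str, b: str) -> int:
    """Number of distinct words shared by the two sentences."""
    return len(set(a.lower().split()).intersection(set(b.lower().split())))

print(word_overlap("I want to withdraw money", "Please withdraw some money"))  # 2
print(word_overlap("I want to withdraw money", "Give me my cash"))             # 0 -- same intent, zero overlap
```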

&lt;p&gt;&lt;strong&gt;2.1 Word embedding&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A solution here is to use word embedding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uXOuh20Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AAc7LP7t66teJc7be.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uXOuh20Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AAc7LP7t66teJc7be.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;
Word vectors, ref: &lt;a href="https://ruder.io/word-embeddings-1/"&gt;https://ruder.io/word-embeddings-1/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Word embeddings are mathematical representations of words encoded as vectors in an n-dimensional space. Similar words (words used in the same contexts) are close to each other in this space. This means that we can compare two or more words not by, e.g., the number of overlapping characters, but by how close they are to each other in their embedded form.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2 Sentence embeddings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From word embeddings we can construct embeddings for whole sentences. This can be done in a variety of ways: we can simply take the average of the word vectors, use a weighted average where word importance is measured by, e.g., the &lt;a href="https://towardsdatascience.com/tf-idf-for-document-ranking-from-scratch-in-python-on-real-world-dataset-796d339a4089"&gt;tf-idf&lt;/a&gt;&lt;br&gt;
coefficient, or even use more advanced methods like &lt;a href="https://towardsdatascience.com/transformers-141e32e69591"&gt;transformer neural&lt;br&gt;
networks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.3 Similarity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we have prepared embeddings for the sentences, we have to design a way of comparing them. A simple, widely used method is cosine similarity, which measures the similarity between two vectors as the cosine of the angle between them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ocd_DvNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2Az2dZYCgqQVpQM_Hbo9KUjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ocd_DvNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/1%2Az2dZYCgqQVpQM_Hbo9KUjw.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;span class="figcaption_hack"&gt;Cosine similarity ref: &lt;a href="https://bit.ly/2X5470I"&gt;https://bit.ly/2X5470I&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Building the chat bot
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4OeRvIKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AsprA7uJnsFTHk9vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4OeRvIKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AsprA7uJnsFTHk9vg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create the sentence embeddings we are going to use the flair library. This library is based not only on static word embeddings but also analyses words character by character, which helps in dealing with out-of-vocabulary words.&lt;/p&gt;

&lt;p&gt;In our model we are going to embed the examples for each intent and then, while processing the user's message, find the most similar one. This approach is chosen mainly because it is fast and simple and illustrates how embeddings work. Most modern systems use neural networks (links to related articles can be found at the end); however, this approach can still be used if you want to design a system that is fast and doesn't use a lot of resources.&lt;/p&gt;

&lt;p&gt;We begin our program by creating the outline of the model.&lt;/p&gt;

&lt;p&gt;&lt;span class="figcaption_hack"&gt;outline of the program&lt;/span&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
&lt;strong&gt;Description&lt;/strong&gt;

&lt;p&gt;&lt;strong&gt;1–9&lt;/strong&gt;: importing the necessary libraries&lt;br&gt;
&lt;strong&gt;11&lt;/strong&gt;: initialization of the flair model for creating sentence embeddings. We are using English word embeddings and the mean pooling method for creating sentence embeddings from word embeddings.&lt;br&gt;
&lt;strong&gt;13–20&lt;/strong&gt;: the chatbot class. This class has two static methods: one for creating the embeddings and one for processing the user's message and answering it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 Preparing embeddings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First we need to prepare a file containing our intents and their examples. This is a JSON dictionary that uses intents as keys and lists of examples as values.&lt;/p&gt;

&lt;p&gt;intents.json:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
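&lt;p&gt;The embedded gist does not render in the feed; following the description above, an intents.json could look like this (the intent names and examples are made up for illustration):&lt;/p&gt;

```json
{
  "greeting": ["hi", "hello there", "good morning"],
  "withdraw_money": [
    "I want to withdraw money",
    "can I take out some cash",
    "give me my money"
  ],
  "goodbye": ["bye", "see you later"]
}
```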


&lt;p&gt;Next we need to create a function that constructs embeddings for the&lt;br&gt;
examples.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Description:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4&lt;/strong&gt;: creating a new Python dictionary for the embeddings&lt;br&gt;
&lt;strong&gt;5–6&lt;/strong&gt;: opening the input file and loading it into a Python dictionary&lt;br&gt;
&lt;strong&gt;7–8&lt;/strong&gt;: for each intent, we create a list in the embeddings dictionary&lt;br&gt;
&lt;strong&gt;9–12&lt;/strong&gt;: for each example in the intent, we create a flair Sentence object that we can then embed using the model specified earlier. Finally we add the embedded sentence to the list&lt;br&gt;
&lt;strong&gt;13–14&lt;/strong&gt;: if the file doesn’t exist, we create it&lt;br&gt;
&lt;strong&gt;15&lt;/strong&gt;: we save the embeddings dict. We use pickle instead of JSON so we can store the numpy arrays&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Answering the message&lt;/strong&gt;&lt;br&gt;
answers.json:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3&lt;/strong&gt;: we use the embeddings model&lt;br&gt;
&lt;strong&gt;4–5&lt;/strong&gt;: we load the embeddings file created earlier&lt;br&gt;
&lt;strong&gt;6–8&lt;/strong&gt;: embedding of the user's message&lt;br&gt;
&lt;strong&gt;9–10&lt;/strong&gt;: initializing the best intent and best score variables&lt;br&gt;
&lt;strong&gt;11–16&lt;/strong&gt;: for each intent, we loop through its embedded examples and check the cosine similarity between the user's message and those examples. We choose the intent whose example has the highest similarity with the new message&lt;br&gt;
&lt;strong&gt;17–18&lt;/strong&gt;: loading the answers dict&lt;br&gt;
&lt;strong&gt;19&lt;/strong&gt;: checking whether the intent chosen by the system is in the answers dict&lt;br&gt;
&lt;strong&gt;20&lt;/strong&gt;: returning a random answer from the ones assigned to the chosen&lt;br&gt;
intent&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Whole code&lt;/strong&gt;
&lt;/h3&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
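&lt;p&gt;The gist embed does not render in the feed. Below is a self-contained sketch of the whole bot as described above, with flair's sentence embeddings replaced by a trivial bag-of-words embedding so the example runs on its own; swap in flair's DocumentPoolEmbeddings for the real thing. The class and method names follow the descriptions in this article but are otherwise assumptions.&lt;/p&gt;

```python
import math
import random

# Toy stand-in for flair's sentence embeddings: a bag-of-words vector over a
# fixed vocabulary. The tutorial itself uses flair's DocumentPoolEmbeddings.
VOCAB = ["hi", "hello", "there", "withdraw", "money", "cash", "bye", "later"]

def embed(sentence):
    """Count each vocabulary word's occurrences in the sentence."""
    words = sentence.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

INTENTS = {  # would normally be loaded from intents.json
    "greeting": ["hi", "hello there"],
    "withdraw_money": ["withdraw money", "give me cash"],
}
ANSWERS = {  # would normally be loaded from answers.json
    "greeting": ["Hello!"],
    "withdraw_money": ["How much would you like to withdraw?"],
}

class ChatBot:
    @staticmethod
    def create_embeddings(intents):
        """Embed every example of every intent (the tutorial pickles these)."""
        return {intent: [embed(ex) for ex in examples]
                for intent, examples in intents.items()}

    @staticmethod
    def answer(message, embeddings, answers):
        """Pick the intent whose example is most similar to the message."""
        query = embed(message)
        best_intent, best_score = None, -1.0
        for intent, examples in embeddings.items():
            for example in examples:
                score = cosine(query, example)
                if score > best_score:
                    best_intent, best_score = intent, score
        if best_intent in answers:
            return random.choice(answers[best_intent])
        return "Sorry, I don't understand."

emb = ChatBot.create_embeddings(INTENTS)
print(ChatBot.answer("i want to withdraw some money", emb, ANSWERS))
# How much would you like to withdraw?
```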


&lt;h3&gt;
  
  
  4. Possible improvements
&lt;/h3&gt;

&lt;p&gt;In this format the chat bot has to choose one of&lt;br&gt;
the provided intents. This means we have no way of detecting whether the user said&lt;br&gt;
something that doesn't belong to any of the intents. A possible solution is to inspect the numerical values of the cosine similarity and, based on those observations, assign a threshold value below which the bot classifies the message as one it doesn’t know how to answer.&lt;/p&gt;
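&lt;p&gt;The thresholding idea can be sketched in a few lines (the 0.5 cutoff is an arbitrary illustration; in practice you would pick it by looking at similarity scores on real messages):&lt;/p&gt;

```python
FALLBACK = "Sorry, I didn't understand that."
THRESHOLD = 0.5  # arbitrary cutoff for illustration; tune it on real data

def respond(best_intent, best_score, answers):
    """Answer only when even the best match is confident enough."""
    if best_score >= THRESHOLD and best_intent in answers:
        return answers[best_intent][0]
    return FALLBACK

print(respond("greeting", 0.92, {"greeting": ["Hello!"]}))  # Hello!
print(respond("greeting", 0.21, {"greeting": ["Hello!"]}))  # fallback message
```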

</description>
    </item>
  </channel>
</rss>
