<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ivan</title>
    <description>The latest articles on DEV Community by Ivan (@karavanjo).</description>
    <link>https://dev.to/karavanjo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F201994%2F4cfb39f9-c9e0-4703-b193-a185605eacf5.jpg</url>
      <title>DEV Community: Ivan</title>
      <link>https://dev.to/karavanjo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karavanjo"/>
    <language>en</language>
    <item>
      <title>QGIS DevTools plugin for easier plugin development</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Wed, 16 Jul 2025 21:45:19 +0000</pubDate>
      <link>https://dev.to/karavanjo/qgis-devtools-plugin-for-easier-plugin-development-2ib</link>
      <guid>https://dev.to/karavanjo/qgis-devtools-plugin-for-easier-plugin-development-2ib</guid>
      <description>&lt;p&gt;Just came across this new debugging plugin for QGIS called DevTools that was released by NextGIS.&lt;/p&gt;

&lt;h1&gt;
  
  
  What it does
&lt;/h1&gt;

&lt;p&gt;The plugin basically lets you connect VS Code to QGIS for debugging. Instead of adding logging statements everywhere or dealing with buggy setups, you can now set breakpoints, inspect variables, and step through your code directly from your IDE.&lt;/p&gt;

&lt;h1&gt;
  
  
  Main features
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Launches a &lt;code&gt;debugpy&lt;/code&gt; server from QGIS&lt;/li&gt;
&lt;li&gt;  Can be configured to start automatically when QGIS launches&lt;/li&gt;
&lt;li&gt;  Allows choosing a custom port for the debug server&lt;/li&gt;
&lt;li&gt;  Lets you connect from VS Code to debug your own plugins&lt;/li&gt;
&lt;li&gt;  Simple setup process&lt;/li&gt;
&lt;/ul&gt;
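&lt;p&gt;Before attaching VS Code, it can be handy to confirm the debug server is actually listening. This is a minimal stdlib sketch, not part of the plugin, and port 5678 is an assumption - use whatever port you configured in DevTools:&lt;/p&gt;

```python
import socket

def debug_server_listening(host="127.0.0.1", port=5678):
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0

print(debug_server_listening())
```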

&lt;h1&gt;
  
  
  Why it's helpful
&lt;/h1&gt;

&lt;p&gt;Before this, debugging QGIS plugins could be painful. Many developers relied on adding logging messages everywhere or used older plugins like &lt;code&gt;debug_vs_plugin&lt;/code&gt;, which was often buggy and had issues on Windows and macOS. This new plugin provides a much more streamlined approach to remote debugging.&lt;/p&gt;

&lt;p&gt;The plugin is available on the official &lt;a href="https://plugins.qgis.org/plugins/devtools/" rel="noopener noreferrer"&gt;QGIS plugin repository&lt;/a&gt; and the source code is on &lt;a href="https://github.com/nextgis/qgis_devtools" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.nextgis.com/docs_ngqgis/source/devtools.html" rel="noopener noreferrer"&gt;The documentation&lt;/a&gt; walks you through the setup process step by step.&lt;/p&gt;

&lt;p&gt;This seems like a valuable tool for anyone developing QGIS plugins, and its foundation on the modern debugpy library is a promising sign.&lt;/p&gt;

&lt;p&gt;One current limitation, however, is that debugging code in other threads (e.g., QgsTask) still requires some extra work. Hopefully, future versions will streamline this process.&lt;/p&gt;

&lt;p&gt;While it did crash QGIS on me once during testing, the core functionality is reliable, making it a clear upgrade from the alternatives.&lt;/p&gt;

&lt;p&gt;Thanks to the folks at &lt;a href="https://nextgis.com/" rel="noopener noreferrer"&gt;NextGIS&lt;/a&gt; for making this - looks like a really helpful tool.&lt;/p&gt;

</description>
      <category>qgis</category>
      <category>vscode</category>
      <category>python</category>
      <category>nextgis</category>
    </item>
    <item>
      <title>How to Run a Local Model for Text Recognition in Images</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Fri, 14 Feb 2025 19:08:46 +0000</pubDate>
      <link>https://dev.to/karavanjo/how-to-run-a-local-model-for-text-recognition-in-images-2d6a</link>
      <guid>https://dev.to/karavanjo/how-to-run-a-local-model-for-text-recognition-in-images-2d6a</guid>
      <description>&lt;p&gt;Want to extract text from images without relying on cloud services? &lt;/p&gt;

&lt;p&gt;You can run a powerful optical character recognition (OCR) model right on your own computer. This local approach gives you full control over the process and keeps your data private. In this article, we'll walk you through setting up and using a popular open-source OCR engine. You'll learn how to install the necessary libraries, load pre-trained models, and process images to recognize text in various languages. Whether you're working on a personal project or developing an application, this guide will help you get started with local text recognition quickly and easily.&lt;/p&gt;

&lt;p&gt;This guide uses Windows 11, the Ollama model runner, the Llama 3.2 Vision model, and Python. Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Install Ollama
&lt;/h2&gt;

&lt;p&gt;First, head to &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;https://ollama.com/download&lt;/a&gt;. Download the installer (it's about 768 MB) and run it to install Ollama.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Pull the Llama 3.2 Vision Model
&lt;/h2&gt;

&lt;p&gt;Open your command prompt or terminal. We'll download the Llama 3.2 Vision model using Ollama. You have two size options.&lt;/p&gt;

&lt;p&gt;The 11B and 90B labels refer to the size of the Llama 3.2 Vision models, indicating the number of trainable parameters in each model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;11B model: This is the smaller version with 11 billion parameters.&lt;/li&gt;
&lt;li&gt;90B model: This is the larger version with 90 billion parameters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both models are designed for multimodal tasks, capable of processing both text and images. They excel in various applications such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document-level understanding&lt;/li&gt;
&lt;li&gt;Chart and graph analysis&lt;/li&gt;
&lt;li&gt;Image captioning&lt;/li&gt;
&lt;li&gt;Visual grounding&lt;/li&gt;
&lt;li&gt;Visual question answering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The choice between the 11B and 90B models depends on the specific use case, available computational resources, and the desired level of performance for complex visual reasoning tasks.&lt;/p&gt;

&lt;p&gt;For the &lt;em&gt;smaller model&lt;/em&gt; (11B, which needs at least 8 GB of video memory, or VRAM):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull llama3.2-vision:11b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the &lt;em&gt;larger model&lt;/em&gt; (90B, needs a whopping 64GB of VRAM):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull llama3.2-vision:90b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For home use, running the 90B model locally is extremely challenging due to its massive hardware requirements.&lt;/p&gt;
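&lt;p&gt;The VRAM thresholds above can be folded into a tiny helper that picks the model tag for you. A quick sketch - the thresholds are the ones quoted in this article, so adjust them for your own setup:&lt;/p&gt;

```python
def pick_model(vram_gb):
    """Pick a llama3.2-vision tag based on available VRAM (in GB)."""
    if vram_gb >= 64:
        return "llama3.2-vision:90b"
    if vram_gb >= 8:
        return "llama3.2-vision:11b"
    return None  # not enough VRAM to run either model comfortably

print(pick_model(12))  # a 12 GB card fits the 11B model
```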

&lt;h2&gt;
  
  
  3. Run the Model
&lt;/h2&gt;

&lt;p&gt;Once the model is downloaded, run it locally with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run llama3.2-vision
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Install ollama-ocr
&lt;/h2&gt;

&lt;p&gt;To easily process images, we'll use the ollama-ocr Python library. Install it using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;ollama-ocr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Python Code for OCR
&lt;/h2&gt;

&lt;p&gt;Here's the Python code to recognize text in an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;ollama_ocr&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OCRProcessor&lt;/span&gt;

&lt;span class="n"&gt;ocr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OCRProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;llama3.2-vision:11b&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ocr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;process_image&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./your_image.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;format_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Run the Code
&lt;/h2&gt;

&lt;p&gt;Replace "&lt;em&gt;./your_image.jpg&lt;/em&gt;" with the actual path to your image file. Save the code as a .py file (e.g., &lt;em&gt;ocr_script.py&lt;/em&gt;). Run the script from your command prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python ocr_script.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script will send the image to your locally running Llama 3.2 Vision model, and the recognized text will be printed in your terminal.&lt;/p&gt;

&lt;p&gt;To complement our guide on using Llama 3.2 Vision locally, we conducted performance tests on a home desktop computer. Here are the results:&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Test Results
&lt;/h2&gt;

&lt;p&gt;We ran the Llama 3.2 Vision 11B model on a home desktop with the following specifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Processor: 13th Gen Intel(R) Core(TM) i7-13700K&lt;/li&gt;
&lt;li&gt;Graphics Card: Gigabyte RTX 3060 Gaming OC 12G&lt;/li&gt;
&lt;li&gt;RAM: 64.0 GB DDR4&lt;/li&gt;
&lt;li&gt;Operating System: Windows 11 Pro 24H2&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Image for Testing
&lt;/h3&gt;

&lt;p&gt;For testing, we chose this amusing image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvrcc3pdwlh7qu82otha.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvrcc3pdwlh7qu82otha.jpg" alt="The image is a meme image for testing the locally run Llama 3.2 Vision 11B model" width="500" height="666"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Output
&lt;/h3&gt;

&lt;p&gt;Using our Python script, we tasked the model with recognizing text in an image using the standard system prompt. After running the script multiple times on a single test image, we observed processing times ranging &lt;strong&gt;from 16.78 to 47.23 seconds&lt;/strong&gt;. It's worth noting that these results were achieved with the graphics card running at default settings, without any additional tuning or optimizations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The image is a black-and-white meme featuring two panels with stick figures and speech bubbles.

**Panel 1:**
In the first panel, a stick figure on the left side of the image has its arms outstretched towards another stick figure in the center. The central figure holds a large circle labeled "WEEKEND" in bold white letters. The stick figure on the right side of the image is partially cut off by the edge of the frame.

**Panel 2:**
In the second panel, the same two stick figures are depicted again. However, this time, the central figure now holds a smaller circle labeled "MONDAY" instead of "WEEKEND." The stick figure on the left side of the image has its arms outstretched towards the central figure once more.

**Text and Labels:**
The text in both panels is presented in white letters with bold outlines. In the first panel, the labels read:

* "ME" (on the stick figure's chest)
* "WEEKEND" (inside the large circle)

In the second panel, the labels are:

* "MONDAY" (inside the smaller circle)
* "ME" (on the stick figure's chest)

**Overall:**
The meme humorously portrays the anticipation and excitement of approaching the weekend, as well as the disappointment that follows when it arrives. The use of simple yet expressive stick figures and speech bubbles effectively conveys this sentiment in a relatable and entertaining manner.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
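&lt;p&gt;If you want to reproduce these timings yourself, wrapping the call with &lt;code&gt;time.perf_counter&lt;/code&gt; is enough. A minimal sketch; &lt;code&gt;slow_call&lt;/code&gt; below is just a stand-in for &lt;code&gt;ocr.process_image&lt;/code&gt; so the snippet runs without a model:&lt;/p&gt;

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Stand-in for ocr.process_image(...) so the sketch runs anywhere.
def slow_call():
    time.sleep(0.1)
    return "recognized text"

result, seconds = timed(slow_call)
print(f"{seconds:.2f}s")
```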



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;That's it! You're now running a local image text recognition system using Ollama and Python. Remember to experiment with different images and adjust your approach as needed for best results.&lt;/p&gt;

&lt;p&gt;You can find the scripts referenced in this article in the repository at &lt;a href="https://github.com/karavanjo/dev-content/tree/main/llama-local-run" rel="noopener noreferrer"&gt;https://github.com/karavanjo/dev-content/tree/main/llama-local-run&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A demonstration of the model in action is available in the video at this &lt;a href="https://youtu.be/Pe9EEdwZZr0?si=TcICVtXQNp2xMpZK" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>llama</category>
      <category>windows</category>
      <category>python</category>
      <category>llm</category>
    </item>
    <item>
      <title>My city's climate - historical weather data viewer</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Thu, 13 Jan 2022 16:09:10 +0000</pubDate>
      <link>https://dev.to/karavanjo/my-citys-climate-historical-weather-data-viewer-1mcm</link>
      <guid>https://dev.to/karavanjo/my-citys-climate-historical-weather-data-viewer-1mcm</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;"My city's climate" is a free online viewer for historical weather data for Poland. The web application is a simple way to get information about temperature and precipitation over long periods of time.&lt;/p&gt;

&lt;p&gt;The application is a web map that displays weather stations as points. Clicking a point opens a color calendar, where each day is shaded to indicate the corresponding weather characteristic.&lt;/p&gt;

&lt;p&gt;The application is published on GitHub Pages. You can open it at &lt;a href="https://karavanjo.github.io/mcc-frontend/" rel="noopener noreferrer"&gt;https://karavanjo.github.io/mcc-frontend/&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Data
&lt;/h4&gt;

&lt;p&gt;The application uses the public IMGW-PIB historical weather dataset. &lt;br&gt;
You can explore the dataset on the website of the Institute of Meteorology and Water Management via this &lt;a href="https://danepubliczne.imgw.pl/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Technical notes
&lt;/h4&gt;

&lt;p&gt;The app is built with React and D3 and uses MongoDB Atlas as its back-end. Weather observations are stored in Time Series Collections.&lt;/p&gt;

&lt;p&gt;To import the initial data, a console application was created using Python and PyMongo.&lt;/p&gt;
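&lt;p&gt;For reference, creating a MongoDB time-series collection for observations like these takes just one options dict in PyMongo. A sketch with assumed field names (&lt;code&gt;timestamp&lt;/code&gt; and &lt;code&gt;station&lt;/code&gt; are illustrative, not the app's actual schema); the &lt;code&gt;create_collection&lt;/code&gt; call is commented out so no database is needed to run it:&lt;/p&gt;

```python
# Options for a MongoDB 5.0+ time series collection (PyMongo passes
# these through to the createCollection command).
timeseries_options = {
    "timeField": "timestamp",  # assumed name of the observation time field
    "metaField": "station",    # assumed name of the per-station metadata field
    "granularity": "hours",
}

# With a live pymongo connection it would be used like this:
# db.create_collection("observations", timeseries=timeseries_options)
print(sorted(timeseries_options))
```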
&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Prime Time&lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Front-end application
&lt;/h4&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/karavanjo" rel="noopener noreferrer"&gt;
        karavanjo
      &lt;/a&gt; / &lt;a href="https://github.com/karavanjo/mcc-frontend" rel="noopener noreferrer"&gt;
        mcc-frontend
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Free online viewer for historical weather data
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;"My city's climate" - historical weather data viewer&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;"My city's climate" is a free online viewer for historical weather data.&lt;/p&gt;
&lt;p&gt;The application requires a climate database created by the &lt;a href="https://github.com/karavanjo/mcc-import" rel="noopener noreferrer"&gt;mcc-import&lt;/a&gt; application.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Run&lt;/h2&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;Fill &lt;code&gt;.env&lt;/code&gt; file with your Realm data.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;npm install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;npm run dev&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Open your browser (&lt;a href="http://localhost:3000/" rel="nofollow noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt;)&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Screenshots&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Weather stations map
&lt;a rel="noopener noreferrer" href="https://github.com/karavanjo/mcc-frontend./screenshots/map.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fkaravanjo%2Fmcc-frontend.%2Fscreenshots%2Fmap.png" alt="Weather stations map"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Calendar: average air temperature
&lt;a rel="noopener noreferrer" href="https://github.com/karavanjo/mcc-frontend./screenshots/tavg.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fkaravanjo%2Fmcc-frontend.%2Fscreenshots%2Ftavg.png" alt="Calendar: average air temperature"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Calendar: snow depth
&lt;a rel="noopener noreferrer" href="https://github.com/karavanjo/mcc-frontend./screenshots/snowdepth.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fkaravanjo%2Fmcc-frontend.%2Fscreenshots%2Fsnowdepth.png" alt="Calendar: snow depth"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Demo&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://karavanjo.github.io/mcc-frontend/" rel="nofollow noopener noreferrer"&gt;https://karavanjo.github.io/mcc-frontend/&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/karavanjo/mcc-frontend" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h4&gt;
  
  
  Import tool
&lt;/h4&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/karavanjo" rel="noopener noreferrer"&gt;
        karavanjo
      &lt;/a&gt; / &lt;a href="https://github.com/karavanjo/mcc-import" rel="noopener noreferrer"&gt;
        mcc-import
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Import tool for "My city's climate"
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Import tool for "My city's climate"&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;The command line application imports historical weather data into a MongoDB Atlas database.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Technical notes&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;The application requires MongoDB &amp;gt; 5.0 with time series support.&lt;/p&gt;
&lt;p&gt;You should specify connection credentials. The application only supports X.509 Authentication with &lt;code&gt;.pem&lt;/code&gt; certificate.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;How to use&lt;/h3&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;Copy your weather data files to &lt;code&gt;data/observations&lt;/code&gt; and &lt;code&gt;data/stations&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Apply your credentials to &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://python-poetry.org/docs/#installation" rel="nofollow noopener noreferrer"&gt;poetry&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;poetry install --no-dev&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;poetry run import&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;About data&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;It is assumed that the input weather data corresponds to the public IMGW-PIB data format.&lt;/p&gt;
&lt;p&gt;You can explore the dataset on the website of the Institute of Meteorology and Water Management via this &lt;a href="https://danepubliczne.imgw.pl/" rel="nofollow noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/karavanjo/mcc-import" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;h4&gt;
  
  
  "My city's climate" in a web browser
&lt;/h4&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/etx4T0CiQsM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>atlashackathon</category>
      <category>mongodb</category>
      <category>react</category>
    </item>
  </channel>
</rss>
