<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexander Razvodovskii</title>
    <description>The latest articles on DEV Community by Alexander Razvodovskii (@alexlead).</description>
    <link>https://dev.to/alexlead</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F591081%2Fcf7ccd97-e42b-46da-9f2e-37cb24639615.jpeg</url>
      <title>DEV Community: Alexander Razvodovskii</title>
      <link>https://dev.to/alexlead</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexlead"/>
    <language>en</language>
    <item>
      <title>Local AI Tools: Introducing LocalAI (Tool 3)</title>
      <dc:creator>Alexander Razvodovskii</dc:creator>
      <pubDate>Sat, 03 Jan 2026 18:12:34 +0000</pubDate>
      <link>https://dev.to/alexlead/local-ai-tools-introducing-localai-tool-3-3h67</link>
      <guid>https://dev.to/alexlead/local-ai-tools-introducing-localai-tool-3-3h67</guid>
      <description>&lt;p&gt;Continuing our exploration of tools for running AI models locally, the next solution to discuss is &lt;a href="https://localai.io/" rel="noopener noreferrer"&gt;LocalAI&lt;/a&gt;. Unlike some desktop-focused applications, &lt;a href="https://localai.io/" rel="noopener noreferrer"&gt;LocalAI&lt;/a&gt; is a full-featured AI runtime designed primarily for local and professional usage, with a strong focus on Docker-based deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  General Overview
&lt;/h2&gt;

&lt;p&gt;LocalAI does not provide a native desktop application for Windows or macOS in the traditional sense. Instead, it is designed to be run as a service — most commonly inside a Docker container. For users who already have basic experience with Docker, LocalAI is relatively easy to install and manage.&lt;/p&gt;

&lt;p&gt;One of its major advantages is the presence of a web-based interface, which allows users to manage models and configurations through a browser. This makes LocalAI more approachable than pure CLI-based solutions, while still remaining flexible and powerful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start with Docker
&lt;/h2&gt;

&lt;p&gt;For beginners, the simplest way to start is by running the latest LocalAI image directly via Docker. This approach requires minimal configuration and allows you to get a working system up and running quickly.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://localai.io/installation/docker/" rel="noopener noreferrer"&gt;basic example using Docker&lt;/a&gt; looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this command, the LocalAI web interface becomes available in the browser. From there, users can explore the system, manage models, and interact with the API.&lt;/p&gt;
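&lt;p&gt;LocalAI's API is OpenAI-compatible, so any OpenAI-style client can talk to it. As a minimal sketch (the model name &lt;code&gt;gpt-4&lt;/code&gt; is a placeholder here; substitute a model you have actually installed through the web interface):&lt;/p&gt;

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # the port published by the docker run command

def build_chat_request(model, prompt):
    """Build a POST request for LocalAI's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,  # placeholder: use a model installed in your instance
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def extract_reply(response_json):
    """Pull the assistant message out of an OpenAI-style chat response."""
    return response_json["choices"][0]["message"]["content"]

# To actually send the request (requires a running LocalAI container):
#   with urllib.request.urlopen(build_chat_request("gpt-4", "Hello!")) as resp:
#       print(extract_reply(json.load(resp)))
```

&lt;p&gt;The same request shape works with any OpenAI-compatible client library pointed at the local address.&lt;/p&gt;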

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcwb7hzoogxukx9fq1y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcwb7hzoogxukx9fq1y9.png" alt="LocalAI chat interface" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup is ideal for initial testing, learning, and experimenting with local AI models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Web Interface and Model Management
&lt;/h2&gt;

&lt;p&gt;Once deployed, LocalAI provides a convenient management interface. It allows you to connect and configure different AI models depending on your hardware capabilities. Lightweight models can be used on modest systems, while more powerful machines can take advantage of larger and more complex models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4a2qbc79p41koau1hrn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4a2qbc79p41koau1hrn.png" alt="LocalAI setting interface" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The system is flexible and adapts well to different environments, making it suitable for both casual experimentation and more serious development tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9du28sf7pthqgoxyxein.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9du28sf7pthqgoxyxein.png" alt="Model Details" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Docker Compose
&lt;/h2&gt;

&lt;p&gt;For more structured setups or long-term usage, Docker Compose is often the preferred option. It makes configuration more transparent and easier to maintain.&lt;/p&gt;

&lt;p&gt;A minimal docker-compose.yml example for the latest version of LocalAI might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  local-ai:   
    image: localai/localai:latest
    container_name: local-ai  
    ports:
      - "8080:8080"           
    tty: true                 
    stdin_open: true         
    restart: always
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration, the service restarts automatically and can easily be extended as part of a larger system. Note that the example above does not declare any volumes, so downloaded models live inside the container; to keep them across container recreation, add a volume mount for the models directory.&lt;/p&gt;
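&lt;p&gt;If you want downloaded models to survive container recreation, mount a host directory for them. A sketch under the assumption that the image keeps models in &lt;code&gt;/models&lt;/code&gt;; the in-container path has varied between LocalAI releases, so check the documentation for the tag you use:&lt;/p&gt;

```yaml
services:
  local-ai:
    image: localai/localai:latest
    container_name: local-ai
    ports:
      - "8080:8080"
    restart: always
    volumes:
      # assumption: newer images keep models in /models
      # (older releases used /build/models)
      - ./models:/models
```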

&lt;h2&gt;
  
  
  CPU and GPU Configuration
&lt;/h2&gt;

&lt;p&gt;LocalAI supports both CPU and GPU-based execution. Depending on your &lt;a href="https://localai.io/installation/docker/#all-in-one-aio-images" rel="noopener noreferrer"&gt;hardware configuration&lt;/a&gt;, you can &lt;a href="https://localai.io/installation/docker/#gpu-images-1" rel="noopener noreferrer"&gt;enable GPU acceleration&lt;/a&gt; or add additional system resources. The project provides multiple predefined configurations optimized for different setups.&lt;/p&gt;

&lt;p&gt;While simple models require very little tuning, using heavier models or more advanced setups does require additional knowledge about hardware resources, Docker configuration, and model optimization.&lt;/p&gt;
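&lt;p&gt;As a rough illustration, GPU access under Docker Compose is typically granted through a device reservation. The image tag below is an assumption; pick the backend-specific tag (CUDA, ROCm, etc.) that matches your hardware from the LocalAI image list:&lt;/p&gt;

```yaml
services:
  local-ai:
    # assumption: tag name; LocalAI publishes separate images per backend
    image: localai/localai:latest-gpu-nvidia-cuda-12
    ports:
      - "8080:8080"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```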

&lt;h2&gt;
  
  
  Strengths and Target Audience
&lt;/h2&gt;

&lt;p&gt;What I personally like about LocalAI is its balance between simplicity and flexibility. For basic usage, there are no complicated settings or hidden pitfalls. You can start quickly and experiment with models without deep technical expertise.&lt;/p&gt;

&lt;p&gt;At the same time, LocalAI scales well for professional usage. Developers can integrate it into workflows, automation systems, and backend services using its API. It works equally well for beginners learning about local AI and for professionals building more complex solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;LocalAI is a versatile and powerful tool for running AI models locally. It offers an accessible entry point for newcomers while still providing the depth required for advanced and professional use cases.&lt;/p&gt;

&lt;p&gt;If your goal is to run AI models locally with control over data, flexibility in deployment, and strong integration potential, LocalAI is definitely a tool worth considering.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>docker</category>
      <category>automation</category>
    </item>
    <item>
      <title>Local AI Tools: Exploring LM Studio (Tool 2)</title>
      <dc:creator>Alexander Razvodovskii</dc:creator>
      <pubDate>Sat, 13 Dec 2025 22:34:47 +0000</pubDate>
      <link>https://dev.to/alexlead/local-ai-tools-exploring-lm-studio-tool-2-3801</link>
      <guid>https://dev.to/alexlead/local-ai-tools-exploring-lm-studio-tool-2-3801</guid>
      <description>&lt;p&gt;Continuing our overview of tools for local AI usage, the next solution worth discussing is &lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt;. This is a desktop application focused primarily on ease of use and fast access to local AI models, especially for users who want to run models on their own machines without complex setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Installation and User-Friendly Experience
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt; offers a very straightforward installation process. One of its key advantages is how easily AI models can be discovered, downloaded, and connected. For non-technical or less technical users, this significantly lowers the entry barrier to local AI usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1f5wm7zb4s4uiblekw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1f5wm7zb4s4uiblekw4.png" alt="LM Studio Interface" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my opinion, LM Studio provides a more polished and intuitive user interface compared to many alternative tools. It also introduces some additional capabilities, such as running models directly inside the application and interacting with them in a chat-like environment. This makes LM Studio particularly attractive for experimentation, learning, and everyday tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using LM Studio as a Local Service
&lt;/h2&gt;

&lt;p&gt;Beyond its desktop UI, LM Studio can also be used in a &lt;a href="https://lmstudio.ai/docs/developer/core/server" rel="noopener noreferrer"&gt;local server mode&lt;/a&gt;, which is especially interesting for developers. In this mode, LM Studio exposes an API that allows external tools and services to communicate with locally running models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl766ttj662947ljuxtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl766ttj662947ljuxtt.png" alt=" " width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The official website includes documentation for the &lt;a href="https://lmstudio.ai/docs/developer/rest/endpoints" rel="noopener noreferrer"&gt;built-in API&lt;/a&gt;, making it possible to integrate LM Studio into automation and workflow tools such as n8n. In this setup, LM Studio acts as a local AI service that can be queried just like a remote API endpoint.&lt;/p&gt;
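&lt;p&gt;The server endpoints follow the OpenAI convention, so a quick way to verify the setup is to list the models the server currently exposes. A small sketch, assuming the default server port 1234:&lt;/p&gt;

```python
import json
import urllib.request

SERVER_URL = "http://localhost:1234"  # LM Studio's default local server port

def extract_model_ids(models_response):
    """Return model identifiers from an OpenAI-style /v1/models reply."""
    return [entry["id"] for entry in models_response["data"]]

# To query a running LM Studio server:
#   with urllib.request.urlopen(SERVER_URL + "/v1/models") as resp:
#       print(extract_model_ids(json.load(resp)))
```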

&lt;h2&gt;
  
  
  Limitations for Containerization and Backend Use
&lt;/h2&gt;

&lt;p&gt;However, LM Studio has an important limitation from a development and infrastructure perspective. At the moment, the official documentation does not describe any way to run LM Studio inside a Docker container. This significantly narrows its use cases for server-side deployments, CI/CD pipelines, or cloud-based environments.&lt;/p&gt;

&lt;p&gt;As a result, LM Studio is best suited for local desktop usage, rather than being part of a fully containerized or scalable backend system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Library Integration for Developers
&lt;/h2&gt;

&lt;p&gt;On the other hand, LM Studio does offer integration options in the form of libraries available via &lt;a href="https://www.npmjs.com/package/lmstudio" rel="noopener noreferrer"&gt;npm&lt;/a&gt; and &lt;a href="https://pypi.org/project/lmstudio/" rel="noopener noreferrer"&gt;pip&lt;/a&gt;. This means developers working with JavaScript/TypeScript or Python can integrate LM Studio into their applications more directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2nr27o6jyhmt1gddp4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2nr27o6jyhmt1gddp4f.png" alt=" " width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this is a useful feature, it also highlights a limitation: developers outside the JS/TS and Python ecosystems may find fewer integration options available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary and Personal Assessment
&lt;/h2&gt;

&lt;p&gt;To summarize, LM Studio is a solid and well-designed application for local AI usage. It excels as a tool for everyday users thanks to its intuitive interface and simplified model management.&lt;/p&gt;

&lt;p&gt;That said, it remains somewhat limited for advanced development scenarios and cannot yet be considered a universal solution for all AI workflows. The lack of Docker support, in particular, restricts its role in more complex or production-oriented environments.&lt;/p&gt;

&lt;p&gt;Still, LM Studio is actively evolving. Depending on your specific tasks and requirements, it may already be a very suitable tool — and it will be interesting to see how its capabilities expand in the future.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>automation</category>
      <category>docker</category>
    </item>
    <item>
      <title>Local AI Tools: Getting Started with Ollama (Tool 1)</title>
      <dc:creator>Alexander Razvodovskii</dc:creator>
      <pubDate>Tue, 09 Dec 2025 12:15:59 +0000</pubDate>
      <link>https://dev.to/alexlead/local-ai-tools-getting-started-with-ollama-tool-1-30bl</link>
      <guid>https://dev.to/alexlead/local-ai-tools-getting-started-with-ollama-tool-1-30bl</guid>
      <description>&lt;h4&gt;
  
  
  &lt;a href="https://dev.to/alexlead/artificial-intelligence-in-everyday-work-why-local-models-matter-introduction-2dm7"&gt;Introduction&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;When discussing accessible local AI tools, it’s hard not to begin with one of the most popular options available today — &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;. This tool has gained widespread recognition thanks to its simplicity, reliability, and availability across all major operating systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Easy Installation for Everyone
&lt;/h2&gt;

&lt;p&gt;Ollama stands out because, unlike some other local AI solutions, it offers a fully functional &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;desktop version&lt;/a&gt;. You can simply visit the official Ollama website, download the installer for your operating system, and launch it like any other application. Desktop versions are available for Windows, Linux, and macOS, making Ollama accessible to practically any user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmleq807i6e2m9x6cl8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmleq807i6e2m9x6cl8n.png" alt="Ollama Desktop" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once installed, Ollama provides access to &lt;a href="https://ollama.com/search" rel="noopener noreferrer"&gt;a collection of pre-trained AI models&lt;/a&gt;. A complete list of these models can be found on the official website, and many of them can be downloaded directly through the installer. You are free to choose which models run locally on your machine, and Ollama also supports the option to use certain models remotely if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvcce39aksgsusxoxivd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvcce39aksgsusxoxivd.png" alt="Ollama with installed model" width="793" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I won’t go into the installation process here — it’s straightforward and follows the standard steps for your operating system. The only additional requirement is to complete a simple registration so the desktop app can connect with the Ollama service.&lt;/p&gt;

&lt;p&gt;Please note that Ollama also offers a paid subscription. If you plan to focus on cloud-based usage, you may need to upgrade.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Desktop: Running Ollama in a Container
&lt;/h2&gt;

&lt;p&gt;Although the desktop version is the easiest entry point, Ollama is not limited to that. The tool can also be deployed in a Docker container, which is especially convenient for developers and users who want more control over their environment or plan to integrate Ollama into automated workflows.&lt;/p&gt;

&lt;p&gt;The documentation provides clear &lt;a href="https://docs.ollama.com/docker" rel="noopener noreferrer"&gt;instructions&lt;/a&gt; for installing and running the Docker version. First start the container, then launch a model inside it; once deployed, Ollama runs as a local server that you can check in your browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it ollama ollama run llama3.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:11434/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Ollama is running&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, unlike some other tools we’ll explore later in this series, Ollama does not include a web interface when running inside a container. Interaction with the system is performed exclusively through the &lt;a href="https://docs.ollama.com/api/introduction" rel="noopener noreferrer"&gt;API&lt;/a&gt;, which is thoroughly documented on the website. The API allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List available models&lt;/li&gt;
&lt;li&gt;Submit a prompt&lt;/li&gt;
&lt;li&gt;Receive responses programmatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should be noted that, by default, the model's response is streamed as a sequence of JSON chunks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function askModel() {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      prompt: "Who are you?"
    })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");

  // read the streamed JSON chunks as they arrive
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    console.log(decoder.decode(value, { stream: true }));
  }
}

askModel();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "model": "llama3.2",
    "created_at": "2025-12-09T10:20:54.622934272Z",
    "response": "I",
    "done": false
}{
    "model": "llama3.2",
    "created_at": "2025-12-09T10:20:54.761152451Z",
    "response": "'m",
    "done": false
}
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Communication happens through standard POST requests to the local server, and the API format will feel intuitive to anyone with basic experience integrating services.&lt;/p&gt;
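&lt;p&gt;Each line of the stream is a self-contained JSON object, so reassembling the full reply is a matter of concatenating the &lt;code&gt;response&lt;/code&gt; fields. A minimal sketch (alternatively, include &lt;code&gt;"stream": false&lt;/code&gt; in the request body to receive a single JSON object instead):&lt;/p&gt;

```python
import json

def join_stream(ndjson_lines):
    """Concatenate the "response" fields of Ollama's streaming JSON chunks."""
    text = []
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        # every chunk carries a partial "response"; the final chunk
        # has "done": true and an empty "response"
        text.append(chunk.get("response", ""))
    return "".join(text)
```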

&lt;h2&gt;
  
  
  Installing Ollama Through a Dockerfile
&lt;/h2&gt;

&lt;p&gt;To install Ollama inside a container, we begin by creating a Dockerfile. This file defines how the container will be built, including downloading and installing Ollama.&lt;/p&gt;

&lt;p&gt;Additionally, for convenience, we can have the container pull a specific AI model automatically on startup via the entrypoint script. This saves manual steps later and ensures that the required model is available as soon as the container is running.&lt;/p&gt;

&lt;p&gt;Here’s an example structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;entrypoint.sh (the bash script that pulls the model)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ollama/ollama:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 11434
ENTRYPOINT ["/entrypoint.sh"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;entrypoint.sh&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
ollama serve &amp;amp;
sleep 3
ollama pull llama3.1
wait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows us to include Ollama as a component of any project — web service, automation pipeline, or local development setup — by simply running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t my-ollama .
docker run -p 11434:11434 my-ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, Ollama becomes a plug-and-play module that can be easily distributed, reused, and integrated into more complex systems.&lt;/p&gt;
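&lt;p&gt;The same image can also be wired into a &lt;code&gt;docker-compose.yml&lt;/code&gt; so it starts alongside the rest of a project. A minimal sketch, reusing the build context from the Dockerfile above and a named volume for the downloaded models:&lt;/p&gt;

```yaml
services:
  ollama:
    build: .                   # uses the Dockerfile shown above
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama   # persist downloaded models across restarts

volumes:
  ollama:
```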

&lt;h2&gt;
  
  
  Integrations
&lt;/h2&gt;

&lt;p&gt;The documentation also includes examples for connecting Ollama to modern development and workflow tools such as &lt;a href="https://docs.ollama.com/integrations/n8n" rel="noopener noreferrer"&gt;n8n&lt;/a&gt;, &lt;a href="https://docs.ollama.com/integrations/vscode" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt;, &lt;a href="https://docs.ollama.com/integrations/jetbrains" rel="noopener noreferrer"&gt;JetBrains IDEs&lt;/a&gt;, &lt;a href="https://docs.ollama.com/integrations/codex" rel="noopener noreferrer"&gt;Codex&lt;/a&gt;, and others — making it easier to fit Ollama into broader automation or development pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strengths and Limitations
&lt;/h2&gt;

&lt;p&gt;In my view, the main advantage of Ollama is its simplicity. It’s easy to install, widely available, and accessible even to users who are not deeply technical. For those who want to experiment with AI models on their own machine without complicated setup, it’s an excellent starting point.&lt;/p&gt;

&lt;p&gt;That said, as we will explore in the next articles, Ollama is not always the most efficient or flexible solution — especially for developers who need deeper customization, advanced configuration, or who plan to integrate AI services heavily into complex workflows. In such scenarios, alternative tools may provide more performance or better integration capabilities.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>webdev</category>
      <category>docker</category>
    </item>
    <item>
      <title>Artificial Intelligence in Everyday Work: Why Local Models Matter (Introduction)</title>
      <dc:creator>Alexander Razvodovskii</dc:creator>
      <pubDate>Mon, 08 Dec 2025 10:38:34 +0000</pubDate>
      <link>https://dev.to/alexlead/artificial-intelligence-in-everyday-work-why-local-models-matter-introduction-2dm7</link>
      <guid>https://dev.to/alexlead/artificial-intelligence-in-everyday-work-why-local-models-matter-introduction-2dm7</guid>
      <description>&lt;p&gt;Artificial intelligence is rapidly expanding its presence in our daily and professional workflows. More and more tasks rely on AI-powered solutions, and a growing number of models are being integrated into both everyday consumer applications and complex business processes. As a result, the demand for AI models delivered as services continues to grow.&lt;/p&gt;

&lt;p&gt;At the same time, concerns about data protection are becoming increasingly important. Many users and organizations prefer to keep their information private rather than send sensitive data to large cloud-based AI providers. This naturally raises an important question: How can we use AI models directly on our personal computers or laptops, without depending on external cloud services?&lt;/p&gt;

&lt;p&gt;Fortunately, the ecosystem around local AI is evolving rapidly. A number of solutions already exist that make running models locally not only possible, but in many cases surprisingly simple. These tools allow users to maintain full control over their data while still benefiting from powerful AI capabilities. As demand increases, the market continues to offer more accessible, efficient, and user-friendly options.&lt;/p&gt;

&lt;p&gt;In this series of articles, I will explore the most accessible approaches for running AI models locally and discuss what opportunities and advantages localized AI systems provide. We’ll look at the different tools, compare their capabilities, and walk through the process of implementing such a system on your personal machine. We will also explore how to integrate local AI tools into broader workflows and automation pipelines.&lt;/p&gt;

&lt;p&gt;Additionally, we will dive into key practical topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to choose the right AI model for your needs&lt;/li&gt;
&lt;li&gt;How to install and run models on your local computer&lt;/li&gt;
&lt;li&gt;How various tools and runtimes differ in performance and speed&lt;/li&gt;
&lt;li&gt;How to embed AI capabilities into your daily work processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal of this series is to give a clear, beginner-friendly overview of the current local AI landscape and help you understand how to bring powerful AI tools directly to your own device.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>automation</category>
      <category>docker</category>
    </item>
    <item>
      <title>Developing Wordpress projects with Docker. Part 1. Create new project.</title>
      <dc:creator>Alexander Razvodovskii</dc:creator>
      <pubDate>Sat, 13 Sep 2025 16:12:50 +0000</pubDate>
      <link>https://dev.to/alexlead/developing-wordpress-projects-with-docker-part-1-create-new-project-4a9p</link>
      <guid>https://dev.to/alexlead/developing-wordpress-projects-with-docker-part-1-create-new-project-4a9p</guid>
      <description>&lt;h2&gt;
  
  
  Introduction.
&lt;/h2&gt;

&lt;p&gt;In this series of articles, we will look at deploying WordPress projects on a local machine and then transferring them to a server, covering various related topics: deploying new and existing projects, backups, migration, database access, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  PART 1. Create new project.
&lt;/h2&gt;

&lt;p&gt;WordPress has been around for a long time; it has already celebrated its 20th anniversary. The project has been so successful that a huge number of sites run on it today, and some developers have been building WordPress projects for all of those twenty years. In my practice, WordPress has served both as a ready-to-deploy CMS and as a base for non-trivial projects, such as a marketplace for wholesale supplies of agricultural products.&lt;br&gt;
Previously, I used LAMP to deploy and develop WordPress projects, and later I simplified my workflow with ready-made local servers such as &lt;a href="https://ospanel.io/" rel="noopener noreferrer"&gt;OpenServer&lt;/a&gt;. Modern tooling makes it possible to deploy projects quickly and install them on a server without wasted time. In my current work, I have moved development to Docker, and I want to introduce the readers of my blog to working with it.&lt;/p&gt;

&lt;p&gt;In the first part, we will walk through creating a WordPress project from scratch.&lt;/p&gt;

&lt;p&gt;I hope that my reader already knows what Docker is. If you don't have it installed locally yet, you can download it from the official &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker.com&lt;/a&gt; website.&lt;/p&gt;
&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;Let's start by creating a folder for your project. Open the folder and create the following structure inside it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyrkjfkkvma0o7b4am6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyrkjfkkvma0o7b4am6k.png" alt="Wordpress project structure for Docker" width="224" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have two empty folders and two files.&lt;br&gt;
The &lt;code&gt;mysql&lt;/code&gt; folder is for your database, and the &lt;code&gt;wordpress&lt;/code&gt; folder is for the WordPress files you will edit later. For now, you do not need to copy anything into these folders.&lt;br&gt;
So, let's focus on the files.&lt;/p&gt;
&lt;h3&gt;
  
  
  Files
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;.env&lt;/code&gt; file contains the default settings for your project. Usually this file is not committed to repositories or shared: it is for your local system only. However, it is important to create it on every machine where you install the project. Replace the database name, user, and password with your own values.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;.env&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# MySQL 
MYSQL_ROOT_PASSWORD=your_secure_root_password
MYSQL_DATABASE=wordpress_db
MYSQL_USER=wordpress_user
MYSQL_PASSWORD=your_secure_password

# WordPress settings
WORDPRESS_DEBUG=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file &lt;code&gt;docker-compose.yml&lt;/code&gt; is our main file. It contains detailed instructions for building our project: its services and the relations between them.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  # MySQL
  mysql:
    image: mysql:8.0
    container_name: wordpress_mysql
    restart: unless-stopped
    env_file: .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ./mysql/data:/var/lib/mysql
      # Optional: uncomment only if you actually create ./mysql/init.sql;
      # otherwise Docker mounts an empty directory and MySQL fails to start
      # - ./mysql/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - wordpress_network
    command: --default-authentication-plugin=mysql_native_password

  # WordPress
  wordpress:
    image: wordpress:latest
    container_name: wordpress_app
    restart: unless-stopped
    env_file: .env
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: ${MYSQL_USER}
      WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
      WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
    volumes:
      - ./wordpress/wp-content:/var/www/html/wp-content
    ports:
      - "8000:80"
    depends_on:
      - mysql
    networks:
      - wordpress_network

  # phpMyAdmin
  phpmyadmin:
    image: phpmyadmin:latest
    container_name: wordpress_phpmyadmin
    restart: unless-stopped
    environment:
      PMA_HOST: mysql
      PMA_PORT: 3306
      PMA_ARBITRARY: 0
    ports:
      - "8080:80"
    depends_on:
      - mysql
    networks:
      - wordpress_network

networks:
  wordpress_network:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's focus on some parts of the file.&lt;br&gt;
As you can see, the file has three big blocks (services): &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;WordPress&lt;/li&gt;
&lt;li&gt;phpMyAdmin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the first part, Docker creates a database for our project. It takes the settings from our &lt;code&gt;.env&lt;/code&gt; file and applies them as defaults. Later, after building the project, you will find your database files in the &lt;code&gt;mysql&lt;/code&gt; folder.&lt;br&gt;
The next part is our WordPress project. Docker does not need local WordPress files; it downloads them from Docker's library while building the project. Only the contents of the &lt;code&gt;wp-content&lt;/code&gt; folder are placed in the &lt;code&gt;wordpress&lt;/code&gt; folder, and these are the files you edit. So a developer can be sure the WordPress core is not editable and nothing can be broken by accident.&lt;br&gt;
The last part, &lt;code&gt;phpMyAdmin&lt;/code&gt;, is not required. I added it as an easier way to access the database. You can remove it and use any other tool to manage MySQL. &lt;/p&gt;

&lt;p&gt;Let's also pay attention to the tail - &lt;code&gt;networks&lt;/code&gt;. This part connects all the services to each other.&lt;/p&gt;
&lt;h3&gt;
  
  
  Run Containers
&lt;/h3&gt;

&lt;p&gt;Now our project is ready to run. Before building it, please check whether Docker is running on your PC.&lt;br&gt;
Then open the folder with &lt;code&gt;docker-compose.yml&lt;/code&gt; in a &lt;code&gt;terminal&lt;/code&gt; (&lt;code&gt;cmd&lt;/code&gt;, &lt;code&gt;bash&lt;/code&gt;) and run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within a few minutes Docker will build the project, and you can open it in your browser at the following URLs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WordPress project
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:8000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;phpMyAdmin
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:8080/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
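&lt;p&gt;If one of the pages does not open, you can check the state of the stack from the same terminal. These are standard Docker Compose commands; the service names come from the &lt;code&gt;docker-compose.yml&lt;/code&gt; above:&lt;/p&gt;

```shell
# List the three containers with their current status and published ports
docker-compose ps

# Follow the WordPress container's logs while it finishes its first start
docker-compose logs -f wordpress
```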



&lt;p&gt;Open the folder &lt;code&gt;wordpress&lt;/code&gt; in your code editor. Now you can continue developing your website and have fun.&lt;/p&gt;
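&lt;p&gt;When you finish a working session, you can stop the stack without losing anything, because the database lives in &lt;code&gt;./mysql&lt;/code&gt; and your edits in &lt;code&gt;./wordpress&lt;/code&gt;, both on the host:&lt;/p&gt;

```shell
# Stop and remove the containers and the bridge network;
# the bind-mounted data in ./mysql and ./wordpress stays on disk
docker-compose down

# Bring the same stack back up later
docker-compose up -d
```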

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>learning</category>
      <category>docker</category>
    </item>
    <item>
      <title>Phishing campaign for tricking Booking.com's users and sharing malware.</title>
      <dc:creator>Alexander Razvodovskii</dc:creator>
      <pubDate>Sat, 16 Aug 2025 14:25:14 +0000</pubDate>
      <link>https://dev.to/alexlead/phishing-campaign-for-tricking-bookingcoms-users-and-sharing-malware-ko6</link>
      <guid>https://dev.to/alexlead/phishing-campaign-for-tricking-bookingcoms-users-and-sharing-malware-ko6</guid>
      <description>&lt;p&gt;Researchers have noticed a new phishing strategy used by threat actors. This strategy is seen around links sent by threat actors under the guise of links to the Booking.com website. However, users should be careful, as it is possible that the strategy could be used in other phishing campaigns.&lt;br&gt;
First spotted by security specialist &lt;a href="https://x.com/JAMESWT_WT" rel="noopener noreferrer"&gt;JAMESWT&lt;/a&gt;, the attack was described in &lt;a href="https://x.com/JAMESWT_WT/status/1955060839569870991" rel="noopener noreferrer"&gt;his blog&lt;/a&gt; - on August 12, he reported about &lt;a href="https://x.com/JAMESWT_WT/status/1955060839569870991" rel="noopener noreferrer"&gt;it&lt;/a&gt;. Also, he published the details on the online platform &lt;a href="https://bazaar.abuse.ch/browse/tag/www-account-booking--com/" rel="noopener noreferrer"&gt;Malware bazaar&lt;/a&gt;. &lt;br&gt;
The strategy base is in using of the letter "ん" (Unicode U+3093) in a fishing address. The letter is invisible to the superficial eye and is associated with the slash sign ("/"). In some fonts it looks like the Latin letter "/n" or "/~".&lt;br&gt;
In his &lt;a href="https://x.com/JAMESWT_WT/status/1955060839569870991" rel="noopener noreferrer"&gt;post&lt;/a&gt; on the X network, JAMESWT gave an example of a detected spoofed address&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy7tb670crwnzsdpk9p9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy7tb670crwnzsdpk9p9.png" alt=" " width="545" height="66"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the same time, the email sent by the attackers does not visibly contain this character.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxnci50musod0rmq2inc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxnci50musod0rmq2inc.png" alt="Fishing website" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;He also provided a link to a &lt;a href="https://app.any.run/tasks/35618d39-0189-4eec-87f0-ce918ecf95f4" rel="noopener noreferrer"&gt;demonstration&lt;/a&gt; of the phishing site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgh631v2ewz9vc0ryhjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgh631v2ewz9vc0ryhjo.png" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Victims who click on the link are ultimately redirected to a page that installs the malware.&lt;br&gt;
To avoid becoming a victim of phishing, users are advised to check the domain at the right end of the address, just before the first real slash ("/") - this is the actual registered domain.&lt;/p&gt;
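&lt;p&gt;This check can also be scripted. A minimal sketch with a hypothetical spoofed address (the real campaign URLs are in the sources below): everything up to the first real "/" after the scheme is a single hostname, and the rightmost two dot-separated labels form the registered domain:&lt;/p&gt;

```shell
# Hypothetical spoofed address: "ん" (U+3093) stands where a "/" seems to be.
url='https://www.account.booking.comんdetail.phishing-demo.example/en/'

# Strip the scheme, then cut at the first real "/" - what remains is the host.
host=$(printf '%s\n' "$url" | sed -E 's#^[a-z]+://##; s#/.*##')

# The rightmost two dot-separated labels are the registered domain.
domain=$(printf '%s\n' "$host" | awk -F. '{print $(NF-1) "." $NF}')

printf 'host:   %s\ndomain: %s\n' "$host" "$domain"
```

&lt;p&gt;Here the registered domain is &lt;code&gt;phishing-demo.example&lt;/code&gt;, not &lt;code&gt;booking.com&lt;/code&gt; - exactly what the advice above tells you to look for.&lt;/p&gt;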

&lt;p&gt;Sources:&lt;br&gt;
[1] JAMESWT - &lt;a href="https://x.com/JAMESWT_WT/status/1955060839569870991" rel="noopener noreferrer"&gt;https://x.com/JAMESWT_WT/status/1955060839569870991&lt;/a&gt;&lt;br&gt;
[2] BleepingComputer &lt;a href="https://www.bleepingcomputer.com/news/security/bookingcom-phishing-campaign-uses-sneaky-character-to-trick-you/" rel="noopener noreferrer"&gt;https://www.bleepingcomputer.com/news/security/bookingcom-phishing-campaign-uses-sneaky-character-to-trick-you/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fishing</category>
      <category>cybersecurity</category>
      <category>news</category>
    </item>
  </channel>
</rss>
