<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yhary Arias</title>
    <description>The latest articles on DEV Community by Yhary Arias (@yhary_arias).</description>
    <link>https://dev.to/yhary_arias</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2762480%2F5d914b6a-fa1a-4e00-bf36-fc6e3b1e9a9d.png</url>
      <title>DEV Community: Yhary Arias</title>
      <link>https://dev.to/yhary_arias</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yhary_arias"/>
    <language>en</language>
    <item>
      <title>Docker Offload and the advantages of using it in AI projects</title>
      <dc:creator>Yhary Arias</dc:creator>
      <pubDate>Wed, 16 Jul 2025 02:43:11 +0000</pubDate>
      <link>https://dev.to/yhary_arias/docker-offload-y-las-ventajas-de-usarlo-en-proyectos-de-ai-5dpa</link>
      <guid>https://dev.to/yhary_arias/docker-offload-y-las-ventajas-de-usarlo-en-proyectos-de-ai-5dpa</guid>
      <description>&lt;p&gt;Docker Offload is the new release of Docker released this July 10, 2025, this new functionality allows us to offload the execution of containers remotely, taking advantage of the cloud without configuring complex environments.&lt;/p&gt;

&lt;p&gt;Docker describes it as a way to run containers “as if it were local, but running on another machine”, all driven from the same &lt;code&gt;docker run&lt;/code&gt; we have always used.&lt;/p&gt;

&lt;p&gt;How can this be an advantage in an AI project?&lt;br&gt;
In general, we know that AI projects usually need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Powerful GPUs to train or run models.&lt;/li&gt;
&lt;li&gt;Lots and lots of RAM (otherwise the machine shuts down 😬)&lt;/li&gt;
&lt;li&gt;Scalability and efficiency in model testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Docker Offload, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use your local machine to develop while running models on more powerful infrastructure.&lt;/li&gt;
&lt;li&gt;Accelerate ML pipelines without manually setting up servers&lt;/li&gt;
&lt;li&gt;Test giant models without crashing your machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get to the point! Here are the advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity: No need to set up complex clusters or remote servers.&lt;/li&gt;
&lt;li&gt;Scalability: You can easily leverage the cloud or external servers.&lt;/li&gt;
&lt;li&gt;Portability: You keep using the usual Docker ecosystem.&lt;/li&gt;
&lt;li&gt;Ideal for AI testing and development without overloading your machine 😎&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get to the good stuff: the practice.&lt;/p&gt;

&lt;p&gt;🚀 Step by step: basic Docker Offload configuration.&lt;/p&gt;

&lt;p&gt;Before you start, you need to have installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An up-to-date Docker Desktop (I recommend uninstalling the app and reinstalling it, to be on the safe side).&lt;/li&gt;
&lt;li&gt;Docker Offload enabled (requires Docker account with beta program access).&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Enable Docker Offload&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In Docker Desktop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to Settings &amp;gt; Features in development&lt;/li&gt;
&lt;li&gt;Enable the Docker Offload option&lt;/li&gt;
&lt;li&gt;Restart Docker Desktop&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Follow the "quickstart" in the official Docker documentation&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;According to the documentation as of today (July 15, 2025), we must open Docker Desktop and log in.&lt;/li&gt;
&lt;li&gt;Now open a terminal on your machine (whichever one you use) and run the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;$ docker offload start&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It should show you the following message:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk3uaazpjjp505xqs2fr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk3uaazpjjp505xqs2fr.png" alt=" " width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hit Enter if the account you are going to use is correct. Then it will ask you if you need GPU support. For this example we will say “yes”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mks35hvhhslaaelalk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mks35hvhhslaaelalk2.png" alt=" " width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are now ready to use Docker Offload! 🚀&lt;/p&gt;

&lt;p&gt;Docker Offload will run on an instance with an NVIDIA L4 &lt;strong&gt;GPU&lt;/strong&gt;, which is useful for machine learning and other resource-intensive workloads.&lt;/p&gt;

&lt;p&gt;Now, if you go to Docker Desktop, you will see a cloud icon ☁️ in the header of the panel; this tells you that Docker Offload has been enabled successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqzg1b8hah0qi2hfq3ta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqzg1b8hah0qi2hfq3ta.png" alt=" " width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check the status of Docker Offload let's run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ docker offload status&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgna79tqxbrjtunrhjuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgna79tqxbrjtunrhjuw.png" alt=" " width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally we are going to run a container with Docker Offload.&lt;/p&gt;

&lt;p&gt;The documentation suggests the following example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ docker run --rm --gpus all hello-world&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see the following after running the command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7wr2z0npuqj6czpxct4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7wr2z0npuqj6czpxct4.png" alt=" " width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are doing quite well 🥳 The next example is optional; it shows how another command creates a container in a simple way:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ docker run --platform linux/amd64 python:3.11-slim python -c "print('Hola desde Docker Offload!')"&lt;/code&gt;&lt;/p&gt;
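
&lt;p&gt;If you prefer to drive that same command from a Python script (for example, as one step of an ML pipeline), here is a minimal sketch. The &lt;code&gt;offload_run&lt;/code&gt; helper is hypothetical; it only assumes the docker CLI is on your PATH and that Docker Offload has already been started:&lt;/p&gt;

```python
import subprocess

def offload_run(image, *cmd, platform="linux/amd64"):
    # Compose the same `docker run` invocation shown above; with Docker
    # Offload active, the container transparently executes in the cloud.
    return ["docker", "run", "--rm", "--platform", platform, image, *cmd]

argv = offload_run("python:3.11-slim",
                   "python", "-c", "print('Hola desde Docker Offload!')")
# subprocess.run(argv, check=True)  # uncomment with Docker Offload running
```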

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfo9m2n6457wufnxlyla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfo9m2n6457wufnxlyla.png" alt=" " width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait ‼️ don't go away! We must stop Docker Offload to avoid unnecessary consumption of cloud resources and return to a local environment. Run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ docker offload stop&lt;/code&gt;&lt;/p&gt;
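
&lt;p&gt;If you script your experiments, you can make sure the session is always stopped, even when a workload crashes. This is only a sketch with a hypothetical &lt;code&gt;offload_session&lt;/code&gt; helper; it assumes &lt;code&gt;docker offload start&lt;/code&gt; runs non-interactively because you already answered the account and GPU prompts once:&lt;/p&gt;

```python
import contextlib
import subprocess

@contextlib.contextmanager
def offload_session():
    # Hypothetical helper: start a Docker Offload session for the duration
    # of the `with` block and always stop it afterwards.
    subprocess.run(["docker", "offload", "start"], check=True)
    try:
        yield
    finally:
        # Stop even on failure, to avoid unnecessary cloud consumption.
        subprocess.run(["docker", "offload", "stop"], check=True)

# with offload_session():
#     subprocess.run(["docker", "run", "--rm", "--gpus", "all", "hello-world"])
```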

&lt;p&gt;🛠️ And that's it!&lt;br&gt;
I hope this Docker Offload journey has saved you CPU, RAM... and headaches! 🧠💻&lt;/p&gt;

&lt;p&gt;Love you guys, thanks for making it this far. Bye.&lt;br&gt;
[Yhary Arias / &lt;a class="mentioned-user" href="https://dev.to/ia"&gt;@ia&lt;/a&gt;.fania]&lt;/p&gt;

</description>
      <category>programming</category>
      <category>docker</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Orchestrating Models: Machine Learning with Docker Compose</title>
      <dc:creator>Yhary Arias</dc:creator>
      <pubDate>Tue, 25 Mar 2025 18:19:50 +0000</pubDate>
      <link>https://dev.to/yhary_arias/orchestrating-models-machine-learning-with-docker-compose-5dlo</link>
      <guid>https://dev.to/yhary_arias/orchestrating-models-machine-learning-with-docker-compose-5dlo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt; is a powerful tool for defining and managing multi-container Docker applications in a simple and efficient way. In this article, we will explore the basics of Docker Compose and how you can start using it to orchestrate your applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Docker Compose?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Docker Compose is a tool that allows you to define and run multi-container Docker applications using a YAML file to configure your application's services. Then, with a single command, you can create and run all the defined containers. This makes it easier to create and configure complex development and production environments where multiple services need to interact with each other.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Optional&lt;br&gt;
&lt;strong&gt;Installing Docker Compose&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Before getting started, make sure you have Docker Compose installed on your machine. You can install it by following the &lt;a href="https://docs.docker.com/compose/install/" rel="noopener noreferrer"&gt;official Docker instructions&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If your OS is Mac, you can use the following command to install Docker Compose. Before running it, ensure that Docker Desktop is installed on your machine.  &lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ brew install docker-compose&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, check the installed version:&lt;br&gt;
&lt;code&gt;$ docker-compose --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Before testing Docker Compose with a Machine Learning project, let's clarify the difference between Docker Compose and Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;What it does:&lt;/strong&gt; Docker Compose is a tool that allows you to run multiple containers together. It is mainly designed for development and testing. It is ideal if you want to quickly spin up multiple services on your machine, such as a database, an API, and a web application, all running locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; You use a file called &lt;code&gt;docker-compose.yml&lt;/code&gt; to define which containers you will use and how they connect to each other. For example, you can say: "I want to launch my application and connect it to a database." Compose will take care of that for you with a single command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ideal for:&lt;/strong&gt; Small or development projects where you don't need a very complex system and just want to test things quickly on your machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;What it does:&lt;/strong&gt; Kubernetes is much larger and more powerful than Docker Compose. Not only does it help you launch containers, but it also helps manage applications in production, on real servers, efficiently and at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; Kubernetes ensures that your application is always running, has enough resources, and can handle many users. If one of your containers fails, Kubernetes automatically replaces it. It can also scale (increase or decrease the number of containers) based on what your application needs at any moment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ideal for:&lt;/strong&gt; Large-scale production applications that need to be always available and handle heavy traffic. Big companies or projects expecting significant growth often use Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In summary:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Docker Compose is for launching and managing multiple containers quickly on your local machine, ideal for development or testing.&lt;br&gt;&lt;br&gt;
Kubernetes is for managing large-scale applications in production, where you need more control, stability, and scalability.&lt;br&gt;&lt;br&gt;
Compose is like a small engine that helps you work on your project. Kubernetes is like a massive system that keeps your application running smoothly, even with many users and high traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now, let's get to the real deal. 😎&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;I'll show you how to apply Docker Compose in an ML project. We will create a simple application that trains a machine learning model and exposes a web service for making predictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Objective&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Our project will consist of two services:  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ML Service:&lt;/strong&gt; A trained machine learning model using &lt;strong&gt;scikit-learn&lt;/strong&gt; that is exposed via a Flask web API.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Database Service:&lt;/strong&gt; A PostgreSQL database to store prediction results.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The basic file structure will be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ml_project/
│
├── docker-compose.yml
├── ml_service/
│   ├── Dockerfile
│   ├── app.py
│   ├── model.py
│   ├── requirements.txt
└── db/
    ├── init.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;1. Define &lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The first step is to define the services in a &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  ml_service:
    build: ./ml_service
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_DB: ml_results
      POSTGRES_USER: ml_user
      POSTGRES_PASSWORD: ml_password
    volumes:
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create the Machine Learning service&lt;/strong&gt;&lt;br&gt;
Inside the &lt;code&gt;ml_service/&lt;/code&gt; folder, we create a &lt;code&gt;Dockerfile&lt;/code&gt; that will install the necessary Python dependencies, train a model, and expose the service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt; (for the ML service)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.8

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

# Train the model at build time so model.pkl exists before app.py starts
RUN python model.py

CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;requirements.txt&lt;/code&gt;&lt;br&gt;
Here we define the dependencies we will use, such as Flask for the web server and scikit-learn for the ML model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flask==2.1.0
scikit-learn==1.0.2
psycopg2-binary==2.9.3  # To connect to PostgreSQL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;model.py&lt;/code&gt;&lt;br&gt;
This file contains the code to train the machine learning model. We will use a simple classification model like Logistic Regression.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import pickle

def train_model():
    # Load sample dataset
    iris = load_iris()
    X, y = iris.data, iris.target

    # Train model
    model = LogisticRegression()
    model.fit(X, y)

    # Save model to file
    with open('model.pkl', 'wb') as f:
        pickle.dump(model, f)

if __name__ == "__main__":
    train_model()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code trains a simple classification model and saves it as a &lt;code&gt;model.pkl&lt;/code&gt; file.&lt;/p&gt;
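
&lt;p&gt;The save/load round trip that &lt;code&gt;app.py&lt;/code&gt; relies on can be seen in isolation with a stand-in model (no scikit-learn required); the &lt;code&gt;StubModel&lt;/code&gt; class here is purely illustrative:&lt;/p&gt;

```python
import io
import pickle

class StubModel:
    """Stand-in for the trained LogisticRegression, for illustration only."""
    def predict(self, X):
        return [0 for _ in X]

# Serialize and load back, the same pattern model.py / app.py use with model.pkl
buf = io.BytesIO()
pickle.dump(StubModel(), buf)
buf.seek(0)
model = pickle.load(buf)
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))  # prints [0]
```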

&lt;p&gt;&lt;code&gt;app.py&lt;/code&gt;&lt;br&gt;
This is the Flask file that creates the API to make predictions using the trained model. Additionally, it stores the predictions in the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask, request, jsonify
import pickle
import psycopg2

app = Flask(__name__)

# Load trained ML model
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

# Connect to the PostgreSQL database, retrying briefly while the db
# container finishes starting (depends_on does not wait for readiness)
import time

conn = None
for _ in range(10):
    try:
        conn = psycopg2.connect(
            dbname="ml_results", user="ml_user", password="ml_password", host="db"
        )
        break
    except psycopg2.OperationalError:
        time.sleep(2)
cur = conn.cursor()

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    X_new = [data['features']]

    # Make prediction
    prediction = model.predict(X_new)[0]

    # Save prediction to database
    cur.execute("INSERT INTO predictions (input, result) VALUES (%s, %s)", (str(X_new), int(prediction)))
    conn.commit()

    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Flask service listens on port 5000 and, upon receiving a POST request with the input characteristics, returns a prediction and stores the result in PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configure the Database&lt;/strong&gt;&lt;br&gt;
In the &lt;code&gt;db/&lt;/code&gt; directory, we create an &lt;code&gt;init.sql&lt;/code&gt; file to initialize the database with a table to store predictions.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;init.sql&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE predictions (
    id SERIAL PRIMARY KEY,
    input TEXT,
    result INTEGER
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script will run automatically when the PostgreSQL container is started, and will create a table called &lt;code&gt;predictions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Run the Project&lt;/strong&gt;&lt;br&gt;
Now that everything is set up, you can run the entire project with Docker Compose. From the root project directory, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ docker-compose up&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will make Docker Compose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build the machine learning service image.&lt;/li&gt;
&lt;li&gt;Start the ML and database containers.&lt;/li&gt;
&lt;li&gt;Serve the Flask web API (and the trained model) on port 5000.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Test the API&lt;/strong&gt;&lt;br&gt;
To make a prediction, send a POST request to &lt;code&gt;http://localhost:5000/predict&lt;/code&gt; with a JSON body containing dataset features.&lt;/p&gt;

&lt;p&gt;Example &lt;code&gt;curl&lt;/code&gt; command:&lt;br&gt;
&lt;code&gt;$ curl -X POST http://localhost:5000/predict -H "Content-Type: application/json" -d '{"features": [5.1, 3.5, 1.4, 0.2]}'&lt;/code&gt;&lt;/p&gt;
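
&lt;p&gt;The same request can also be sent from Python with nothing but the standard library. A minimal sketch, assuming the stack from step 4 is running via &lt;code&gt;docker-compose up&lt;/code&gt;:&lt;/p&gt;

```python
import json
from urllib import request

def build_predict_request(features, url="http://localhost:5000/predict"):
    # Build the same POST request the curl example sends.
    body = json.dumps({"features": features}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")

req = build_predict_request([5.1, 3.5, 1.4, 0.2])
# with request.urlopen(req) as resp:        # requires `docker-compose up`
#     print(json.load(resp)["prediction"])
```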

&lt;p&gt;If everything is configured correctly, you will get a response like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "prediction": 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
With Docker Compose, we have created a machine learning project that includes a web service to make predictions with an ML model, as well as a PostgreSQL database to store the results. Docker Compose simplifies the management of these services in development and production environments, allowing you to work with multiple containers in a coordinated manner.&lt;/p&gt;

&lt;p&gt;This example is just a starting point, and you can expand it by adding more services, connecting other machine learning models or integrating tools like Redis for caching or Celery for asynchronous tasks.&lt;/p&gt;

&lt;p&gt;Now you're ready to use Docker Compose in more complex machine learning projects!&lt;/p&gt;

&lt;p&gt;Author: Yhary Arias.&lt;br&gt;
GitHub: &lt;a href="https://github.com/yharyarias" rel="noopener noreferrer"&gt;@yharyarias&lt;/a&gt;&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/yharyarias/" rel="noopener noreferrer"&gt;@yharyarias&lt;/a&gt;&lt;br&gt;
Instagram: &lt;a href="https://www.instagram.com/ia.fania/" rel="noopener noreferrer"&gt;@ia.fania&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>api</category>
      <category>flask</category>
      <category>python</category>
    </item>
  </channel>
</rss>
