<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: InterSystems</title>
    <description>The latest articles on DEV Community by InterSystems (@intersystems).</description>
    <link>https://dev.to/intersystems</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2450%2F5c611adb-602d-4948-b84b-5fe47046fd5c.png</url>
      <title>DEV Community: InterSystems</title>
      <link>https://dev.to/intersystems</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/intersystems"/>
    <language>en</language>
    <item>
      <title>IRIS Dockerization and Embedded Python for Data Science — One-Command Setup for Reproducible ML Workflows</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Thu, 30 Apr 2026 15:41:55 +0000</pubDate>
      <link>https://dev.to/intersystems/iris-dockerization-and-embedded-python-for-data-science-one-command-setup-for-reproducible-ml-am4</link>
      <guid>https://dev.to/intersystems/iris-dockerization-and-embedded-python-for-data-science-one-command-setup-for-reproducible-ml-am4</guid>
      <description>&lt;p&gt;1-command only required for an entire IRIS instance for Data Science projects, and leveraging this to compare query methods' speed (Dynamic SQL, Pandas Query, and Globals).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0hxffduy4g1brhtt646.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0hxffduy4g1brhtt646.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before joining InterSystems, I worked in a team of web developers as a data scientist. Most of my day-to-day work involved training and embedding ML models in Python-based backend applications through microservices, mainly built with the Django framework and using PostgreSQL for sourcing the data. During development, testing, and deployment, I realized the importance of repeatability of results, both for the model’s inferences and for the performance inside the application, regardless of the hardware being used to run the code.&lt;/p&gt;

&lt;p&gt;This naturally went hand in hand with adopting good coding practices, such as modularization to reduce code repetition and boilerplate, making maintenance easier and speeding up development. For this reason, Docker in particular became an essential tool in our workflow, not only for scalability and ease of deployment, but also to reduce human error and ensure that code behaves the same way everywhere, regardless of the underlying machine.&lt;/p&gt;

&lt;p&gt;When I joined InterSystems, I was immediately impressed by the robustness of IRIS as a data platform. Its resilience to human error when following guidelines to create services through productions, the multi-model nature of how information can be stored, and, in particular, the lightning-fast access to data through globals opened my eyes to a different way of thinking about performance and data access patterns, especially when compared to a traditional relational-only mindset.&lt;/p&gt;

&lt;p&gt;I was also lucky to join the company (September 2025) at a time when a rich ecosystem of tools was already in place, significantly flattening the learning curve. The VS Code ObjectScript Extension Pack, Embedded Python, the official IRIS Docker images, and the InterSystems Package Manager (IPM) for easily importing ObjectScript packages (&lt;a href="https://github.com/intersystems/ipm" rel="noopener noreferrer"&gt;https://github.com/intersystems/ipm&lt;/a&gt;) quickly became my everyday toolbelt.&lt;/p&gt;

&lt;p&gt;After about three months, I felt confident enough working with this stack that I started standardizing my own development environment. In this article, I’d like to share how I set up a fully containerized IRIS instance for Data Science projects using Docker—ready to use Embedded Python out of the box, with all required dependencies installed from both Python’s &lt;code&gt;pip&lt;/code&gt; and IPM.&lt;/p&gt;

&lt;p&gt;I’ll also use this setup to share some insights on the incredible speed of using globals to query tables, in a practical scenario where the popular gradient boosting model &lt;strong&gt;LightGBM&lt;/strong&gt; is used to train and make inferences on a mock dataset. This allows us to measure inference speed while comparing the different querying approaches available in IRIS.&lt;/p&gt;

&lt;p&gt;Some important highlights that will be addressed in this article are how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Link custom Python packages during the Docker build process, so they can be imported naturally (e.g. &lt;code&gt;from mypythonpackage import myclassorfunc&lt;/code&gt;) inside any Embedded Python methods living on ObjectScript classes, without repetitive boilerplate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Automatically execute IRIS terminal commands as soon as the container starts, which in this scenario is used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import custom ObjectScript packages into IRIS.&lt;/li&gt;
&lt;li&gt;Install IPM and, through it, Shvarov’s &lt;code&gt;csvgenpy&lt;/code&gt; utility
(&lt;a href="https://community.intersystems.com/post/csvgenpy-import-any-csv-intersystems-iris-using-embedded-python" rel="noopener noreferrer"&gt;https://community.intersystems.com/post/csvgenpy-import-any-csv-intersystems-iris-using-embedded-python&lt;/a&gt;),
used to create and populate new tables from a single CSV file.&lt;/li&gt;
&lt;li&gt;Check whether an IRIS table already exists and, if it doesn’t, populate it using &lt;code&gt;csvgenpy&lt;/code&gt; with a CSV file mounted into the container via Docker volumes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;All of this is achieved by running a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker-compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, the repository accompanying this article uses this setup to create a complete IRIS environment with all the tools and data needed to compare different ways of querying the same IRIS table and converting the results into a Pandas DataFrame (NumPy-based), which is typically what gets passed to Python-based machine learning models.&lt;/p&gt;

&lt;p&gt;The comparison includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic SQL queries&lt;/li&gt;
&lt;li&gt;Pandas querying the table directly&lt;/li&gt;
&lt;li&gt;Direct access through globals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each approach, execution time is measured to quantitatively compare the performance of the different querying methods. This analysis shows that direct global access provides the lowest-latency data retrieval for machine learning inference workloads by far.&lt;/p&gt;

&lt;p&gt;At the same time, consistency across querying methods is validated by asserting equality of the resulting Pandas DataFrames, ensuring that identical dataframes (and therefore identical downstream ML predictions) are produced regardless of the query mechanism used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── docker-compose.yml             # Docker orchestration configuration
├── dockerfile                     # Multi-stage build with IRIS + Python
├── iris_autoconf.sh               # Auto-configuration script for IRIS terminal commands
├── requirements.txt               # Python libraries
├── MockPackage/                   # Custom package
│   ├── MockDataManager.cls        # Data management utilities
│   ├── MockModelManager.cls       # ML model training
│   └── MockInference.cls          # Data retrieval and inference benchmarks
├── python_utils/                  # Custom Python packages
│   ├── __init__.py
│   ├── utils.py                   # ML preprocessing &amp;amp; inference
│   └── querymethods.py            # Methods for querying IRIS tables
└── dur/                           # Volume for durable data on host machine and container
    ├── data/                      # CSV datasets
    └── models/                    # Trained LightGBM models
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dockerization of IRIS
&lt;/h2&gt;

&lt;p&gt;This section describes the main building blocks used to dockerize a Python-ready IRIS instance. The goal here is not only to run IRIS inside a container, but to do so in a way that makes it immediately usable for Data Science workflows: Embedded Python enabled, Python dependencies installed, ObjectScript packages available through IPM, and data automatically loaded when the container starts.&lt;/p&gt;

&lt;p&gt;The setup relies on three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker-compose.yml&lt;/code&gt; to define how the IRIS container is built and run&lt;/li&gt;
&lt;li&gt;a multi-stage &lt;code&gt;Dockerfile&lt;/code&gt; to prepare Embedded Python and dependencies&lt;/li&gt;
&lt;li&gt;an &lt;code&gt;iris_autoconf.sh&lt;/code&gt; script to automate IRIS-side configuration at startup&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  docker-compose.yml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  iris:
    build: # How is the image built
      context: . # Path to the directory containing the Dockerfile
      dockerfile: Dockerfile # Name of the Dockerfile
    container_name: iris-experimentation # Name of the container
    ports:
      - "1972:1972"    # SuperServer port
      - "52773:52773"  # Management Portal/Web Gateway
    volumes:
      - ./dur/.:/dur:rw # map host directory to container directory with read-write permissions
    restart: always # Always restart the container if it stops (unless explicitly stopped)
    healthcheck:
      test: ["CMD", "iris", "session", "iris", "-U", "%SYS", "##class(SYS.Database).GetMountedSize()"] # Health check command
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    command: --after "/usr/irissys/iris_autoconf.sh" # Run autoconf script after startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker Compose specifies how the IRIS container is built, which ports are exposed, how storage is handled, and what commands are executed at startup. In particular, I want to highlight the following points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;volumes: ./dur/.:/dur:rw&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates the &lt;code&gt;/dur&lt;/code&gt; directory inside the container and maps it to &lt;code&gt;./dur&lt;/code&gt; (relative to the location of &lt;code&gt;docker-compose.yml&lt;/code&gt;) on the host machine, with both read and write permissions.&lt;/p&gt;

&lt;p&gt;In practice, this means that the host machine and the container share the same directory contents. This makes it very easy to load files into IRIS and to inspect or modify them from the host without any extra copying steps.&lt;/p&gt;

&lt;p&gt;In this project, this is how the &lt;code&gt;/data&lt;/code&gt; and &lt;code&gt;/models&lt;/code&gt; folders are directly made available inside the container under &lt;code&gt;/dur&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;command: --after "/usr/irissys/iris_autoconf.sh"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This command allows the execution of a bash script immediately after the container is up and running. The script contains all the commands needed to open an IRIS terminal session and execute any required IRIS-side configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The commands in this script are executed every time the container starts. This means that if the container goes down for any reason and restarts (for example, due to &lt;code&gt;restart: always&lt;/code&gt;), all the commands in this script will be executed again. If this behavior is not taken into account when writing the script, it can lead to unintended side effects such as reinstalling packages or resetting tables.&lt;/p&gt;
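&lt;p&gt;The "safe to re-run" idea can be sketched in plain Python with a marker file (a hypothetical illustration of the pattern; the repository's actual script instead guards on the IRIS side by checking whether the target table already exists):&lt;/p&gt;

```python
from pathlib import Path

def run_once(marker: Path, action) -> bool:
    """Run `action` only if `marker` does not exist yet, then create it.

    Returns True if the action ran, False if it was skipped.
    """
    if marker.exists():
        return False
    action()
    marker.touch()
    return True
```

&lt;p&gt;Placing such a marker under the durable &lt;code&gt;/dur&lt;/code&gt; volume would make the guard survive container restarts, so the expensive step runs only on the very first startup.&lt;/p&gt;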

&lt;h3&gt;
  
  
  dockerfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: Build stage for installing dependencies
FROM python:3.12-slim AS builder

# Set the working directory
WORKDIR /app

# Copy the requirements file into the image
COPY requirements.txt requirements.txt

# Install the Python dependencies into a temporary location
RUN pip install --no-cache-dir --target /install -r requirements.txt

# Stage 2: Final image with InterSystems IRIS and the installed Python libraries
FROM containers.intersystems.com/intersystems/iris-community:latest-em

# Switch to the root user to install necessary system packages
USER root

# Install the correct Python 3.12 development library for Ubuntu Noble
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y libpython3.12-dev wget &amp;amp;&amp;amp; \
    rm -rf /var/lib/apt/lists/*

# Set the environment variables for Embedded Python
ENV PythonRuntimeLibrary=/usr/lib/x86_64-linux-gnu/libpython3.12.so
ENV PythonRuntimeLibraryVersion=3.12

# Update the LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:${LD_LIBRARY_PATH}

# Copy the installed Python packages from the builder stage
COPY --from=builder /install /usr/irissys/mgr/python

# Your own Python package
COPY python_utils /usr/irissys/mgr/python/python_utils
ENV PYTHONPATH=/usr/irissys/mgr/python:${PYTHONPATH}


# Copy ObjectScript classes into the image
COPY MockPackage /usr/irissys/mgr/MockPackage
# Copy and set permissions for the autoconf script while still root
COPY iris_autoconf.sh /usr/irissys/iris_autoconf.sh
RUN chmod +x /usr/irissys/iris_autoconf.sh

# Switch back to the default `irisowner` user
USER irisowner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a two-stage Dockerfile.&lt;/p&gt;

&lt;p&gt;The first stage is a lightweight build stage used to install all Python dependencies listed in &lt;code&gt;requirements.txt&lt;/code&gt; into a temporary directory. This keeps the final image clean and avoids installing build tools directly into the IRIS image.&lt;/p&gt;

&lt;p&gt;The second stage is based on the official InterSystems IRIS image. Here, the Python runtime library required for Embedded Python is installed, and IRIS is configured so that Embedded Python can recognize both the runtime library and all installed Python packages, including custom ones.&lt;/p&gt;

&lt;p&gt;It is worth highlighting the following configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embedded Python runtime configuration&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ENV PythonRuntimeLibrary=/usr/lib/x86_64-linux-gnu/libpython3.12.so
  ENV PythonRuntimeLibraryVersion=3.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These environment variables achieve what would otherwise be configured manually through the Management Portal by navigating to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;System Administration → Configuration → Additional Settings → Advanced Memory&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and updating the Embedded Python runtime settings. Defining them in the Dockerfile makes the configuration explicit, reproducible, and version-controlled.&lt;/p&gt;

&lt;p&gt;Additionally, the classes inside the &lt;code&gt;MockPackage&lt;/code&gt; package are copied into the container through:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY MockPackage /usr/irissys/mgr/MockPackage&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;so that they can later be imported into IRIS automatically when the following bash script is executed after the container is up and running.&lt;/p&gt;

&lt;h3&gt;
  
  
  iris_autoconf.sh
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e

iris session IRIS &amp;lt;&amp;lt;'EOF'

/* Install the IPM/ZPM client from the community registry */
s version="latest" s r=##class(%Net.HttpRequest).%New(),r.Server="pm.community.intersystems.com",r.SSLConfiguration="ISC.FeatureTracker.SSL.Config" d r.Get("/packages/zpm/"_version_"/installer"),$system.OBJ.LoadStream(r.HttpResponse.Data,"c")

/* Configure registry */
zpm
repo -r -n registry -url https://pm.community.intersystems.com/ -user "" -pass ""
install csvgenpy
quit

/* Import and Compile the MockPackage */
/* The "ck" flags will Compile and Keep the source */
Do $system.OBJ.Import("/usr/irissys/mgr/MockPackage", "ck")

/* Upload csv data ONCE to Table Automatically using csvgenpy */
SET exists = ##class(%SYSTEM.SQL.Schema).TableExists("MockPackage.NoShowsAppointments")
IF 'exists {   do ##class(shvarov.csvgenpy.csv).Generate("/dur/data/healthcare_noshows_appointments.csv","NoShowsAppointments","MockPackage")   }

halt
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a bash script that is executed inside the container immediately after startup. It opens an IRIS terminal session using &lt;code&gt;iris session IRIS&lt;/code&gt; and runs IRIS-specific commands to perform additional configuration steps automatically.&lt;/p&gt;

&lt;p&gt;These steps include importing custom packages whose classes were copied inside the container's storage, installing IPM (available as &lt;code&gt;zpm&lt;/code&gt; inside the IRIS terminal), installing IPM packages such as &lt;code&gt;csvgenpy&lt;/code&gt;, and using &lt;code&gt;csvgenpy&lt;/code&gt; to load a CSV file mounted into the container at &lt;code&gt;/dur/data/healthcare_noshows_appointments.csv&lt;/code&gt; to create and populate a corresponding table in IRIS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This script is executed every time the container starts. If this behavior is not considered, it can lead to unintended side effects such as reloading or resetting data. That is why it is important to make the script safe to run multiple times, for example, by checking whether the target table already exists before creating or populating it. This is especially relevant here because the Docker Compose restart policy is set to &lt;code&gt;restart: always&lt;/code&gt;, meaning the container will automatically restart and re-execute these commands whenever it goes down.&lt;/p&gt;

&lt;h2&gt;
  
  
  Packages for Benchmarking
&lt;/h2&gt;

&lt;p&gt;This section introduces the ObjectScript packages used to benchmark different data access strategies in IRIS for a Machine Learning inference workload. The focus here is not on model quality, but on measuring and comparing the time it takes to retrieve data from IRIS, convert it into a Pandas DataFrame, and run inference using a trained LightGBM model.&lt;/p&gt;

&lt;p&gt;Each class plays a specific role in this process, from data preparation, to model training, and finally to inference and performance comparison.&lt;/p&gt;

&lt;h3&gt;
  
  
  MockDataManager.cls
&lt;/h3&gt;

&lt;p&gt;This class contains methods for taking a given CSV file and duplicating its rows to reach a desired dataset size (&lt;code&gt;AdjustDataSize&lt;/code&gt;), as well as updating a given IRIS table with the specified CSV (&lt;code&gt;UpdateTableFromCSV&lt;/code&gt;). The main purpose of these utilities is to allow testing query and inference time across multiple table sizes in a controlled way.&lt;/p&gt;
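&lt;p&gt;The row-duplication idea behind &lt;code&gt;AdjustDataSize&lt;/code&gt; can be sketched in plain Python (a hypothetical stand-alone helper, not the repository's implementation):&lt;/p&gt;

```python
import csv
import io
import itertools

def grow_csv(csv_text: str, target_rows: int) -> str:
    """Duplicate the data rows of a CSV (keeping the header) by cycling
    through them until `target_rows` rows are reached."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    grown = list(itertools.islice(itertools.cycle(data), target_rows))
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows([header] + grown)
    return out.getvalue()
```

&lt;p&gt;Generating each size variant this way keeps the value distribution of the original dataset while letting the row count be controlled exactly.&lt;/p&gt;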

&lt;p&gt;Note: Throughout this analysis, we focus exclusively on the &lt;strong&gt;inference time&lt;/strong&gt; of a LightGBM model. We are not concerned with model performance metrics such as F1 score, precision, recall, or accuracy at this stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  MockModelManager.cls
&lt;/h3&gt;

&lt;p&gt;In this class, the only relevant method is &lt;code&gt;TrainNoShowsModel&lt;/code&gt;. It leverages the data processing pipeline defined in &lt;code&gt;python_utils.utils&lt;/code&gt; to prepare the raw data, passed in as a Pandas DataFrame, fit a LightGBM model, and persist the trained model to disk.&lt;/p&gt;

&lt;p&gt;The model is saved to a predefined location, which in this setup corresponds to the persistent storage mounted through Docker volumes in &lt;code&gt;docker-compose.yml&lt;/code&gt;. This allows the trained model to be reused across container restarts and inference runs without retraining.&lt;/p&gt;

&lt;h3&gt;
  
  
  MockInference.cls
&lt;/h3&gt;

&lt;p&gt;The core of the performance comparison lives in this class. The process begins by loading the trained LightGBM model weights from the file path specified in the &lt;code&gt;MODELPATH&lt;/code&gt; parameter. While this path is currently hardcoded, it serves as a static reference point shared by all inference tests.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RunInferenceWDynamicSQL&lt;/code&gt; represents the first approach. It relies on an ObjectScript method called &lt;code&gt;DynamicSQL&lt;/code&gt;, which executes a Dynamic SQL statement to filter records by age. The results are packed into a &lt;code&gt;%DynamicArray&lt;/code&gt; of &lt;code&gt;%DynamicObjects&lt;/code&gt;. This method is then called by the &lt;code&gt;dynamic_sql_query&lt;/code&gt; Python function in &lt;code&gt;python_utils/querymethods.py&lt;/code&gt;, where the IRIS objects are converted into a structure that can be easily transformed into a Pandas DataFrame.&lt;/p&gt;

&lt;p&gt;The entire workflow, including execution time measurement via a Python decorator defined in &lt;code&gt;python_utils/utils.py&lt;/code&gt;, is orchestrated inside &lt;code&gt;RunInferenceWDynamicSQL&lt;/code&gt;. The resulting DataFrame is then passed through the inference pipeline to produce predictions and measure end-to-end inference latency.&lt;/p&gt;
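&lt;p&gt;Such a timing decorator can be as small as the following stdlib sketch (the actual decorator in &lt;code&gt;python_utils/utils.py&lt;/code&gt; may differ in its details):&lt;/p&gt;

```python
import functools
import time

def timed(func):
    """Return the wrapped function's result together with its wall-clock runtime."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        return result, time.perf_counter() - start
    return wrapper

@timed
def fetch_rows(n):
    # Stand-in for a query method returning n rows.
    return list(range(n))
```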

&lt;p&gt;&lt;code&gt;RunInferenceWIRISSQL&lt;/code&gt; follows a simpler path. It uses the &lt;code&gt;iris_sql_query&lt;/code&gt; method from &lt;code&gt;python_utils/querymethods.py&lt;/code&gt; to execute the SQL query directly from Python. The resulting IRIS SQL iterator is transformed directly into a Pandas DataFrame, after which the same inference and timing logic used in the previous method is applied.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RunInferenceWGLobals&lt;/code&gt; is the most direct approach, as it queries the underlying data structures (globals) backing the table. It uses the &lt;code&gt;iris_global_query&lt;/code&gt; method to fetch data directly from &lt;code&gt;^vCVc.Dvei.1&lt;/code&gt;. This particular global was identified as the &lt;code&gt;DataLocation&lt;/code&gt; in the storage definition of the &lt;code&gt;MockPackage.NoShowsAppointments&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;The global name itself comes from the hashed storage definition that was automatically generated when the table was built from the CSV file.&lt;/p&gt;

&lt;p&gt;Finally, the integrity of all three approaches is verified using the &lt;code&gt;ConsistencyCheck&lt;/code&gt; method. This utility asserts that the Pandas DataFrames produced by each query strategy are identical, ensuring that data types, values, and numerical precision remain perfectly consistent regardless of the access method used.&lt;/p&gt;

&lt;p&gt;Since this check completes without raising any errors, it confirms that Dynamic SQL, direct SQL access from Python, and high-speed global access all return exactly the same dataset.&lt;/p&gt;
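&lt;p&gt;A minimal version of such an assertion can be written with pandas' own testing helpers (a sketch of the idea, not the repository's &lt;code&gt;ConsistencyCheck&lt;/code&gt; implementation):&lt;/p&gt;

```python
import pandas as pd
from pandas.testing import assert_frame_equal

def consistency_check(frames):
    """Raise AssertionError unless all DataFrames match exactly
    (values, dtypes, and column order)."""
    reference = frames[0]
    for frame in frames[1:]:
        assert_frame_equal(reference, frame, check_dtype=True, check_exact=True)

# Example: two query paths returning identical results pass silently.
df_sql = pd.DataFrame({"Age": [30, 40], "NoShow": [0, 1]})
df_glb = pd.DataFrame({"Age": [30, 40], "NoShow": [0, 1]})
consistency_check([df_sql, df_glb])
```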

&lt;h2&gt;
  
  
  Performance comparison
&lt;/h2&gt;

&lt;p&gt;To evaluate performance, we measured query and inference times for increasing table sizes and report in the table below the average time over 10 runs for each configuration. Query time corresponds to retrieving the data from the database, while inference time corresponds to running the LightGBM model on the resulting dataset.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rows&lt;/th&gt;
&lt;th&gt;DynamicSQL – Query&lt;/th&gt;
&lt;th&gt;DynamicSQL – Infer&lt;/th&gt;
&lt;th&gt;IRISSQL – Query&lt;/th&gt;
&lt;th&gt;IRISSQL – Infer&lt;/th&gt;
&lt;th&gt;Globals – Query&lt;/th&gt;
&lt;th&gt;Globals – Infer&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;0.003219271&lt;/td&gt;
&lt;td&gt;0.042354488&lt;/td&gt;
&lt;td&gt;0.001749706&lt;/td&gt;
&lt;td&gt;0.043090796&lt;/td&gt;
&lt;td&gt;0.001184559&lt;/td&gt;
&lt;td&gt;0.043616056&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;0.031865168&lt;/td&gt;
&lt;td&gt;0.052698898&lt;/td&gt;
&lt;td&gt;0.019246697&lt;/td&gt;
&lt;td&gt;0.056159472&lt;/td&gt;
&lt;td&gt;0.005061340&lt;/td&gt;
&lt;td&gt;0.045210719&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;0.237553477&lt;/td&gt;
&lt;td&gt;0.082497978&lt;/td&gt;
&lt;td&gt;0.099582171&lt;/td&gt;
&lt;td&gt;0.068728352&lt;/td&gt;
&lt;td&gt;0.036206818&lt;/td&gt;
&lt;td&gt;0.061128354&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;5.279174852&lt;/td&gt;
&lt;td&gt;0.189197206&lt;/td&gt;
&lt;td&gt;1.122253346&lt;/td&gt;
&lt;td&gt;0.177564192&lt;/td&gt;
&lt;td&gt;0.535172153&lt;/td&gt;
&lt;td&gt;0.175085044&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500,000&lt;/td&gt;
&lt;td&gt;68.741133046&lt;/td&gt;
&lt;td&gt;0.639807224&lt;/td&gt;
&lt;td&gt;7.015313649&lt;/td&gt;
&lt;td&gt;0.610818386&lt;/td&gt;
&lt;td&gt;2.743980526&lt;/td&gt;
&lt;td&gt;0.587647438&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;196.871173100&lt;/td&gt;
&lt;td&gt;1.145034313&lt;/td&gt;
&lt;td&gt;22.138613220&lt;/td&gt;
&lt;td&gt;1.136569023&lt;/td&gt;
&lt;td&gt;5.987578392&lt;/td&gt;
&lt;td&gt;1.106307745&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2,000,000&lt;/td&gt;
&lt;td&gt;711.319680452&lt;/td&gt;
&lt;td&gt;3.021180152&lt;/td&gt;
&lt;td&gt;60.142974615&lt;/td&gt;
&lt;td&gt;2.879153728&lt;/td&gt;
&lt;td&gt;11.92040014&lt;/td&gt;
&lt;td&gt;2.728573560&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To characterise how query and inference times scale with table size, we fitted a power-law regression of the form &lt;code&gt;t(n) = a * n^k + c&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenephm08qhsw3ec4zzkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenephm08qhsw3ec4zzkb.png" alt=" " width="388" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Inference Time
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcvw6y0dacebvenak2m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcvw6y0dacebvenak2m2.png" alt=" " width="625" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu1ioe6qjno3i0950a0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu1ioe6qjno3i0950a0z.png" alt=" " width="431" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inference time is very similar across all three query methods, which is expected, as the resulting input DataFrame was verified to be identical in all cases.&lt;/p&gt;

&lt;p&gt;From the measurements, the model is able to perform inference on approximately 1 million rows in about 1 second, highlighting the high throughput of LightGBM.&lt;/p&gt;

&lt;p&gt;The fitted exponent (k ~ 1.3) indicates slightly superlinear scaling of total inference time with respect to the number of rows. This behaviour is commonly observed in large-scale batch processing and is likely attributable to system-level effects such as cache pressure or memory bandwidth saturation, rather than to the algorithmic complexity of the model itself.&lt;/p&gt;

&lt;p&gt;The scaling factor "a" is on the order of tens of nanoseconds, reflecting the efficiency of the per-row computation. While the superlinear exponent implies that the marginal cost per additional row increases with table size, this effect becomes noticeable only at large scales (millions of rows), as illustrated by the increasing slope in the log–log plot.&lt;/p&gt;

&lt;p&gt;The marginal inference cost can be estimated from the derivative of the fitted model:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wnvyih0ql36502xmb8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wnvyih0ql36502xmb8g.png" alt=" " width="138" height="57"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Evaluating this expression shows that the per-row marginal inference time increases from approximately 1.9e-7 seconds at 1,000 rows to 1.5e-6 seconds at 1 million rows, remaining firmly in the microsecond range within the observed data regime.&lt;/p&gt;
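&lt;p&gt;These marginal figures follow from the derivative &lt;code&gt;t'(n) = a * k * n^(k-1)&lt;/code&gt; of the fitted curve. In the snippet below, the value of &lt;code&gt;a&lt;/code&gt; is back-derived from the quoted marginal times, so its exact value is an assumption, though consistent with the "tens of nanoseconds" order of magnitude given above:&lt;/p&gt;

```python
def marginal_inference_time(n, a=1.84e-8, k=1.3):
    """Per-row marginal cost t'(n) = a * k * n**(k - 1) of the fitted power law."""
    return a * k * n ** (k - 1)

# About 1.9e-7 s/row at 1,000 rows vs about 1.5e-6 s/row at 1,000,000 rows.
```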

&lt;p&gt;Finally, the fitted constant offset (c ~ 0.08 seconds) likely represents a fixed inference overhead, such as model invocation and runtime initialisation, and should be interpreted as a constant cost independent of table size.&lt;/p&gt;

&lt;h3&gt;
  
  
  Query Time
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjj1v93hxrjouzb4ij4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjj1v93hxrjouzb4ij4r.png" alt=" " width="357" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj4on4da3gfoa6e10tpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj4on4da3gfoa6e10tpg.png" alt=" " width="447" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Query time exhibits substantially different scaling behavior across the three access methods. In contrast to inference time, which is largely independent of the query mechanism, query performance is dominated by the data access strategy and its interaction with storage and execution layers.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Globals-based&lt;/strong&gt; approach shows nearly linear scaling (k ~ 1.03), indicating that the cost of retrieving each additional row remains approximately constant across the measured range. This behavior is consistent with sequential access patterns and minimal query-planning overhead, making Globals the most scalable option for large result sets.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;IRISSQL&lt;/strong&gt; approach exhibits moderately superlinear scaling (k ~ 1.48). While still efficient for moderate table sizes, the increasing marginal cost suggests growing overhead from SQL execution, query planning, or intermediate result materialization as the number of rows increases.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;DynamicSQL&lt;/strong&gt; approach displays the most pronounced superlinear scaling (k ~ 1.82), resulting in rapidly increasing query times at larger scales. This behavior explains the steep slope observed in the plot and indicates that DynamicSQL incurs significant additional overhead as result size grows, making it the least scalable method for large batch queries.&lt;/p&gt;

&lt;p&gt;Although the fitted scaling factors "a" are numerically small, they must be interpreted jointly with the exponent "k". In practice, the exponent dominates the asymptotic behavior, which is why DynamicSQL, despite a small "a", becomes significantly slower at large table sizes.&lt;/p&gt;

&lt;p&gt;The fitted constant term "c" represents the fixed query overhead. For IRISSQL, "c" is close to zero, indicating a small startup cost. This overhead is even smaller for the Globals-based approach, where the fitted value is slightly negative, effectively suggesting a zero fixed cost. This behavior is expected, as data retrieval via a global key proceeds directly without additional query planning or execution overhead.&lt;/p&gt;

&lt;p&gt;In contrast, the relatively large constant offset observed for DynamicSQL indicates a substantial fixed overhead, likely associated with query preparation or execution setup. This fixed cost penalizes performance across all table sizes and becomes particularly impactful at both small and large scales.&lt;/p&gt;

&lt;p&gt;Overall, these results highlight that query time, unlike inference time, is highly sensitive to the data access method, with Globals offering near-linear scalability, IRISSQL providing a balanced middle ground, and DynamicSQL exhibiting poor scalability for large result sets.&lt;/p&gt;
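The fits discussed above use the power-law model t(n) = a·n^k + c. As a quick illustration of how such an exponent is extracted (on synthetic numbers, not the article's measurements), a least-squares line through (log n, log t) recovers k and a when the constant offset c is negligible:

```python
import math

# Synthetic query timings following t(n) = a * n**k (constant offset c omitted),
# to illustrate how the scaling exponent k is read off a log-log fit.
a_true, k_true = 2e-6, 1.5
sizes = [10**3, 10**4, 10**5, 10**6]
times = [a_true * n**k_true for n in sizes]

# Least-squares line through (log n, log t): slope = k, intercept = log a
xs = [math.log(n) for n in sizes]
ys = [math.log(t) for t in times]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - k * mx)
print(round(k, 3))  # recovers the exponent, here 1.5
```

With a non-negligible offset c, one would instead fit all three parameters jointly with a nonlinear least-squares routine.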

&lt;p&gt;Please refer to the following repository for more details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/JorgeIvanJH/IRIS_dockerization.git" rel="noopener noreferrer"&gt;https://github.com/JorgeIvanJH/IRIS_dockerization.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video demo here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/IcShNKQ4jIk" rel="noopener noreferrer"&gt;https://youtu.be/IcShNKQ4jIk&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions or notice any mistakes, please don’t hesitate to reach out.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>sql</category>
      <category>python</category>
      <category>performance</category>
    </item>
    <item>
      <title>Vector Search with Embedded Python in InterSystems IRIS</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Thu, 30 Apr 2026 15:35:55 +0000</pubDate>
      <link>https://dev.to/intersystems/vector-search-with-embedded-python-in-intersystems-iris-h3a</link>
      <guid>https://dev.to/intersystems/vector-search-with-embedded-python-in-intersystems-iris-h3a</guid>
      <description>&lt;p&gt;&lt;span&gt;&lt;span&gt;One objective of vectorization is to render unstructured text more machine-usable. Vector embeddings accomplish this by encoding the semantics of text as high-dimensional numeric vectors, which can be employed by advanced search algorithms (normally an approximate nearest neighbor algorithm like Hierarchical Navigable Small World). This not only improves our ability to interact with unstructured text programmatically but makes it searchable by context and by meaning beyond what is captured literally by keyword.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;In this article I will walk through a simple vector search implementation that Kwabena Ayim-Aboagye and I fleshed out using Embedded Python in InterSystems IRIS for Health. I'll also dive a bit into how to use Embedded Python and Dynamic SQL generally, and how to take advantage of vector search features offered natively through IRIS.&lt;/p&gt;
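As a toy illustration of the scoring idea (pure Python, no IRIS; the count-based "embedding" over a four-word vocabulary is a stand-in for a learned model such as all-minilm), cosine similarity ranks a query vector against document vectors:

```python
import math

def embed_toy(text, vocab):
    # Toy "embedding": term counts over a tiny fixed vocabulary.
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(u, v):
    # Cosine similarity: dot product over the product of the vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

vocab = ["rose", "red", "sea", "ship"]
q = embed_toy("red red rose", vocab)
d1 = embed_toy("a red rose in the garden", vocab)
d2 = embed_toy("the ship sailed the sea", vocab)
print(cosine(q, d1) > cosine(q, d2))  # True: d1 is closer to the query
```

Real embedding models replace the count vectors with dense 384-dimensional vectors, and an HNSW index replaces the brute-force comparison, but the ranking principle is the same.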
&lt;h2&gt;Environment Details:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;OS: Windows Server 2025&lt;/li&gt;
&lt;li&gt;InterSystems IRIS for Health 2025.1&lt;/li&gt;
&lt;li&gt;VS Code / InterSystems Server Manager&lt;/li&gt;
&lt;li&gt;Python 3.13.7&lt;/li&gt;
&lt;li&gt;Python Libraries: pandas, ollama, iris*&lt;/li&gt;
&lt;li&gt;Ollama 0.12.3 and model all-minilm&lt;/li&gt;
&lt;li&gt;Dynamic SQL&lt;/li&gt;
&lt;li&gt;Sample database of unstructured text (classic poems)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Process:&lt;/h2&gt;
&lt;h3&gt;0. &lt;strong&gt;Set up the environment; complete installs&lt;/strong&gt;&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;h3&gt;&lt;strong&gt;Define an auxiliary table&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;&lt;li&gt;The embeddings table &lt;code&gt;User.SamplePoetryVectors&lt;/code&gt; has a foreign key on &lt;code&gt;User.SamplePoetry&lt;/code&gt; as well as an &lt;code&gt;EMBEDDING&lt;/code&gt; property of type &lt;code&gt;%Library.Vector&lt;/code&gt;. Ollama &lt;code&gt;all-minilm&lt;/code&gt; generates embeddings of 384 dimensions, so we imposed a length constraint accordingly.&lt;ul&gt;
&lt;li&gt;&lt;img src="/sites/default/files/inline/images/table_dfns_0.png" alt=""&gt;&lt;/li&gt;
&lt;li&gt;*Note that because the goal is to ultimately take advantage of &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&amp;amp;CLASSNAME=%25SQL.Index.HNSW" rel="noopener noreferrer"&gt;IRIS' native HNSWIndex&lt;/a&gt; and &lt;a href="https://docs.intersystems.com/iris20253/csp/docbook/Doc.View.cls?KEY=RSQL_vectorcosine" rel="noopener noreferrer"&gt;IRIS' native vector search methods&lt;/a&gt;, &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQL_vecsearch#GSQL_vecsearch_index_hnsw" rel="noopener noreferrer"&gt;we must have a column of type %Library.Vector (or %Library.Embedding) of fixed length that is of type decimal or double&lt;/a&gt; upon which to index.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;&lt;strong&gt;Define a &lt;/strong&gt;&lt;code&gt;&lt;strong&gt;RegisteredObject&lt;/strong&gt;&lt;/code&gt;&lt;strong&gt; class&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This class holds our vectorization methods, written in Embedded Python. First, let's focus on a &lt;code&gt;VectorizeTable()&lt;/code&gt; method, which contains a driver function (of the same name) and a few supporting process functions, all written in Python.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The driver function walks through the process as follows:&lt;ol&gt;
&lt;li&gt;Load from IRIS into a Pandas Dataframe (via supporting function &lt;code&gt;load_table()&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Generate an embedding column (via supporting class method &lt;code&gt;GetEmbeddingString&lt;/code&gt;, which will later be used to generate embeddings for queries as well)&lt;ul&gt;&lt;li&gt;Convert the embedding column to a string that's compatible with IRIS vector type&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Write the dataframe into the auxiliary table&lt;/li&gt;
&lt;li&gt;Create an HNSW index on the auxiliary table&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;VectorizeTable()&lt;/code&gt; class method then simply calls the driver function:&lt;ul&gt;&lt;li&gt;&lt;img src="/sites/default/files/inline/images/vectorizetable.png" alt=""&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Let's examine it step-by-step:&lt;/li&gt;
&lt;/ul&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;h4&gt;&lt;strong&gt;Load the table from IRIS into a Pandas Dataframe&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;def load_table(sample_size='') -&amp;gt; pd.DataFrame:
    sql = f"SELECT * FROM SQLUser.SamplePoetry{f' LIMIT {sample_size}' if sample_size != '*' else ''}"
    result_set = iris.sql.exec(sql)
    df = result_set.dataframe()

    # Entries without text will not be vectorized nor searchable
    for index, row in df.iterrows():
        if row['poem'] == ' ' or row['poem'] is None:
            df = df.drop(index)

    return df&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;This function leverages the &lt;code&gt;dataframe()&lt;/code&gt; method of &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&amp;amp;CLASSNAME=%25SYS.Python.SQLResultSet#METHOD_dataframe" rel="noopener noreferrer"&gt;the embedded python SQLResultSet objects&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;load_table()&lt;/code&gt; accepts an optional &lt;code&gt;sample_size&lt;/code&gt; argument for testing purposes. There's also a filter for entries without unstructured text. Though our sample database is curated and complete, some use cases may seek to vectorize datasets for which one cannot assume each row will have data for all columns (for example, survey responses with skipped questions). Rather than implementing a "null" or empty vector, we chose to exclude such rows from vector search by removing them at this step in the process.&lt;/li&gt;
&lt;li&gt;*Note that &lt;code&gt;iris&lt;/code&gt; is the &lt;a href="https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GEPYTHON_reference" rel="noopener noreferrer"&gt;InterSystems IRIS Python module&lt;/a&gt;. It functions as an API to access IRIS classes and methods and to interact with the database.&lt;/li&gt;
&lt;li&gt;*Note that &lt;code&gt;SQLUser&lt;/code&gt; is the &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQL_tables#GSQL_tables_schemadefault" rel="noopener noreferrer"&gt;system-wide default schema&lt;/a&gt;, which &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_defpersobj#GOBJ_defpersobj_sqlproj_pkg" rel="noopener noreferrer"&gt;corresponds to the default package&lt;/a&gt; &lt;code&gt;User&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;&lt;strong&gt;Generate an embedding column (support method)&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;ClassMethod GetEmbeddingString(aurg As %String) As %String [ Language = python ]
{
  import iris
  import ollama

  response = ollama.embed(model='all-minilm', input=[ aurg ])
  embedding_str = str(response.embeddings[0])

  return embedding_str
}&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;We installed Ollama on our VM, loaded the &lt;code&gt;all-minilm&lt;/code&gt; embedding model, and generated embeddings using Ollama’s Python library. This allowed us to run the model locally and generate embeddings without an API key.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GetEmbeddingString&lt;/code&gt; returns the embedding as a string because &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_tovector#RSQL_tovector_args_data" rel="noopener noreferrer"&gt;&lt;code&gt;TO_VECTOR&lt;/code&gt;&lt;/a&gt; by default expects the &lt;code&gt;data&lt;/code&gt; argument to be a string, more on that to follow.&lt;/li&gt;
&lt;li&gt;*Note that Embedded Python provides syntax for calling other ObjectScript methods defined within the current class (similar to &lt;code&gt;self&lt;/code&gt; in Python). The &lt;code&gt;iris.cls(__name__)&lt;/code&gt; syntax gets a reference to the current ObjectScript class, which lets us invoke &lt;code&gt;GetEmbeddingString&lt;/code&gt; (an ObjectScript method) from &lt;code&gt;VectorizeTable&lt;/code&gt; (an Embedded Python method inside an ObjectScript method).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;&lt;strong&gt;Write the embeddings from the dataframe into the table in IRIS&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;&lt;span class="mention"&gt;# Write dataframe into new table&lt;/span&gt;&lt;br&gt;
print(&lt;span class="mention"&gt;"Loading data into table..."&lt;/span&gt;)&lt;br&gt;
&lt;span class="mention"&gt;for&lt;/span&gt; index, row &lt;span class="mention"&gt;in&lt;/span&gt; df.iterrows():&lt;br&gt;
    sql = iris.sql.prepare(&lt;span class="mention"&gt;"INSERT INTO SQLUser.SamplePoetryVectors (ID, EMBEDDING) VALUES (?, TO_VECTOR(?, decimal))"&lt;/span&gt;)&lt;br&gt;
    rs = sql.execute(row[&lt;span class="mention"&gt;'id'&lt;/span&gt;], row[&lt;span class="mention"&gt;'embedding'&lt;/span&gt;])

&lt;p&gt;print(&lt;span&gt;"Data loaded into table."&lt;/span&gt;)&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;Here, we use Dynamic SQL to populate &lt;code&gt;SamplePoetryVectors&lt;/code&gt; row-by-row. Because earlier we declared the &lt;code&gt;EMBEDDING&lt;/code&gt; property to be of type &lt;code&gt;%Library.Vector&lt;/code&gt; we must use &lt;a href="http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_tovector#RSQL_tovector_args_data" rel="noopener noreferrer"&gt;&lt;code&gt;TO_VECTOR&lt;/code&gt;&lt;/a&gt; to convert the embeddings to IRIS' native &lt;a href="https://docs.intersystems.com/iris20253/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&amp;amp;PRIVATE=1&amp;amp;CLASSNAME=%25Library.Vector" rel="noopener noreferrer"&gt;&lt;code&gt;VECTOR&lt;/code&gt;&lt;/a&gt; datatype upon insertion. We ensured compatibility with &lt;code&gt;TO_VECTOR&lt;/code&gt; by converting the embeddings to strings earlier.&lt;ul&gt;&lt;li&gt;The &lt;code&gt;iris&lt;/code&gt; python module again allows us to take advantage of Dynamic SQL from within our Embedded Python function.&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;&lt;strong&gt;Create an HNSW Index&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;&lt;span class="mention"&gt;# Create Index&lt;/span&gt;&lt;br&gt;
iris.sql.exec(&lt;span class="mention"&gt;"CREATE INDEX HNSWIndex ON TABLE SQLUser.SamplePoetryVectors (EMBEDDING) AS HNSW(Distance='Cosine')"&lt;/span&gt;)&lt;br&gt;
print(&lt;span class="mention"&gt;"Index created."&lt;/span&gt;)&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;IRIS will natively implement a &lt;a href="https://arxiv.org/abs/1603.09320" rel="noopener noreferrer"&gt;HNSW graph&lt;/a&gt; for use in vector search methods when an &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&amp;amp;CLASSNAME=%25SQL.Index.HNSW" rel="noopener noreferrer"&gt;HNSW index&lt;/a&gt; is created on a compatible column. The vector search methods available through IRIS are &lt;code&gt;VECTOR_DOT_PRODUCT&lt;/code&gt; and &lt;code&gt;VECTOR_COSINE&lt;/code&gt;. Once the index is created, IRIS will automatically use it to optimize the corresponding vector search method when called in subsequent queries. The parameter defaults for an HNSW index are &lt;code&gt;Distance = Cosine&lt;/code&gt;, &lt;code&gt;M = 16&lt;/code&gt;, and &lt;code&gt;efConstruction = 200&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Note that &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_vectorcosine#RSQL_vectorcosine_desc" rel="noopener noreferrer"&gt;&lt;code&gt;VECTOR_COSINE&lt;/code&gt;&lt;/a&gt; implicitly normalizes its input vectors, so we did not need to perform normalization before inserting them into the table in order for our vector search queries to be scored correctly!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
&lt;strong&gt;Implement a &lt;/strong&gt;&lt;code&gt;&lt;strong&gt;VectorSearch()&lt;/strong&gt;&lt;/code&gt;&lt;strong&gt; class method&lt;/strong&gt;
&lt;/h3&gt;
&lt;ul&gt;&lt;li&gt;&lt;img src="/sites/default/files/inline/images/vectorsearch_0.png" alt=""&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;h4&gt;Generate an embedding for the query string&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;&lt;span class="mention"&gt;# Generate embedding of search parameter&lt;/span&gt;&lt;br&gt;
search_vector = iris.cls(&lt;strong&gt;name&lt;/strong&gt;).GetEmbeddingString(aurg)&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;Reusing the class method &lt;code&gt;GetEmbeddingString&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;Prepare and execute a query that utilizes &lt;code&gt;VECTOR_COSINE&lt;/code&gt;
&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;&lt;span class="mention"&gt;# Prepare and execute SQL statement&lt;/span&gt;&lt;br&gt;
stmt = iris.sql.prepare(&lt;br&gt;
        """SELECT top 5 p.poem, p.title, p.author &lt;br&gt;
        FROM SQLUser.SamplePoetry AS p &lt;br&gt;
        JOIN SQLUser.SamplePoetryVectors AS v &lt;br&gt;
        ON p.ID = v.ID &lt;br&gt;
        ORDER BY VECTOR_COSINE(v.embedding, TO_VECTOR(?)) DESC"""&lt;br&gt;
)&lt;br&gt;
results = stmt.execute(search_vector)&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;We use a &lt;code&gt;JOIN&lt;/code&gt; here to combine the poetry text with its corresponding vector embedding so we can rank results by semantic similarity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;Output the results&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;pre&gt;&lt;code&gt;results_df = pd.DataFrame(results)

pd.set_option('display.max_colwidth', 25)
results_df.rename(columns={0: 'Poem', 1: 'Title', 2: 'Author'}, inplace=True)

print(results_df)&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;

&lt;li&gt;Uses pandas formatting options to tweak how the output appears in the IRIS Terminal:&lt;ul&gt;&lt;li&gt;&lt;img src="/sites/default/files/inline/images/terminal_example.png" alt=""&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;
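One recurring detail above is that embeddings travel to and from IRIS as strings (for TO_VECTOR compatibility). A minimal sketch of that round trip on the Python side, assuming the bracketed list form produced by str(response.embeddings[0]); the helper names here are hypothetical:

```python
import ast

def embedding_to_string(vec):
    # Same shape produced by str(...) in GetEmbeddingString: "[0.1, 0.2, 0.3]"
    return str(list(vec))

def string_to_embedding(s):
    # Parse the bracketed form back into a list of floats.
    return [float(x) for x in ast.literal_eval(s)]

vec = [0.1, -0.25, 0.5]
s = embedding_to_string(vec)
print(string_to_embedding(s) == vec)  # True: the string form round-trips exactly
```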

</description>
      <category>vectordatabase</category>
      <category>database</category>
      <category>tutorial</category>
      <category>ux</category>
    </item>
    <item>
      <title>KMS . Introduction to its use in IRIS and an example of setup on AWS EC2 system</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Sun, 26 Apr 2026 16:19:04 +0000</pubDate>
      <link>https://dev.to/intersystems/kms-introduction-to-its-use-in-iris-and-an-example-of-setup-on-aws-ec2-system-425e</link>
      <guid>https://dev.to/intersystems/kms-introduction-to-its-use-in-iris-and-an-example-of-setup-on-aws-ec2-system-425e</guid>
      <description>&lt;p&gt;IRIS can use a KMS (Key Managment Service) as of release 2023.3.  Intersystems documentation is a good resource on KMS implementation but does not go into details of the KMS set up on the system, nor provide an easily followable example of how one might set this up for basic testing.&lt;/p&gt;

&lt;p&gt;The purpose of this article is to supplement the docs with a brief explanation of KMS, an example of its use in IRIS, and notes on setting up a testing system on an AWS EC2 Red Hat Linux system using the AWS KMS. It is assumed that the reader already has the access and knowledge to set up an AWS EC2 Linux system running IRIS (2023.3 or later), and that they have proper authority to use the AWS KMS and AWS IAM (for creating roles and policies), or can get this access either on their own or via the security contact in charge of their organization's AWS access.&lt;/p&gt;

&lt;p&gt;What is a KMS, and what does it do for IRIS?&lt;/p&gt;

&lt;p&gt;KMS stands for Key Management Service. Briefly, it provides a secure, external method of encrypting and decrypting IRIS encryption keys through a trusted service, the KMS.&lt;/p&gt;

&lt;p&gt;In the prior implementation, when using unattended startup, IRIS would never store unencrypted encryption keys; the key file itself contained an encrypted copy of the key-encryption key. IRIS would then store a user ID and password to decrypt the encrypted key-encryption key. This leaves an unencrypted copy of the user ID and password stored in an IRIS database, which places the extra burden of securing it on IRIS managers. The key-encryption key is encrypted/decrypted by a symmetric key that is based on a key admin's password using PBKDF2 (Password-Based Key Derivation Function 2). So the key that encrypts the key-encryption key is never stored anywhere; it is derived on the fly when a key admin supplies their password. Since there can be multiple admins for the keys in a given key file, the key file stores one encrypted copy of the key-encryption key per admin, and a single encrypted copy of each database/data-element encryption key (encrypted with the key-encryption key).&lt;/p&gt;
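The PBKDF2 step can be sketched with Python's standard library; the salt, iteration count, and key length below are illustrative choices, not IRIS's actual parameters:

```python
import hashlib
import os

# Derive a symmetric key from a key admin's password via PBKDF2-HMAC-SHA256.
salt = os.urandom(16)                            # stored alongside the key file
password = b"admin-password-entered-at-startup"  # never stored; supplied interactively

kek_wrapper = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)

# Re-supplying the same password and salt reproduces the identical key on the fly,
# which is why the derived key itself never needs to be stored anywhere.
again = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(again == kek_wrapper, len(kek_wrapper))  # True 32
```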

&lt;p&gt;With KMS we do not store the ID and password in IRIS. When we create the encryption key with KMS, we receive an encrypted encryption key, and the KMS keeps the key-encryption key for us. We reach out to the KMS server with the encrypted encryption key, the KMS server decrypts it, and the decrypted key is sent back to us and stored in memory. The communications are secured using TLS.&lt;/p&gt;

&lt;p&gt;We never have access to the raw key-encryption key; we use it as a service via the KMS, and it stays on the KMS server. This helps with key management and key security.&lt;/p&gt;
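That wrap/unwrap flow can be simulated in a few lines. This is strictly a conceptual toy: the XOR "cipher" stands in for real symmetric encryption, and ToyKMS, generate_data_key, and decrypt are invented names that only mirror the shape of a KMS API; IRIS's actual KMS calls are not shown here.

```python
import os

class ToyKMS:
    """Stand-in for a KMS: it alone holds the key-encryption key (KEK)."""
    def __init__(self):
        self._kek = os.urandom(32)  # the KEK never leaves the "KMS"

    def _xor(self, data):
        # XOR with the KEK: a toy placeholder for real symmetric encryption.
        return bytes(b ^ k for b, k in zip(data, self._kek))

    def generate_data_key(self):
        # Return (plaintext key, wrapped key); the caller persists only the wrapped copy.
        data_key = os.urandom(32)
        return data_key, self._xor(data_key)

    def decrypt(self, wrapped):
        # Unwrap a previously wrapped key.
        return self._xor(wrapped)

kms = ToyKMS()
plain_key, wrapped_key = kms.generate_data_key()
# The client (IRIS, here) stores only wrapped_key; at startup it asks the
# "KMS" to unwrap it and keeps the result in memory only.
recovered = kms.decrypt(wrapped_key)
print(recovered == plain_key)  # True
```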

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;The current implementation (as of 1/22/2024) of KMS support is cloud-vendor specific:&lt;/p&gt;

&lt;p&gt;In AWS you must specify creation of a symmetric key.&lt;/p&gt;

&lt;p&gt;In Azure you must specify creation of an RSA key.&lt;/p&gt;

&lt;p&gt;Future implementations may include Google KMS.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;---&lt;/p&gt;

&lt;p&gt;Example of workflow setting up new encryption key in IRIS using KMS:&lt;/p&gt;

&lt;p&gt;The following assumes you have set up an IRIS system to access an AWS KMS server and your instance has been authorized to access the keys there and you have set up a key for use.  (See Setup Notes following this example for an example of setting up KMS on AWS to connect with an AWS EC2 RedHat Linux instance.)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;1. %SYS&amp;gt;D ^EncryptionKey&lt;/p&gt;

&lt;p&gt;2. Create New Key&lt;/p&gt;

&lt;p&gt;3. Name the key&lt;/p&gt;

&lt;p&gt;4. Use KMS: yes&lt;/p&gt;

&lt;p&gt;      Here you specify properties of the key. Choose backup if you want a regular encryption key made to back up this KMS key; this is the only place you can do this. Treat this backup as you would a normal encryption key.&lt;/p&gt;

&lt;p&gt;5. Select AWS for the KMS server&lt;/p&gt;

&lt;p&gt;6. Get the key ID and the region from your AWS Key Management Service console&lt;/p&gt;

&lt;p&gt;7. Env Key: you should not need to specify anything here if your system is set up correctly (per this article). See the AWS docs for further details if necessary for your needs. Leave it blank to keep this testing example simple.&lt;/p&gt;

&lt;p&gt;8. You should receive a message like:&lt;/p&gt;

&lt;p&gt;Encryption key file created: iriskmstest1&lt;br&gt;Encryption key created via KMS: 87A85627-9F8C-11EE-8839-0608ECAD1BAF&lt;/p&gt;

&lt;p&gt;This key is NOT activated.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Key Activation and use are then usual encryption key setup steps.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;If there are issues with the activation at startup, it will error and go into interactive mode.&lt;/p&gt;

&lt;p&gt;For interactive startup, if you pass in a KMS key it will not prompt for a username or password.&lt;/p&gt;

&lt;p&gt;If you put in the backup key (generated in step 4 above), it will ask for the username and password you created at key creation time (just like a normal key).&lt;/p&gt;

&lt;p&gt;If there are issues, you will see errors in your startup, or logged in messages.log if using silent startup.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;In general, your IRIS system does not need to be on AWS or another cloud system; it accesses the KMS for the key over TLS.&lt;/p&gt;

&lt;p&gt;IRIS uses the credentials of the current user when accessing the KMS server, so you need to make sure that user has access to the KMS.&lt;/p&gt;

&lt;p&gt;The AWS key policy defines who can use the key on AWS. See the following setup notes for an example.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;----&lt;/p&gt;

&lt;p&gt;Setup Notes: Getting an AWS EC2 Linux system running IRIS to work with an AWS KMS:&lt;/p&gt;

&lt;p&gt;(The following assumes you already have an AWS EC2 RedHat Linux system running an IRIS version that supports KMS)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;To set up the AWS EC2 system to use the AWS KMS server:&lt;/p&gt;

&lt;p&gt;Follow Setup instructions in following link to install the AWS CLI on your EC2 system:&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;  Install or update the latest version of the AWS CLI - AWS Command Line Interface (amazon.com)&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;There are instructions for different OS types. For the purpose of this instruction set I used an AWS Red Hat Linux system, and it was fairly straightforward to follow that doc to install the AWS CLI.&lt;/p&gt;

&lt;p&gt;I also had to run 'sudo yum install unzip' to install unzip on the system, since the instructions use unzip on the AWS CLI download zip file.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Here are the steps to create a key that could be used by an IRIS instance for encryption key encryption:&lt;/p&gt;

&lt;p&gt;1. In the AWS Management Console go to Key Management Service.&lt;/p&gt;

&lt;p&gt;2. Click on Customer Managed Keys&lt;/p&gt;

&lt;p&gt;3. Click on Create Key&lt;/p&gt;

&lt;p&gt;4. Accept the Defaults&lt;/p&gt;

&lt;p&gt;5. Enter an Alias; this is the name for the key&lt;/p&gt;

&lt;p&gt;6. Key Admin Options: default policy&lt;/p&gt;

&lt;p&gt;7. Click Finish&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;The IRIS instance will also need to be authorized to use the KMS key. This is done either by running the instance as a user who has authenticated to AWS and is authorized to use the key, by specifying a credentials file with the AWS_SHARED_CREDENTIALS_FILE environment variable, or by assigning to the EC2 instance itself an IAM role that either has a policy attached allowing key usage or has an explicit allowance specified in the key policy itself.&lt;/p&gt;

&lt;p&gt;For the purpose of this instruction set we are following the third option, as ISC Development has suggested it would be the most commonly used by customers in AWS. In the following we will create an IAM role that can be assigned to the EC2 instance itself. The role can have a policy attached to it that gives it very targeted privileges to access a given key in the KMS (or even just allow specific operations with the key). We are only exploring the simplest process, to give us something to use for testing.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Here are the steps for Authorizing an Instance of IRIS on an AWS EC2 system to use the key on the KMS server:&lt;/p&gt;

&lt;p&gt;1. In the AWS Management Console go to Key Management Service&lt;/p&gt;

&lt;p&gt;2. Under "Customer managed keys" click on the Key ID of the key you want to use.&lt;/p&gt;

&lt;p&gt;3. In the "General configuration" section click the "Copy" icon next to the ARN to copy the ARN to the clipboard. Paste this value somewhere to use later in the policy configuration.&lt;/p&gt;

&lt;p&gt;4. In the AWS Management Console go to IAM.&lt;br&gt;5. Under "Access Management"&amp;gt;"Policies" click "Create policy".&lt;br&gt;6. Under "Select a service" choose KMS from the drop-down list. Click "Next".&lt;br&gt;7. Under "Actions allowed" click on the "Write" access level expander. Check the "Decrypt" and "Encrypt" checkboxes.&lt;br&gt;8. Under "Resources" click on the "Add ARNs" link.&lt;br&gt;9. Paste the entire ARN from Step 3 above into the "Resource ARN" text field. Click "Add ARNs". Click "Next".&lt;br&gt;10. Under "Policy details" provide a policy name and, if desired, a policy description. Click "Create policy".&lt;/p&gt;

&lt;p&gt;11. In IAM under "Access Management"&amp;gt;"Roles" click "Create role".&lt;br&gt;12. Under "Trusted entity type" click "AWS service". Under "Use case" select EC2 from the drop-down list. Click "Next".&lt;br&gt;13. Under "Permissions policies" start typing the policy name from Step 10 until it appears in the list. Click the checkbox next to it. Click "Next".&lt;br&gt;14. Under "Role details" provide a role name. Click "Create role".&lt;/p&gt;

&lt;p&gt;15. In the AWS Management Console go to EC2. Navigate to "Instances"&amp;gt;"Instances".&lt;br&gt;16. If the EC2 instance already exists:&lt;br&gt;    a. Click the checkbox next to the instance name.&lt;br&gt;    b. Click "Actions"&amp;gt;"Security"&amp;gt;"Modify IAM role".&lt;br&gt;    c. Choose the role from Step 14 from the drop-down list.&lt;br&gt;    d. Click "Update IAM role".&lt;br&gt;Otherwise, if launching a new EC2 instance:&lt;br&gt;    a. Click "Launch instances".&lt;br&gt;    b. Under "Advanced details" choose the role from Step 14 in the "IAM instance profile" drop-down list.&lt;/p&gt;

&lt;p&gt;17.You can now use the kms key in ^EncryptionKey&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Notes:&lt;br&gt;After creating the policy and role you might need to refresh the Management Console for these new resources to show up.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;---&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Supplemental:&lt;/p&gt;

&lt;p&gt;Class methods of interest:&lt;/p&gt;

&lt;p&gt;%SYSTEM.Encryption.KMSCreateEncryptionKey()&lt;/p&gt;

&lt;p&gt;%SYSTEM.Encryption.ActivateEncryptionKey() ;just supply the KMS key, no need for a username or password&lt;/p&gt;

&lt;p&gt;do ReadFile^EncryptionKey(&amp;lt;key&amp;gt;,.data) zw data ;it will be obvious from the data returned whether the key is KMS type.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Doc link:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.intersystems.com/irisforhealth20233/csp/docbook/DocBook.UI.Page.cls?KEY=ROARS_encrypt_mgmt#ROARS_encrypt_KMS" rel="noopener noreferrer"&gt;Key Management Tasks | InterSystems IRIS for Health 2023.3&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>aws</category>
      <category>encryption</category>
      <category>beginners</category>
    </item>
    <item>
      <title>IRIS SIEM System Integration with Crowdstrike Logscale</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Sun, 26 Apr 2026 16:17:27 +0000</pubDate>
      <link>https://dev.to/intersystems/iris-siem-system-integration-with-crowdstrike-logscale-5406</link>
      <guid>https://dev.to/intersystems/iris-siem-system-integration-with-crowdstrike-logscale-5406</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg68k8f6ffrgaoekhdq3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg68k8f6ffrgaoekhdq3l.png" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IRIS makes &lt;a href="https://www.irs.gov/privacy-disclosure/security-information-and-event-management-siem-systems" rel="noopener noreferrer"&gt;SIEM&lt;/a&gt; systems integration simple with Structured Logging and Pipes!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adding a SIEM integration to InterSystems IRIS for "Audit Database Events" was dead simple with the &lt;a href="https://cloud.community.humio.com/" rel="noopener noreferrer"&gt;Community Edition of CrowdStrike's Falcon LogScale&lt;/a&gt;, and here's how I got it done.  &lt;br&gt;&lt;br&gt;&lt;strong&gt;CrowdStrike Community LogScale Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.community.humio.com/" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt; was ridiculously straightforward, and I had the account approved in a couple of days with the following disclaimer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Falcon LogScale Community is a free service providing you with up to 16 GB/day of data ingest, up to 5 users, and 7 day data retention, if you exceed the limitations, you’ll be asked to upgrade to a paid offering. You can use Falcon LogScale under the limitations as long as you want, provided, that we can modify or terminate the Community program at any time without notice or liability of any kind.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Pretty generous and a good fit for this implementation, with the caveat that all good things can come to an end, I guess. Cut yourself an ingestion token in the UI and save it to your favorite hiding place for secrets.&lt;br&gt;&lt;br&gt;&lt;strong&gt;Python Interceptor - irislogd2crwd.py&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I won't go over this amazing piece of software engineering in detail, but it is as simple as a Python script that accepts STDIN, breaks what it sees up into events, and ships them off to the SIEM platform to be ingested.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span class="hljs-comment"&gt;#!/usr/bin/env python&lt;/span&gt;
&lt;span class="hljs-keyword"&gt;import&lt;/span&gt; json
&lt;span class="hljs-keyword"&gt;import&lt;/span&gt; time
&lt;span class="hljs-keyword"&gt;import&lt;/span&gt; os
&lt;span class="hljs-keyword"&gt;import&lt;/span&gt; sys
&lt;span class="hljs-keyword"&gt;import&lt;/span&gt; requests
&lt;span class="hljs-keyword"&gt;import&lt;/span&gt; socket
&lt;span class="hljs-keyword"&gt;from&lt;/span&gt; datetime &lt;span class="hljs-keyword"&gt;import&lt;/span&gt; datetime
&lt;span class="hljs-keyword"&gt;from&lt;/span&gt; humiolib.HumioClient &lt;span class="hljs-keyword"&gt;import&lt;/span&gt; HumioIngestClient


input_list = sys.stdin.read().splitlines() &lt;span class="hljs-comment"&gt;# From ^LOGDMN Pipe!&lt;/span&gt;
&lt;span class="hljs-keyword"&gt;for&lt;/span&gt; irisevent &lt;span class="hljs-keyword"&gt;in&lt;/span&gt; input_list:
    &lt;span class="hljs-comment"&gt;# Required for CRWD Data Source&lt;/span&gt;
    today = datetime.now()
    fqdn = socket.getfqdn()

    payload = [
        {
            &lt;span class="hljs-string"&gt;"tags"&lt;/span&gt;: {
                &lt;span class="hljs-string"&gt;"host"&lt;/span&gt;: fqdn,
                &lt;span class="hljs-string"&gt;"source"&lt;/span&gt;: &lt;span class="hljs-string"&gt;"irislogd"&lt;/span&gt;
            },
                &lt;span class="hljs-string"&gt;"events"&lt;/span&gt;: [
                {
                    &lt;span class="hljs-string"&gt;"timestamp"&lt;/span&gt;: today.isoformat(sep=&lt;span class="hljs-string"&gt;'T'&lt;/span&gt;,timespec=&lt;span class="hljs-string"&gt;'auto'&lt;/span&gt;) + &lt;span class="hljs-string"&gt;"Z"&lt;/span&gt;,
                    &lt;span class="hljs-string"&gt;"attributes"&lt;/span&gt;: {&lt;span class="hljs-string"&gt;"irislogd"&lt;/span&gt;:json.loads(irisevent)} 
                }
            ]
        }
    ]

    client = HumioIngestClient(
        base_url= &lt;span class="hljs-string"&gt;"https://cloud.community.humio.com"&lt;/span&gt;,
        ingest_token= os.environ[&lt;span class="hljs-string"&gt;"CRWD_LOGSCALE_APIKEY"&lt;/span&gt;]
    )
    ingest_response = client.ingest_json_data(payload)

    
&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;p&gt;You will want to &lt;strong&gt;chmod +x&lt;/strong&gt; this script and put it where &lt;strong&gt;irisowner&lt;/strong&gt; can enjoy it.&lt;/p&gt;
&lt;/blockquote&gt;
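One refinement worth considering: the script above builds a client and issues one ingest request per event. Here is a hedged sketch of batching every line from a ^LOGDMN interval into a single payload instead; the sample events are invented, and the actual ingest call is left commented out:

```python
import json
import socket
from datetime import datetime, timezone

def build_payload(raw_lines):
    """Batch all ^LOGDMN lines into one LogScale ingest payload, so a single
    HTTP request covers the whole interval instead of one request per event."""
    now = datetime.now(timezone.utc).isoformat()
    fqdn = socket.getfqdn()
    events = [
        {"timestamp": now, "attributes": {"irislogd": json.loads(line)}}
        for line in raw_lines if line.strip()
    ]
    return [{"tags": {"host": fqdn, "source": "irislogd"}, "events": events}]

# Two invented ^LOGDMN JSON lines, standing in for sys.stdin:
sample = ['{"event": "Audit"}', '{"event": "LoginFailure"}']
payload = build_payload(sample)

# The batched payload then ships exactly as in the script above:
#   HumioIngestClient(...).ingest_json_data(payload)
```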

&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;strong&gt;InterSystems IRIS Structured Logging Setup&lt;/strong&gt;&lt;br&gt;&lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ALOG" rel="noopener noreferrer"&gt;Structured Logging in IRIS&lt;/a&gt; is documented to the 9's, so this will be a Cliff Notes version of the end state of configuring ^LOGDMN. The thing that caught my attention in the docs is probably the most unclear part of the implementation, but the most powerful and fun for sure.&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqogkjvf367fcrclgkb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqogkjvf367fcrclgkb7.png" alt=" " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After:&lt;br&gt;&lt;strong&gt;ENABLING&lt;/strong&gt; the Log Daemon, &lt;strong&gt;CONFIGURING&lt;/strong&gt; the Log Daemon and &lt;strong&gt;STARTING&lt;/strong&gt; Logging your configuration should look like this:&lt;br&gt; &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span class="hljs-built_in"&gt;%SYS&lt;/span&gt;&amp;gt;&lt;span class="hljs-keyword"&gt;Do&lt;/span&gt; &lt;span class="hljs-symbol"&gt;^LOGDMN&lt;/span&gt;
&lt;span class="hljs-number"&gt;1&lt;/span&gt;) Enable logging
&lt;span class="hljs-number"&gt;2&lt;/span&gt;) Disable logging
&lt;span class="hljs-number"&gt;3&lt;/span&gt;) Display configuration
&lt;span class="hljs-number"&gt;4&lt;/span&gt;) Edit configuration
&lt;span class="hljs-number"&gt;5&lt;/span&gt;) &lt;span class="hljs-keyword"&gt;Set&lt;/span&gt; default configuration
&lt;span class="hljs-number"&gt;6&lt;/span&gt;) Display logging status
&lt;span class="hljs-number"&gt;7&lt;/span&gt;) Start logging
&lt;span class="hljs-number"&gt;8&lt;/span&gt;) Stop logging
&lt;span class="hljs-number"&gt;9&lt;/span&gt;) Restart logging

LOGDMN option? &lt;span class="hljs-number"&gt;3&lt;/span&gt;
LOGDMN configuration

Minimum level: -&lt;span class="hljs-number"&gt;1&lt;/span&gt; (DEBUG)
 Pipe command: /tmp/irislogd2crwd.py
       Format: JSON
     Interval: &lt;span class="hljs-number"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;blockquote&gt;
&lt;pre&gt;/tmp/irislogd2crwd.py  # Location of our chmod +x Python Interceptor
JSON                   # Important&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we are logging somewhere else, let's just pump up the verbosity in the Audit Log and enable all the events, since somebody else is paying for it.&lt;br&gt;&lt;br&gt;Stealing from &lt;a class="mentioned-user" href="https://dev.to/sylvain"&gt;@sylvain&lt;/a&gt;.Guilbaud&lt;span&gt; 's &lt;a href="https://community.intersystems.com/post/how-activate-all-audit-system-events" rel="noopener noreferrer"&gt;post&lt;/a&gt;:&lt;/span&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0ii8d2x1fi65u4v2ahc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0ii8d2x1fi65u4v2ahc.png" alt=" " width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;strong&gt;CrowdStrike LogScale Event Processing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It won't take long to get the hang of, but the Search Console is the beginning of all good things when setting up customized observability based on your events. The search pane with filter criteria displays in the left corner, the available attributes in the left sidebar, and the matching events in the results pane in the main view.&lt;br&gt;&lt;br&gt;LogScale uses the LogScale &lt;em&gt;Query Language&lt;/em&gt; (&lt;a href="https://library.humio.com/data-analysis/syntax.html" rel="noopener noreferrer"&gt;LQL&lt;/a&gt;)&lt;span&gt; to back the widgets, alerts and actions.&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8vz7xje2v52zmgh7wyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8vz7xje2v52zmgh7wyz.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I suck at visualizations, so I am sure you could do better than this with a box of crayons, but here are my 4 widgets of glory to put a clown suit on the SIEM events for this post:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9fh286x7cg58rut35nn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9fh286x7cg58rut35nn.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we look under the hood of the "Event Types" widget, the following LQL one-liner is all that is needed behind the time series graph:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;timechart(irislogd.event)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;br&gt;So we did the thing!&lt;br&gt;&lt;br&gt;&lt;strong&gt;We've integrated IRIS with the Enterprise SIEM implementation&lt;/strong&gt; and the Security Team is "😀 "  &lt;br&gt;&lt;br&gt;The bonus is that the following can also be accomplished with the exact same development pattern as above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notifications&lt;/li&gt;
&lt;li&gt;Actions&lt;/li&gt;
&lt;li&gt;Scheduled Searches&lt;/li&gt;
&lt;li&gt;Scheduled Daily Reports&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>monitoring</category>
      <category>beginners</category>
      <category>security</category>
      <category>programming</category>
    </item>
    <item>
      <title>From FHIR Events to Explainable Agentic AI: Building a Clinical Follow‑Up Demo with InterSystems IRIS for Health</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Wed, 22 Apr 2026 16:50:36 +0000</pubDate>
      <link>https://dev.to/intersystems/from-fhir-events-to-explainable-agentic-ai-building-a-clinical-follow-up-demo-with-intersystems-325n</link>
      <guid>https://dev.to/intersystems/from-fhir-events-to-explainable-agentic-ai-building-a-clinical-follow-up-demo-with-intersystems-325n</guid>
      <description>&lt;p&gt;&lt;strong&gt;10:47 AM&lt;/strong&gt; — Jose Garcia's creatinine test results arrive at the hospital FHIR server.&lt;br&gt;
&lt;strong&gt;2.1 mg/dL&lt;/strong&gt; — a 35% increase from last month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens next?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Most systems:&lt;/strong&gt; ❌ The result sits in a queue until a clinician reviews it manually — hours or days later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;This system:&lt;/strong&gt; 👍 An AI agent evaluates the trend, consults clinical guidelines, and generates evidence-based recommendations — &lt;strong&gt;in seconds, automatically&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;No chatbot. No manual prompts. No black-box reasoning.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is &lt;strong&gt;event-driven clinical decision support&lt;/strong&gt; with full explainability:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlnxmq8i5cwmr06vj6mu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlnxmq8i5cwmr06vj6mu.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Triggered automatically&lt;/strong&gt; by FHIR events&lt;br&gt;
✅ &lt;strong&gt;Multi-agent reasoning&lt;/strong&gt; (context, guidelines, recommendations)&lt;br&gt;
✅ &lt;strong&gt;Complete audit trail&lt;/strong&gt; in SQL (every decision, every evidence source)&lt;br&gt;
✅ &lt;strong&gt;FHIR-native outputs&lt;/strong&gt; (DiagnosticReport published to server)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built with:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;InterSystems IRIS for Health&lt;/strong&gt; — Orchestration, FHIR, persistence, vector search&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CrewAI&lt;/strong&gt; — Multi-agent framework for structured reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You'll learn:&lt;/strong&gt; 🖋️ How to &lt;strong&gt;orchestrate agentic AI workflows&lt;/strong&gt; within production-grade interoperability systems — and why &lt;strong&gt;explainability&lt;/strong&gt; matters more than accuracy alone.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/43Vl7cU_uNY"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  🎬 What This Demo Produces
&lt;/h2&gt;

&lt;p&gt;When Jose's abnormal creatinine observation arrives, the system automatically generates:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;INPUT:&lt;/strong&gt; FHIR Observation (creatinine 2.1 mg/dL, status: HIGH)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt; FHIR DiagnosticReport containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Risk Level:&lt;/strong&gt; Medium-High (confidence: 85%)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recommendations:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;⚠️ Repeat creatinine in 7–14 days&lt;/li&gt;
&lt;li&gt;💊 Review nephrotoxic medications (currently on Ibuprofen)&lt;/li&gt;
&lt;li&gt;📊 Monitor renal function closely&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Evidence Used:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Patient context: CKD Stage 3 + progressive creatinine rise (&amp;gt;30%)&lt;/li&gt;
&lt;li&gt;Clinical guidelines: KDIGO section on AKI management in CKD&lt;/li&gt;
&lt;li&gt;Lab trend analysis: 1.6 → 1.9 → 2.1 mg/dL over 3 months&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AUDIT TRAIL:&lt;/strong&gt; Every decision, recommendation, and evidence citation persisted in SQL tables for compliance and review.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 What Problem Does This Solve?
&lt;/h2&gt;

&lt;p&gt;Most AI demos in healthcare focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chat interfaces for asking questions&lt;/li&gt;
&lt;li&gt;Unstructured text outputs&lt;/li&gt;
&lt;li&gt;Opaque reasoning ("trust the AI")&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In real clinical environments, what matters is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reacting to &lt;strong&gt;clinical events&lt;/strong&gt; automatically&lt;/li&gt;
&lt;li&gt;Understanding complete &lt;strong&gt;patient context&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Providing &lt;strong&gt;explainable recommendations&lt;/strong&gt; with evidence&lt;/li&gt;
&lt;li&gt;Persisting decisions for &lt;strong&gt;audit and compliance&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This demo answers a simple but realistic question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What happens when a new abnormal lab result arrives — and how can we automate the initial clinical assessment while maintaining transparency?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧪 Demo Scenario: CKD + Rising Creatinine
&lt;/h2&gt;

&lt;p&gt;The demo is based on a common healthcare use case:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patient: Jose Garcia (MRN-1000001)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Conditions:&lt;/strong&gt; Chronic Kidney Disease (CKD Stage 3), Hypertension&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medications:&lt;/strong&gt; Ibuprofen (NSAID), Lisinopril&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lab history:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;3 months ago: 1.6 mg/dL&lt;/li&gt;
&lt;li&gt;1 month ago: 1.9 mg/dL&lt;/li&gt;
&lt;li&gt;Today: &lt;strong&gt;2.1 mg/dL&lt;/strong&gt; ← triggers workflow&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;&amp;gt;30% progressive increase&lt;/strong&gt; requires clinical follow-up.&lt;/p&gt;

&lt;p&gt;Instead of waiting for manual review, the system automatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Detects the event&lt;/strong&gt; (FHIR Observation POST)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieves patient context&lt;/strong&gt; (conditions, medications, lab history)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consults clinical guidelines&lt;/strong&gt; via RAG (vector search)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performs agentic reasoning&lt;/strong&gt; across three specialized agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Produces explainable recommendations&lt;/strong&gt; with evidence citations&lt;/li&gt;
&lt;/ol&gt;
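The trigger condition at the heart of this scenario (a progressive creatinine rise of more than 30%) is easy to express in code. A minimal sketch, using the values from Jose's lab history; the "no decreases in between" rule is an assumption drawn from the narrative above:

```python
def needs_follow_up(history_mg_dl, threshold=0.30):
    """Flag a progressive rise: latest value up more than `threshold`
    (30% in this demo) from the oldest value, with no decreases between."""
    baseline, latest = history_mg_dl[0], history_mg_dl[-1]
    progressive = all(a <= b for a, b in zip(history_mg_dl, history_mg_dl[1:]))
    rise = (latest - baseline) / baseline
    return progressive and rise > threshold

# Jose Garcia's trend from the scenario: 1.6 -> 1.9 -> 2.1 mg/dL
print(needs_follow_up([1.6, 1.9, 2.1]))  # True: a 31.25% progressive rise
```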




&lt;h2&gt;
  
  
  ⏱️ From Event to Evidence: The Complete Journey
&lt;/h2&gt;

&lt;p&gt;Follow a single lab result through the system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FHIR Observation posted to IRIS server&lt;/li&gt;
&lt;li&gt;Interoperability Production triggered&lt;/li&gt;
&lt;li&gt;Context Agent queries patient history from FHIR&lt;/li&gt;
&lt;li&gt;Guidelines Agent searches vector database (clinical documents)&lt;/li&gt;
&lt;li&gt;Reasoning Agent synthesizes 3 recommendations&lt;/li&gt;
&lt;li&gt;Results persisted to SQL (&lt;code&gt;Cases&lt;/code&gt;, &lt;code&gt;CaseRecommendations&lt;/code&gt;, &lt;code&gt;CaseEvidences&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;FHIR DiagnosticReport published to server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete&lt;/strong&gt; — Full audit trail available for review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From event to actionable recommendations.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key Principle
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;InterSystems IRIS for Health is the orchestrator and system of record.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI agents are external capabilities that are governed, triggered, and integrated by the IRIS platform. IRIS owns the data, the workflow, and the audit trail — the agents provide specialized reasoning.&lt;/p&gt;

&lt;h3&gt;
  
  
  High-Level Flow
&lt;/h3&gt;

&lt;p&gt;Key steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FHIR Observation&lt;/strong&gt; → POSTed to IRIS FHIR server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interaction Strategy&lt;/strong&gt; → Detects clinical event&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interoperability Production&lt;/strong&gt; → Orchestrates workflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Operation&lt;/strong&gt; → Calls Agentic AI REST service (FastAPI)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agents Execute&lt;/strong&gt; → Context retrieval, guideline search, reasoning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results Return&lt;/strong&gt; → Structured JSON back to IRIS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence&lt;/strong&gt; → SQL tables store cases, recommendations, evidence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publishing&lt;/strong&gt; → FHIR DiagnosticReport created and stored&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Visual Components
&lt;/h3&gt;

&lt;p&gt;The demo includes a &lt;strong&gt;Gradio web UI&lt;/strong&gt; for interactive demonstration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Post lab values and trigger the workflow&lt;/li&gt;
&lt;li&gt;Watch real-time agent progress&lt;/li&gt;
&lt;li&gt;View recommendations and evidence citations&lt;/li&gt;
&lt;li&gt;Query SQL audit tables&lt;/li&gt;
&lt;li&gt;Access IRIS Production message viewer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes the complete flow visible and understandable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤖 Why CrewAI? Understanding Multi-Agent Architecture
&lt;/h2&gt;

&lt;p&gt;CrewAI is a &lt;strong&gt;multi-agent orchestration framework&lt;/strong&gt; that enables specialized AI agents to collaborate on complex tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In this demo, three agents work sequentially:&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Context Agent
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Gather patient clinical history from FHIR server&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetch patient demographics and conditions&lt;/li&gt;
&lt;li&gt;Retrieve historical lab results (creatinine trends)&lt;/li&gt;
&lt;li&gt;Collect active medications&lt;/li&gt;
&lt;li&gt;Identify risk factors (NSAID use + CKD)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt; Structured patient context for reasoning&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Guidelines Agent
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Search clinical knowledge base using RAG (Retrieval-Augmented Generation)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Query IRIS vector database with semantic search&lt;/li&gt;
&lt;li&gt;Find relevant guideline sections (clinical protocols, etc.)&lt;/li&gt;
&lt;li&gt;Retrieve evidence chunks with similarity scores&lt;/li&gt;
&lt;li&gt;Provide citations for recommendations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt; Evidence-based clinical guidance&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Reasoning Agent
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Synthesize recommendations from context + guidelines&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyze lab trends (&amp;gt;30% increase = significant)&lt;/li&gt;
&lt;li&gt;Identify risk factors (CKD + NSAID + progressive rise)&lt;/li&gt;
&lt;li&gt;Apply clinical decision rules&lt;/li&gt;
&lt;li&gt;Generate structured recommendations with confidence levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt; Risk assessment + actionable follow-up plan&lt;/p&gt;




&lt;h3&gt;
  
  
  Why Multi-Agent Instead of Single LLM Call?
&lt;/h3&gt;

&lt;p&gt;Agentic workflows provide:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Better structured reasoning&lt;/strong&gt; — Each agent has a focused responsibility&lt;br&gt;
✅ &lt;strong&gt;Tool use&lt;/strong&gt; — Agents can query FHIR, search vector databases, analyze trends&lt;br&gt;
✅ &lt;strong&gt;Explainable decision chains&lt;/strong&gt; — Each step is traceable&lt;br&gt;
✅ &lt;strong&gt;Separation of concerns&lt;/strong&gt; — Context ≠ Guidelines ≠ Reasoning&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical:&lt;/strong&gt; IRIS orchestrates the agents — CrewAI is used as a library, not the platform. IRIS owns persistence, orchestration, FHIR integration, and audit trails.&lt;/p&gt;


&lt;h2&gt;
  
  
  🔄 Interoperability Production
&lt;/h2&gt;

&lt;p&gt;The workflow is managed by three IRIS components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Business Service&lt;/strong&gt; (&lt;code&gt;FHIRObservationIn&lt;/code&gt;)&lt;br&gt;
Triggered automatically when FHIR Observation is POSTed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Business Process&lt;/strong&gt; (&lt;code&gt;FollowUpAI&lt;/code&gt;)&lt;br&gt;
Orchestrates three-step workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Call agent service&lt;/li&gt;
&lt;li&gt;Persist results to SQL&lt;/li&gt;
&lt;li&gt;Publish DiagnosticReport&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Business Operations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ClinicalAgenticOperation&lt;/code&gt; → REST call to FastAPI/CrewAI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ClinicalAiPersistence&lt;/code&gt; → SQL table writes&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ClinicalReportPublisher&lt;/code&gt; → FHIR DiagnosticReport POST&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
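To make the handoff concrete, here is a hedged sketch of the structured JSON that <code>ClinicalAgenticOperation</code> might receive back from the FastAPI/CrewAI service. The field names mirror the SQL columns the demo persists (CaseId, RiskLevel, Confidence, GuidelineId, Similarity), but the exact schema is an assumption:

```python
import json

# Hedged sketch of the agent service's response body; field names follow
# the Cases / CaseRecommendations / CaseEvidences tables, exact shape assumed.
agent_result = {
    "CaseId": "b344f121-db68-4cd6-8877-1855c3d547ff",
    "PatientRef": "Patient/1",
    "RiskLevel": "medium-high",
    "Confidence": "high",
    "ReasoningSummary": "Progressive creatinine rise >30% in a CKD stage 3 patient.",
    "Recommendations": [
        {"ActionType": "repeat-test", "Description": "Repeat creatinine in 7-14 days"},
        {"ActionType": "medication-review", "Description": "Review nephrotoxic medications"},
    ],
    "Evidences": [
        {"GuidelineId": "ckd_creatinine_guideline_demo", "Similarity": 0.66},
    ],
}

# IRIS receives this as the REST response body and fans it out to the
# three SQL tables plus the DiagnosticReport publisher.
body = json.dumps(agent_result)
```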


&lt;h2&gt;
  
  
  🔍 Explainability: Proving the AI's Reasoning
&lt;/h2&gt;

&lt;p&gt;One of the most critical aspects of clinical AI is &lt;strong&gt;proving why a recommendation was made&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;IRIS persists everything in a &lt;strong&gt;minimal, queryable SQL model&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Cases&lt;/code&gt;&lt;/strong&gt; — What happened (patient, observation, risk level, confidence)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;CaseRecommendations&lt;/code&gt;&lt;/strong&gt; — What to do (action type, description, timeframe)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;CaseEvidences&lt;/code&gt;&lt;/strong&gt; — Why (guideline citations, similarity scores, text excerpts)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Example Queries
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"What cases were evaluated today?"&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;CaseId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;PatientRef&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;RiskLevel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;Confidence&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;ReasoningSummary&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;clinicalai_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Cases&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;CreatedAt&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="k"&gt;CURRENT_DATE&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;CreatedAt&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CaseId: CSE-20260108-001
PatientRef: Patient/1 (Jose Garcia)
RiskLevel: medium-high
Confidence: high
ReasoningSummary: The patient with stage 3 chronic kidney disease and hypertension demonstrates a sustained and progressive increase in serum creatinine over 90 days...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;"Why did the agent recommend nephrotoxic medication review?"&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GuidelineId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Similarity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Excerpt&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;clinicalai_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CaseEvidences&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CaseId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'b344f121-db68-4cd6-8877-1855c3d547ff'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Similarity&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GuidelineId: ckd_creatinine_guideline_demo
Similarity: 0.66
Excerpt: "Recommended actions include repeat serum creatinine testing within 7–14 days, review of current medications for nephrotoxicity, assessment of contributing factors, and close monitoring of renal function."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Every recommendation has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The clinical context used&lt;/li&gt;
&lt;li&gt;The guidelines consulted&lt;/li&gt;
&lt;li&gt;The similarity scores showing relevance&lt;/li&gt;
&lt;li&gt;The reasoning chain from data to decision&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can answer "Why did the AI recommend this?" with SQL queries and evidence citations.&lt;/p&gt;




&lt;h2&gt;
  
  
  🩺 Publishing Results as FHIR DiagnosticReport
&lt;/h2&gt;

&lt;p&gt;The final step closes the loop: AI outputs become part of the &lt;strong&gt;clinical record&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The system publishes a &lt;strong&gt;FHIR DiagnosticReport&lt;/strong&gt; containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Subject:&lt;/strong&gt; Patient reference (Jose Garcia)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; Link to triggering Observation (creatinine 2.1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conclusion:&lt;/strong&gt; Risk level + reasoning summary&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PresentedForm:&lt;/strong&gt; Human-readable recommendations (Base64-encoded)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensions:&lt;/strong&gt; Case ID, confidence score, model metadata&lt;/li&gt;
&lt;/ul&gt;
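&lt;p&gt;The fields above can be sketched as a minimal FHIR R4 resource in plain Python. This is an illustrative assembly only — the extension URL is a hypothetical placeholder, not something defined by the demo repository — but it shows how the recommendations end up Base64-encoded inside &lt;code&gt;presentedForm&lt;/code&gt;:&lt;/p&gt;

```python
import base64
import json

def build_diagnostic_report(patient_ref: str, observation_ref: str,
                            conclusion: str, recommendations: str,
                            case_id: str) -> dict:
    """Assemble a minimal FHIR R4 DiagnosticReport carrying the AI output.

    Sketch only: the extension URL below is a hypothetical placeholder,
    not part of the demo repository.
    """
    return {
        "resourceType": "DiagnosticReport",
        "status": "final",
        "subject": {"reference": patient_ref},
        "result": [{"reference": observation_ref}],
        "conclusion": conclusion,
        "presentedForm": [{
            "contentType": "text/plain",
            # FHIR attachments carry their payload Base64-encoded
            "data": base64.b64encode(recommendations.encode()).decode(),
        }],
        "extension": [{
            # hypothetical URL for illustration
            "url": "http://example.org/fhir/StructureDefinition/ai-case-id",
            "valueString": case_id,
        }],
    }

report = build_diagnostic_report(
    "Patient/jose-garcia", "Observation/14",
    "HIGH risk: worsening renal function on CKD background",
    "Repeat serum creatinine within 7-14 days.",
    "b344f121-db68-4cd6-8877-1855c3d547ff",
)
print(json.dumps(report, indent=2))
```

&lt;p&gt;Because the payload is a standard attachment, any FHIR client can decode &lt;code&gt;presentedForm[0].data&lt;/code&gt; back into the human-readable recommendations.&lt;/p&gt;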

&lt;p&gt;This makes the AI output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interoperable&lt;/strong&gt; — Standard FHIR resource&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumable&lt;/strong&gt; — Accessible via FHIR API by EHRs, portals, apps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditable&lt;/strong&gt; — Part of the permanent clinical record&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Queryable&lt;/strong&gt; — &lt;code&gt;GET /DiagnosticReport?result=Observation/14&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The DiagnosticReport is not a separate "AI system output" — it's a &lt;strong&gt;first-class clinical document&lt;/strong&gt; that follows the same standards as lab reports and radiology findings.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Try It Yourself
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Quick Start (15 minutes):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone the repository&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git clone https://github.com/intersystems-ib/iris-health-fhir-agentic-demo
   &lt;span class="nb"&gt;cd &lt;/span&gt;iris-health-fhir-agentic-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Start IRIS container&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load sample patient data&lt;/strong&gt; (Jose Garcia with CKD history)&lt;br&gt;
Follow the &lt;a href="https://github.com/intersystems-ib/iris-health-fhir-agentic-demo#5-%F0%9F%91%A4-load-sample-fhir-data" rel="noopener noreferrer"&gt;README setup instructions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run the Gradio UI&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   python run_ui.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then open your browser to &lt;code&gt;http://localhost:7860&lt;/code&gt;.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;POST an abnormal lab value&lt;/strong&gt; and watch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time agent progress&lt;/li&gt;
&lt;li&gt;Evidence retrieval from vector database&lt;/li&gt;
&lt;li&gt;Recommendations generated with confidence scores&lt;/li&gt;
&lt;li&gt;SQL audit trail queries&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Query the results&lt;/strong&gt; using IRIS SQL Explorer or Management Portal&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;💬 Questions or feedback?&lt;/strong&gt; Reply to this post — I'd love to hear about your use cases.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 What You've Learned
&lt;/h2&gt;

&lt;p&gt;If you've followed along, you now understand how to:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Trigger AI workflows from FHIR events&lt;/strong&gt; — No manual initiation required&lt;br&gt;
✅ &lt;strong&gt;Orchestrate multi-agent systems with CrewAI&lt;/strong&gt; — Context, Guidelines, Reasoning agents&lt;br&gt;
✅ &lt;strong&gt;Build explainable AI with SQL audit trails&lt;/strong&gt; — Every decision is traceable&lt;br&gt;
✅ &lt;strong&gt;Publish AI outputs as FHIR resources&lt;/strong&gt; — Interoperable clinical documents&lt;br&gt;
✅ &lt;strong&gt;Integrate agentic AI with IRIS Interoperability&lt;/strong&gt; — Production-grade orchestration&lt;/p&gt;




&lt;h2&gt;
  
  
  🔮 Beyond Lab Results: What Else Can You Automate?
&lt;/h2&gt;

&lt;p&gt;This pattern applies to many clinical scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Medication reconciliation alerts&lt;/strong&gt; — Detect drug-drug interactions or contraindications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Care gap identification&lt;/strong&gt; — Missing screenings based on age, conditions, guidelines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk stratification triggers&lt;/strong&gt; — Identify high-risk patients for intervention&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clinical trial matching&lt;/strong&gt; — Find eligible patients based on inclusion criteria&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture is the same: &lt;strong&gt;event → context → evidence → reasoning → action&lt;/strong&gt;.&lt;/p&gt;
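&lt;p&gt;The pattern above can be sketched as a tiny pipeline of stage functions. This is illustrative only — the stage names mirror the pattern, not actual classes in the demo repository — but it shows how each clinical scenario reuses the same skeleton with different stage implementations:&lt;/p&gt;

```python
from typing import Callable

# Each stage is a function that enriches a shared state dict.
# Names are illustrative, not classes from the demo repo.
Stage = Callable[[dict], dict]

def run_pipeline(event: dict, stages: list[Stage]) -> dict:
    state = {"event": event}
    for stage in stages:
        state = stage(state)
    return state

def context(state: dict) -> dict:
    # Fetch patient context for the triggering event
    state["context"] = {"patient": state["event"]["patient"]}
    return state

def evidence(state: dict) -> dict:
    # Retrieve guideline evidence (vector search in IRIS, in the real demo)
    state["evidence"] = ["ckd_creatinine_guideline_demo"]
    return state

def reasoning(state: dict) -> dict:
    # Reason over context + evidence to produce an action
    state["action"] = "flag for clinician review"
    return state

result = run_pipeline(
    {"patient": "jose-garcia", "code": "creatinine"},
    [context, evidence, reasoning],
)
print(result["action"])  # -> flag for clinician review
```

&lt;p&gt;Swapping in a different stage list — drug-interaction checks, care-gap rules, trial-matching criteria — changes the scenario without changing the skeleton.&lt;/p&gt;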




&lt;h2&gt;
  
  
  🚀 Conclusion
&lt;/h2&gt;

&lt;p&gt;This demo shows how &lt;strong&gt;Agentic AI&lt;/strong&gt; can be safely and effectively integrated into &lt;strong&gt;real clinical workflows&lt;/strong&gt; using &lt;strong&gt;InterSystems IRIS for Health&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven interoperability&lt;/strong&gt; — React to clinical events automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic reasoning&lt;/strong&gt; — Multi-agent collaboration with tool use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL persistence&lt;/strong&gt; — Full audit trails for compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FHIR-native outputs&lt;/strong&gt; — Standard clinical documents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We move from AI experiments to &lt;strong&gt;platform-grade clinical AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⭐ &lt;strong&gt;Star the repo:&lt;/strong&gt; &lt;a href="https://github.com/intersystems-ib/iris-health-fhir-agentic-demo" rel="noopener noreferrer"&gt;https://github.com/intersystems-ib/iris-health-fhir-agentic-demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧪 &lt;strong&gt;Try the demo&lt;/strong&gt; with your own clinical guidelines&lt;/p&gt;

&lt;p&gt;💬 &lt;strong&gt;Share your use case&lt;/strong&gt; — What clinical event would you automate first?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>vectordatabase</category>
      <category>sql</category>
    </item>
    <item>
      <title>Dify Now Supports IRIS as a Vector Store — Setup Guide</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Tue, 21 Apr 2026 14:18:22 +0000</pubDate>
      <link>https://dev.to/intersystems/dify-now-supports-iris-as-a-vector-store-setup-guide-396l</link>
      <guid>https://dev.to/intersystems/dify-now-supports-iris-as-a-vector-store-setup-guide-396l</guid>
      <description>&lt;h2&gt;Why This Integration Matters&lt;/h2&gt;
&lt;p&gt;InterSystems continues to push AI capabilities forward natively in IRIS — vector search, MCP support, and Agentic AI capabilities. That roadmap is important, and there is no intention of stepping back from it.&lt;/p&gt;
&lt;p&gt;But the AI landscape is also evolving in a way that makes ecosystem integration increasingly essential. Tools like &lt;strong&gt;Dify&lt;/strong&gt; — an open-source, production-grade LLM orchestration platform — have become a serious part of enterprise AI stacks. In Japan in particular, Dify adoption is no longer just for startups or hobbyists; it has reached large enterprises, with employees using it as the backbone of internal AI workflows. Meeting developers and teams where they already are is as valuable as building new capabilities in isolation.&lt;/p&gt;
&lt;p&gt;That's the motivation behind this integration: IRIS handles what it does best — reliable, queryable, SQL-accessible data with built-in processing logic — while Dify handles LLM orchestration, RAG pipelines, and agentic workflows. Together, they form a stack where IRIS users don't have to choose between their data infrastructure and the AI tools gaining momentum around them.&lt;/p&gt;
&lt;p&gt;This integration was contributed to Dify as an OSS pull request and merged in Dify v1.11.2 (&lt;a href="https://github.com/langgenius/dify/pull/29480" rel="noopener noreferrer"&gt;#29480&lt;/a&gt;). Several follow-up fixes have been merged since — covered below. This article walks through the setup.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;First, I'd like to thank &lt;span&gt;&lt;strong&gt;&lt;span&gt;&lt;a class="mentioned-user" href="https://dev.to/megumi"&gt;@megumi&lt;/a&gt;.Kakechi&lt;/span&gt;&lt;/strong&gt;&lt;/span&gt; for encouraging me to post this on the English Developer Community — this would have remained a Japanese-only article without that push. I'd also like to extend my gratitude to &lt;span&gt;&lt;strong&gt;&lt;span&gt;&lt;a class="mentioned-user" href="https://dev.to/tomohiro"&gt;@tomohiro&lt;/a&gt;.Iwamoto&lt;/span&gt;&lt;/strong&gt;&lt;/span&gt; and &lt;span&gt;&lt;strong&gt;&lt;span&gt;@Mihoko.Iijima&lt;/span&gt;&lt;/strong&gt;&lt;/span&gt;, who took the time to test this integration hands-on and provided invaluable feedback that helped shape the fixes included in later releases.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;OSS Background:&lt;/strong&gt; If you're curious about the contribution journey itself, I wrote about it on Zenn: &lt;a href="https://zenn.dev/tomookuyama/articles/214abc2cb73212" rel="noopener noreferrer"&gt;「このDB、もっと知られてもいいのでは？」からOSSコントリビュートに至った話&lt;/a&gt; (Japanese)&lt;/p&gt;&lt;/blockquote&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Docker / Docker Compose&lt;/td&gt;
&lt;td&gt;Any recent version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Git&lt;/td&gt;
&lt;td&gt;For cloning the Dify repository&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Setup&lt;/h2&gt;
&lt;h3&gt;1. Clone the Dify repository&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git clone https://github.com/langgenius/dify.git&lt;br&gt;
&amp;gt; cd dify/docker&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2. Prepare the environment file&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; cp .env.example .env&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;3. Enable IRIS as the vector store&lt;/h3&gt;
&lt;p&gt;Open &lt;code&gt;.env&lt;/code&gt; and change one line:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Before (default)&lt;br&gt;
VECTOR_STORE=weaviate&lt;br&gt;
&lt;br&gt;
# After&lt;br&gt;
VECTOR_STORE=iris&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That's the minimum. IRIS connection defaults are pre-configured for local use. To connect to an existing IRIS instance or customize the setup, the relevant parameters are:&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_HOST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;iris&lt;/td&gt;
&lt;td&gt;Container service name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_SUPER_SERVER_PORT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1972&lt;/td&gt;
&lt;td&gt;SuperServer port&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_WEB_SERVER_PORT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;52773&lt;/td&gt;
&lt;td&gt;Management Portal port&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_USER&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;_SYSTEM&lt;/td&gt;
&lt;td&gt;Login username&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_PASSWORD&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Dify@1234&lt;/td&gt;
&lt;td&gt;Set automatically on first launch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_DATABASE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;USER&lt;/td&gt;
&lt;td&gt;Target namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_SCHEMA&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;dify&lt;/td&gt;
&lt;td&gt;SQL schema for Dify tables&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_TEXT_INDEX&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;td&gt;Enable full-text index&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_TEXT_INDEX_LANGUAGE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;en&lt;/td&gt;
&lt;td&gt;Language for text indexing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_MIN_CONNECTION&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Connection pool minimum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_MAX_CONNECTION&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Connection pool maximum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;IRIS_TIMEZONE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;UTC&lt;/td&gt;
&lt;td&gt;Timezone setting&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
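&lt;p&gt;For example, pointing Dify at an existing IRIS instance might look like this in &lt;code&gt;.env&lt;/code&gt; — the hostname and password below are placeholders, not values from any real deployment:&lt;/p&gt;

```shell
# Hypothetical values for an existing IRIS instance -- substitute your own.
VECTOR_STORE=iris
IRIS_HOST=iris.internal.example.com
IRIS_SUPER_SERVER_PORT=1972
IRIS_USER=_SYSTEM
IRIS_PASSWORD=YourStrongPassword
IRIS_DATABASE=USER
IRIS_SCHEMA=dify
```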
&lt;h3&gt;4. Start the containers&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; docker compose up -d&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;5. Confirm all containers are running&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; docker compose ps&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Look for the &lt;code&gt;iris&lt;/code&gt; container with a &lt;code&gt;STATUS&lt;/code&gt; of &lt;code&gt;Up&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; docker compose ps --format "table {{.Name}}\t{{.Service}}\t{{.Status}}"&lt;br&gt;
NAME                     SERVICE         STATUS&lt;br&gt;
docker-api-1             api             Up&lt;br&gt;
docker-db_postgres-1     db_postgres     Up (healthy)&lt;br&gt;
docker-nginx-1           nginx           Up&lt;br&gt;
docker-plugin_daemon-1   plugin_daemon   Up&lt;br&gt;
docker-redis-1           redis           Up (healthy)&lt;br&gt;
docker-sandbox-1         sandbox         Up (healthy)&lt;br&gt;
docker-ssrf_proxy-1      ssrf_proxy      Up&lt;br&gt;
docker-web-1             web             Up&lt;br&gt;
docker-worker-1          worker          Up&lt;br&gt;
docker-worker_beat-1     worker_beat     Up&lt;br&gt;
iris                     iris            Up   &amp;lt;-- this one&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Access Dify&lt;/h2&gt;
&lt;p&gt;Navigate to &lt;code&gt;&lt;a href="http://localhost/" rel="noopener noreferrer"&gt;http://localhost/&lt;/a&gt;&lt;/code&gt; in your browser. On first launch you'll be prompted to create an admin account.&lt;/p&gt;

&lt;h2&gt;Verify IRIS is Storing Your Vectors&lt;/h2&gt;
&lt;h3&gt;Step 1 — Create a Knowledge Base&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Log in to Dify&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Knowledge&lt;/strong&gt; → &lt;strong&gt;Create Knowledge&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Upload a text file or PDF&lt;/li&gt;
&lt;li&gt;In Step 2, under &lt;strong&gt;Index Method&lt;/strong&gt;, select &lt;strong&gt;"High Quality"&lt;/strong&gt; (recommended)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlf4cj9ao0q2hbp1woqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlf4cj9ao0q2hbp1woqy.png" alt=" " width="800" height="912"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If no Embedding Model has been configured yet, the dropdown will show "No model found." Click &lt;strong&gt;"Model Provider Settings"&lt;/strong&gt; at the bottom of the dropdown to proceed.&lt;/p&gt;
&lt;h3&gt;Step 2 — Set Up a Model Provider (OpenAI)&lt;/h3&gt;
&lt;p&gt;The Model Provider screen lists available providers. Find &lt;strong&gt;OpenAI&lt;/strong&gt; and click &lt;strong&gt;Install&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u6ls59sz97m7erboyj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u6ls59sz97m7erboyj4.png" alt=" " width="800" height="650"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note on cost:&lt;/strong&gt; OpenAI requires an API key separate from a ChatGPT Plus subscription — you'll need to add credits to your OpenAI API account. Embedding costs are extremely low, however; a few dollars will go a long way. If you'd prefer a free alternative, local models via &lt;strong&gt;LM Studio&lt;/strong&gt; or &lt;strong&gt;Ollama&lt;/strong&gt; (OpenAI-API-compatible) are also supported.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;After installation, OpenAI appears under &lt;strong&gt;"To be configured"&lt;/strong&gt;. Click &lt;strong&gt;Setup&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqi4hw0rq81fljs3po793.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqi4hw0rq81fljs3po793.png" alt=" " width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;API Key Authorization Configuration&lt;/strong&gt; dialog opens. If you don't have an API key yet, click &lt;strong&gt;"Get your API Key from OpenAI"&lt;/strong&gt; to open the OpenAI API keys page directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F740f24thl60ami9kergd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F740f24thl60ami9kergd.png" alt=" " width="800" height="659"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter a name (e.g. &lt;code&gt;dify&lt;/code&gt;), paste your API key, then click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99qze6cofi5yvgog22wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99qze6cofi5yvgog22wl.png" alt=" " width="800" height="659"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;h3&gt;Step 3 — Select the Embedding Model&lt;/h3&gt;
&lt;p&gt;Return to the Knowledge creation screen. The &lt;strong&gt;Embedding Model&lt;/strong&gt; dropdown now lists available OpenAI models. Select &lt;code&gt;text-embedding-3-small&lt;/code&gt; — it offers an excellent balance of cost and retrieval quality for most use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fby6madpme774ncrznrz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fby6madpme774ncrznrz4.png" alt=" " width="800" height="881"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Save &amp;amp; Process&lt;/strong&gt;. When a green checkmark appears next to your document, embedding is complete — your document chunks are now stored as vectors in IRIS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ak4jg5quwnjhwpd6ijl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ak4jg5quwnjhwpd6ijl.png" alt=" " width="800" height="607"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;h3&gt;Step 4 — Inspect the Data via Management Portal&lt;/h3&gt;
&lt;p&gt;This is where IRIS users have a distinct advantage. Open the Management Portal:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;http://localhost:52773/csp/sys/UtilHome.csp?$NAMESPACE=USER&lt;/code&gt;&lt;/pre&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Username&lt;/td&gt;
&lt;td&gt;&lt;code&gt;_SYSTEM&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Dify@1234&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;Navigate to &lt;strong&gt;System Explorer → SQL&lt;/strong&gt;, set the schema filter to &lt;code&gt;dify&lt;/code&gt;, and the tables Dify created will appear. Open one, and you'll see your document chunks — including the raw vector embeddings — stored exactly as you'd expect from any IRIS table.&lt;/p&gt;
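&lt;p&gt;From there, ordinary IRIS SQL works against the vector data. The sketch below is illustrative only — the table name Dify generates is per knowledge base, and the column names here are assumptions, so check the actual names under the &lt;code&gt;dify&lt;/code&gt; schema in your instance. &lt;code&gt;VECTOR_COSINE&lt;/code&gt; and &lt;code&gt;TO_VECTOR&lt;/code&gt; are built-in IRIS SQL functions:&lt;/p&gt;

```sql
-- Sketch only: substitute the actual table under the dify schema;
-- the "text" and "vector" column names are assumptions.
SELECT TOP 5
       id,
       text,
       VECTOR_COSINE(vector, TO_VECTOR(:query_embedding, FLOAT)) AS score
FROM dify.your_embedding_table
ORDER BY score DESC
```

&lt;p&gt;The same query can be joined against operational tables already in the namespace — the advantage the next note describes.&lt;/p&gt;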

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlxzcyc32rbrubyfzc9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlxzcyc32rbrubyfzc9x.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qd6oy4lyhaoz3qqbhj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qd6oy4lyhaoz3qqbhj6.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;The IRIS Advantage:&lt;/strong&gt; With most vector stores, embedding data is opaque — accessible only through a proprietary API. With IRIS, you have full SQL access. Query vector data directly, join it against operational data already in your namespace, inspect what's been indexed, or build custom retrieval logic on top of it. That's a meaningful capability for teams already invested in the IRIS ecosystem.&lt;/p&gt;&lt;/blockquote&gt;

&lt;h2&gt;What's Been Fixed Since the Initial Release&lt;/h2&gt;
&lt;p&gt;The initial integration shipped in v1.11.2 with some rough edges. All known issues have since been resolved and merged into the official Dify codebase.&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;br&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;br&gt;
&lt;th&gt;PR&lt;/th&gt;
&lt;br&gt;
&lt;th&gt;Release&lt;/th&gt;
&lt;br&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;br&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;br&gt;
&lt;tbody&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://github.com/langgenius/dify/pull/29480" rel="noopener noreferrer"&gt;#29480&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;v1.11.2&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Initial IRIS vector store support&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://github.com/langgenius/dify/pull/31309" rel="noopener noreferrer"&gt;#31309&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;post-v1.11.2&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Fix full-text search and hybrid search for IRIS backend&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://github.com/langgenius/dify/pull/31899" rel="noopener noreferrer"&gt;#31899&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;v1.12.1&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Fix IRIS data persistence across container recreation using Durable %SYS&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://github.com/langgenius/dify/pull/31901" rel="noopener noreferrer"&gt;#31901&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;v1.12.1&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Further improvements to Durable %SYS data persistence&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;/tbody&gt;
&lt;br&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note on Durable %SYS:&lt;/strong&gt; Without the fixes in &lt;a href="https://github.com/langgenius/dify/pull/31899" rel="noopener noreferrer"&gt;#31899&lt;/a&gt; and &lt;a href="https://github.com/langgenius/dify/pull/31901" rel="noopener noreferrer"&gt;#31901&lt;/a&gt;, IRIS data could be lost on container recreation — a common issue with the Community Edition Docker image. IRIS developers will recognize Durable %SYS immediately; these fixes ensure the Docker setup respects it correctly.&lt;/p&gt;&lt;/blockquote&gt;

&lt;h2&gt;Troubleshooting&lt;/h2&gt;
&lt;h3&gt;IRIS container fails to start on Windows&lt;/h3&gt;
&lt;p&gt;On Windows, the IRIS container may fail to start due to volume directory permission issues. Run the following from the &lt;code&gt;docker&lt;/code&gt; directory &lt;em&gt;before&lt;/em&gt; starting the containers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;chmod -R 777 ./volumes/iris&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is a known limitation on Windows host environments. A fix to eliminate this step is planned for a future release.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h3&gt;IRIS container fails to start (high core count)&lt;/h3&gt;
&lt;p&gt;On machines with high core counts, IRIS Community Edition's 20-core limit may be triggered. Add the following to the &lt;code&gt;iris&lt;/code&gt; service in &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;services:&lt;br&gt;
  iris:&lt;br&gt;
    cpuset: "0-19"&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Summary&lt;/h2&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;br&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;br&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;br&gt;
&lt;th&gt;How&lt;/th&gt;
&lt;br&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;br&gt;
&lt;tbody&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;Enable IRIS in Dify&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Set &lt;code&gt;VECTOR_STORE=iris&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt;&lt;br&gt;
&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;Verify stored vectors&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Management Portal → SQL → schema: &lt;code&gt;dify&lt;/code&gt;&lt;br&gt;
&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;Data persistence&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Handled by Durable %SYS (fixed in v1.12.1)&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;Community Edition&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;Free · 10 GB data · Up to 20 cores&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;/tbody&gt;
&lt;br&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;A follow-up post will walk through building a full RAG chatbot on this stack. Questions or issues — drop them in the comments below.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;
&lt;ul&gt;

&lt;li&gt;&lt;a href="https://docs.dify.ai" rel="noopener noreferrer"&gt;Dify Documentation&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://github.com/langgenius/dify" rel="noopener noreferrer"&gt;Dify GitHub&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://hub.docker.com/r/intersystemsdc/iris-community" rel="noopener noreferrer"&gt;IRIS Community Edition — Docker Hub&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://community.intersystems.com" rel="noopener noreferrer"&gt;InterSystems Developer Community&lt;/a&gt;&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>ai</category>
      <category>chatgpt</category>
      <category>python</category>
    </item>
    <item>
      <title>Step-by-Step Guide: Setting Up RAG for Gen AI Agents Using IRIS Vector DB in Python</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Sun, 29 Mar 2026 12:14:04 +0000</pubDate>
      <link>https://dev.to/intersystems/step-by-step-guide-setting-up-rag-for-gen-ai-agents-using-iris-vector-db-in-python-3b00</link>
      <guid>https://dev.to/intersystems/step-by-step-guide-setting-up-rag-for-gen-ai-agents-using-iris-vector-db-in-python-3b00</guid>
      <description>&lt;p&gt;&lt;span&gt;&lt;strong&gt;How to set up RAG for OpenAI agents using IRIS Vector DB in Python&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;In this article, I’ll walk you through an example of using InterSystems IRIS Vector DB to store embeddings and integrate them with an OpenAI agent.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;To demonstrate this, we’ll create an OpenAI agent with knowledge of InterSystems technology. We’ll achieve this by storing embeddings of some InterSystems documentation in IRIS and then using IRIS vector search to retrieve relevant content—enabling a Retrieval-Augmented Generation (RAG) workflow.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Note: &lt;/span&gt;&lt;span&gt;Section 1 details how to process text into embeddings. If you are only interested in IRIS vector search, you can skip ahead to Section 2.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Section 1: Embedding Data&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Your embeddings are only as good as your data! To get the best results, you should prepare your data carefully. This may include:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;&lt;span&gt;Cleaning the text (removing special characters or excess whitespace)&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;&lt;span&gt;Chunking the data into smaller pieces&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;&lt;span&gt;Other preprocessing techniques&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;For this example, the documentation is stored in simple text files that require minimal cleaning. However, we will divide the text into chunks to enable more efficient and accurate RAG.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Step 1: Chunking Text Files&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Chunking text into manageable pieces benefits RAG systems in two ways:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li value="1"&gt;&lt;span&gt;&lt;span&gt;More accurate retrieval – embeddings represent smaller, more specific sections of text.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;&lt;span&gt;More efficient retrieval – less text per query reduces cost and improves performance.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
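&lt;p&gt;&lt;span&gt;&lt;span&gt;To build intuition for how overlap preserves context, a naive sliding-window chunker over a token list can be sketched in a few lines (illustrative only; the pipeline below uses langchain's boundary-aware splitter):&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;

```python
def window_chunks(tokens: list[str], chunk_size: int, overlap: int) -> list[list[str]]:
    # Each chunk starts (chunk_size - overlap) tokens after the previous one,
    # so consecutive chunks share `overlap` tokens of context.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [tokens[i:i + chunk_size] for i in range(0, max(len(tokens) - overlap, 1), step)]

tokens = [f"t{i}" for i in range(10)]
# With chunk_size=4 and overlap=2, each consecutive pair of chunks shares two tokens
print(window_chunks(tokens, chunk_size=4, overlap=2))
```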
&lt;p&gt;&lt;span&gt;&lt;span&gt;For this example, we’ll store the chunked text in Parquet files before uploading to IRIS (though you can use any approach, including direct upload).&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt; &lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Chunking Function&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;We’ll use RecursiveCharacterTextSplitter from langchain_text_splitters to split text strategically based on paragraph, sentence, and word boundaries.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;&lt;span&gt;Chunk size: 300 tokens (larger chunks provide more context but increase retrieval cost)&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;&lt;span&gt;Chunk overlap: 50 tokens (helps maintain context across chunks)&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;from langchain_text_splitters import RecursiveCharacterTextSplitter
import tiktoken

def chunk_text_by_tokens(text: str, chunk_size: int, chunk_overlap: int) -&amp;gt; list[str]:
    """
    Chunk text prioritizing paragraph and sentence boundaries using
    RecursiveCharacterTextSplitter. Returns a list of chunk strings.
    """
    # Measure length in tokens so chunk_size/chunk_overlap are token budgets,
    # matching the 300/50-token settings described above
    encoding = tiktoken.get_encoding("cl100k_base")
    splitter = RecursiveCharacterTextSplitter(
        # Prioritize larger semantic units first, then fall back to smaller ones
        separators=["\n\n", "\n", ". ", " ", ""],
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap,
        length_function=lambda t: len(encoding.encode(t)),
        is_separator_regex=False,
    )
    return splitter.split_text(text)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Next, we’ll use the chunking function to process one text file at a time and apply a tiktoken encoder to calculate token counts and generate metadata. This metadata will be useful later when creating embeddings and storing them in IRIS.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from pathlib import Path
from datetime import datetime, timezone
import tiktoken

def chunk_file(path: Path, chunk_size: int, chunk_overlap: int, encoding_name: str = "cl100k_base") -&amp;gt; list[dict]:
    """
    Read a file, split its contents into token-aware chunks, and return metadata for each chunk.
    Returns a list of dicts with keys:
    - filename
    - relative_path
    - absolute_path
    - chunk_index
    - chunk_text
    - token_count
    - modified_time
    - size_bytes
    """
    p = Path(path)
    if not p.exists() or not p.is_file():
        raise FileNotFoundError(f"File not found: {path}")
    try:
        text = p.read_text(encoding="utf-8", errors="replace")
    except Exception as e:
        raise RuntimeError(f"Failed to read file {p}: {e}")
    # Prepare tokenizer for accurate token counts
    try:
        encoding = tiktoken.get_encoding(encoding_name)
    except Exception as e:
        raise ValueError(f"Invalid encoding name '{encoding_name}': {e}")
    # Create chunks using the chunker defined above
    chunks = chunk_text_by_tokens(text, chunk_size, chunk_overlap)
    # File metadata
    stat = p.stat()
    modified_time = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat()
    absolute_path = str(p.resolve())
    try:
        relative_path = str(p.resolve().relative_to(Path.cwd()))
    except Exception:
        relative_path = p.name
    # Build rows
    rows: list[dict] = []
    for idx, chunk in enumerate(chunks):
        token_count = len(encoding.encode(chunk))
        rows.append({
            "filename": p.name,
            "relative_path": relative_path,
            "absolute_path": absolute_path,
            "chunk_index": idx,
            "chunk_text": chunk,
            "token_count": token_count,
            "modified_time": modified_time,
            "size_bytes": stat.st_size,
        })
    return rows
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Step 2: Creating embeddings&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;You can generate embeddings using cloud providers (e.g., OpenAI) or local models via Ollama (e.g., nomic-embed-text). In this example, we’ll use OpenAI’s text-embedding-3-small model to embed each chunk and save the results back to Parquet for later ingestion into IRIS Vector DB.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from openai import OpenAI
import pandas as pd
import os
import sys

def embed_and_save_parquet(input_parquet_path: str, output_parquet_path: str):
    """
    Loads a Parquet file, creates embeddings for the 'chunk_text' column using
    OpenAI's small embedding model, and saves the result to a new Parquet file.
    Args:
        input_parquet_path (str): Path to the input Parquet file containing 'chunk_text'.
        output_parquet_path (str): Path to save the new Parquet file with embeddings.
    Requires the OPENAI_API_KEY environment variable to be set.
    """
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        print("ERROR: OPENAI_API_KEY environment variable is not set.", file=sys.stderr)
        sys.exit(1)
    try:
        # Load the Parquet file
        df = pd.read_parquet(input_parquet_path)
        # Initialize OpenAI client
        client = OpenAI(api_key=key)
        # Generate embeddings for each chunk_text
        embeddings = []
        for text in df['chunk_text']:
            response = client.embeddings.create(
                input=text,
                model="text-embedding-3-small"  # Using the small embedding model
            )
            embeddings.append(response.data[0].embedding)
        # Add embeddings to the DataFrame
        df['embedding'] = embeddings
        # Save the new DataFrame to a Parquet file
        df.to_parquet(output_parquet_path, index=False)
        print(f"Embeddings generated and saved to {output_parquet_path}")
    except FileNotFoundError:
        print(f"Error: Input file not found at {input_parquet_path}")
    except KeyError:
        print("Error: 'chunk_text' column not found in the input Parquet file.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Step 3: Put the data processing together&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Now it’s time to run the pipeline. In this example, we’ll load and chunk the Business Service documentation, generate embeddings, and write the results to Parquet for IRIS ingestion.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CHUNK_SIZE_TOKENS = 300
CHUNK_OVERLAP_TOKENS = 50
ENCODING_NAME = "cl100k_base"
current_file_path = Path(__file__).resolve()

load_documentation_to_parquet(input_dir=current_file_path.parent / "Documentation" / "BusinessService",
                              output_file=current_file_path.parent / "BusinessService.parquet",
                              chunk_size=CHUNK_SIZE_TOKENS,
                              chunk_overlap=CHUNK_OVERLAP_TOKENS,
                              encoding_name=ENCODING_NAME)
embed_and_save_parquet(input_parquet_path=current_file_path.parent / "BusinessService.parquet",
                       output_parquet_path=current_file_path.parent / "BusinessService_embedded.parquet")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;A row in our final business service Parquet file will look something like this:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{&lt;span class="mention"&gt;"filename"&lt;/span&gt;:&lt;span class="mention"&gt;"FileInboundAdapters.txt"&lt;/span&gt;,&lt;span class="mention"&gt;"relative_path"&lt;/span&gt;:&lt;span class="mention"&gt;"Documentation\BusinessService\Adapters\FileInboundAdapters.txt"&lt;/span&gt;,&lt;span class="mention"&gt;"absolute_path"&lt;/span&gt;:&lt;span class="mention"&gt;"C:\Users\…\Documentation\BusinessService\Adapters\FileInboundAdapters.txt"&lt;/span&gt;,&lt;span class="mention"&gt;"chunk_index"&lt;/span&gt;:&lt;span class="mention"&gt;0&lt;/span&gt;,&lt;span class="mention"&gt;"chunk_text"&lt;/span&gt;:&lt;span class="mention"&gt;"Settings for the File Inbound Adapter\nProvides reference information for settings of the file inbound adapter, EnsLib.File.InboundAdapterOpens in a new tab. You can configure these settings after you have added a business service that uses this adapter to your production.\nSummary"&lt;/span&gt;,&lt;span class="mention"&gt;"token_count"&lt;/span&gt;:&lt;span class="mention"&gt;52&lt;/span&gt;,&lt;span class="mention"&gt;"modified_time"&lt;/span&gt;:&lt;span class="mention"&gt;"2025-11-25T18:34:16.120336+00:00"&lt;/span&gt;,&lt;span class="mention"&gt;"size_bytes"&lt;/span&gt;:&lt;span class="mention"&gt;13316&lt;/span&gt;,&lt;span class="mention"&gt;"embedding"&lt;/span&gt;:[&lt;span class="mention"&gt;-0.02851865254342556&lt;/span&gt;,&lt;span class="mention"&gt;0.01860344596207142&lt;/span&gt;,…,&lt;span class="mention"&gt;0.0135544464207155&lt;/span&gt;]}&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Section 2: Using IRIS Vector Search&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt; &lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Step 4: Upload Your Embeddings to IRIS&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Choose the IRIS namespace and table name you’ll use to store embeddings. (The script below will create the table if it doesn’t already exist.) Then use the InterSystems IRIS Python DB-API driver to insert the chunks and their embeddings.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;The function below reads a Parquet file containing chunk text and embeddings, normalizes the embedding column to a JSON-serializable list of floats, connects to IRIS, creates the destination table if it doesn’t exist (with a VECTOR(FLOAT, 1536) column, where 1536 is the number of dimensions in the embedding), and then inserts each row using TO_VECTOR(?) in a parameterized SQL statement. It commits the transaction on success, logs progress, and cleans up the connection, rolling back on database errors.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
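&lt;p&gt;&lt;span&gt;&lt;span&gt;The parameter format matters here: TO_VECTOR(?) receives the vector as a string, so the function below serializes each list of floats with json.dumps, producing the bracketed, comma-separated form. A small standalone sketch (toy values, not a real embedding):&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;

```python
import json

# A truncated, hypothetical embedding vector
embedding = [-0.0285, 0.0186, 0.0135]

# This JSON array string is what gets bound to the TO_VECTOR(?) placeholder
param = json.dumps(embedding)
print(param)  # [-0.0285, 0.0186, 0.0135]
```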
&lt;pre&gt;&lt;code&gt;import iris  # The InterSystems IRIS Python DB-API driver
import pandas as pd
import numpy as np
import json
from pathlib import Path

# --- Configuration ---
PARQUET_FILE_PATH = "your_embeddings.parquet"
IRIS_HOST = "localhost"
IRIS_PORT = 8881
IRIS_NAMESPACE = "VECTOR"
IRIS_USERNAME = "superuser"
IRIS_PASSWORD = "sys"
TABLE_NAME = "AIDemo.Embeddings"  # Must match the table created in IRIS
EMBEDDING_DIMENSIONS = 1536  # Must match the dimensions for the embeddings you used

def upload_embeddings_to_iris(parquet_path: str):
    """
    Reads a Parquet file with 'chunk_text' and 'embedding' columns
    and uploads them to an InterSystems IRIS vector database table.
    """
    # 1. Load data from the Parquet file using pandas
    try:
        df = pd.read_parquet(parquet_path)
        if 'chunk_text' not in df.columns or 'embedding' not in df.columns:
            print("Error: Parquet file must contain 'chunk_text' and 'embedding' columns.")
            return
    except FileNotFoundError:
        print(f"Error: The file at {parquet_path} was not found.")
        return
    # Ensure embeddings are in a format compatible with the TO_VECTOR function
    # (list of floats); Parquet often loads them back as numpy arrays
    if isinstance(df['embedding'].iloc[0], np.ndarray):
        df['embedding'] = df['embedding'].apply(lambda x: x.tolist())
    print(f"Loaded {len(df)} records from {parquet_path}.")
    # 2. Establish connection to InterSystems IRIS
    connection = None
    try:
        conn_string = f"{IRIS_HOST}:{IRIS_PORT}/{IRIS_NAMESPACE}"
        connection = iris.connect(conn_string, IRIS_USERNAME, IRIS_PASSWORD)
        cursor = connection.cursor()
        print("Successfully connected to InterSystems IRIS.")
        # Create the embedding table if it doesn't exist
        cursor.execute(f"""
            CREATE TABLE IF NOT EXISTS {TABLE_NAME} (
            ID INTEGER IDENTITY PRIMARY KEY,
            chunk_text VARCHAR(2500), embedding VECTOR(FLOAT, {EMBEDDING_DIMENSIONS})
            )"""
        )
        # 3. Prepare the SQL INSERT statement
        # InterSystems IRIS uses the TO_VECTOR function for inserting vector data via SQL
        insert_sql = f"""
        INSERT INTO {TABLE_NAME} (chunk_text, embedding)
        VALUES (?, TO_VECTOR(?))
        """
        # 4. Iterate and insert data
        count = 0
        for index, row in df.iterrows():
            text = row['chunk_text']
            # Convert the list of floats to a JSON string, which is required by TO_VECTOR when using DB-API
            vector_json_str = json.dumps(row['embedding'])
            cursor.execute(insert_sql, (text, vector_json_str))
            count += 1
            if count % 100 == 0:
                print(f"Inserted {count} rows...")
        # Commit the transaction
        connection.commit()
        print(f"Data upload complete. Total rows inserted: {count}.")
    except iris.DBAPIError as e:
        print(f"A database error occurred: {e}")
        if connection:
            connection.rollback()
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
    finally:
        if connection:
            connection.close()
            print("Database connection closed.")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Example usage:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;current_file_path = Path(__file__).resolve()
upload_embeddings_to_iris(current_file_path.parent / "BusinessService_embedded.parquet")&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Step 5: Create your embedding search functionality&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Next, we’ll create a search function that embeds the user’s query, runs a vector similarity search in IRIS via the Python DB&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;‑&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;API, and returns the top&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;‑&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;k matching chunks from our embeddings table.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;The example function below embeds the user’s query with the same OpenAI model used for the documents, formats the resulting vector as a comma-separated string, and issues a SELECT that orders rows by VECTOR_DOT_PRODUCT between each stored embedding and the query vector, returning the top-k most similar chunks. As with the upload function, it handles connection setup, error handling, and cleanup of the database connection.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import iris
from typing import List
import os
from openai import OpenAI

# --- Configuration ---
IRIS_HOST = "localhost"
IRIS_PORT = 8881
IRIS_NAMESPACE = "VECTOR"
IRIS_USERNAME = "superuser"
IRIS_PASSWORD = "sys"
TABLE_NAME = "AIDemo.Embeddings"  # Must match the table created in IRIS
MODEL = "text-embedding-3-small"  # Must match the model used to embed the documents

def get_embedding(text: str, model: str, client) -&amp;gt; List[float]:
    # Normalize newlines and coerce to str
    payload = [("" if text is None else str(text)).replace("\n", " ")]
    resp = client.embeddings.create(model=model, input=payload, encoding_format="float")
    return resp.data[0].embedding

def search_embeddings(search: str, top_k: int):
    print("-------RAG--------")
    print(f"Searching IRIS vector store for: {search}")
    key = os.getenv("OPENAI_API_KEY")
    client = OpenAI(api_key=key)
    results = []
    # Establish connection to InterSystems IRIS
    connection = None
    try:
        conn_string = f"{IRIS_HOST}:{IRIS_PORT}/{IRIS_NAMESPACE}"
        connection = iris.connect(conn_string, IRIS_USERNAME, IRIS_PASSWORD)
        cursor = connection.cursor()
        print("Successfully connected to InterSystems IRIS.")
        # Embed the query and format it as a comma-separated list of floats
        emb_values = get_embedding(search, model=MODEL, client=client)
        emb_str = ", ".join(str(float(x)) for x in emb_values)
        # Prepare the SQL SELECT statement
        search_sql = f"""
        SELECT TOP {top_k} ID, chunk_text FROM {TABLE_NAME}
        ORDER BY VECTOR_DOT_PRODUCT((embedding), TO_VECTOR(('{emb_str}'), FLOAT)) DESC
        """
        cursor.execute(search_sql)
        row = cursor.fetchone()
        while row is not None:
            results.append(row[:])
            row = cursor.fetchone()
    except iris.DBAPIError as e:
        print(f"A database error occurred: {e}")
        if connection:
            connection.rollback()
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
    finally:
        if connection:
            connection.close()
            print("Database connection closed.")
    print("------------RAG Finished-------------")
    return results
&lt;/code&gt;&lt;/pre&gt;
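&lt;p&gt;&lt;span&gt;&lt;span&gt;To see what the ORDER BY is ranking: VECTOR_DOT_PRODUCT computes the dot product between each stored embedding and the query vector, and since OpenAI embeddings are unit-normalized this is equivalent to cosine similarity. A pure-Python illustration with hypothetical toy vectors (not IRIS code):&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;

```python
def dot(a: list[float], b: list[float]) -> float:
    # The same quantity IRIS's VECTOR_DOT_PRODUCT computes per row
    return sum(x * y for x, y in zip(a, b))

# Toy 3-dimensional "embeddings" for three stored chunks and one query
chunks = {
    "chunk_a": [1.0, 0.0, 0.0],
    "chunk_b": [0.6, 0.8, 0.0],
    "chunk_c": [0.0, 0.0, 1.0],
}
query = [0.8, 0.6, 0.0]

# ORDER BY ... DESC: highest dot product first
ranked = sorted(chunks, key=lambda name: dot(chunks[name], query), reverse=True)
print(ranked)  # chunk_b scores 0.96, chunk_a 0.8, chunk_c 0.0
```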
&lt;p&gt;&lt;span&gt;&lt;strong&gt;Step 6: Add RAG context to your agent&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Now that you’ve:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Chunked and embedded your documentation,&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Uploaded embeddings to a vector table in IRIS,&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Built a search function for IRIS vector queries,&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;it’s time to put it all together into an interactive Retrieval-Augmented Generation (RAG) chat using the OpenAI Responses API. For this example, we’ll give the agent direct access to the search function (for more fine-grained control of the agent), though this could also be done with a library such as LangChain.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;First, create the instructions for the agent, making sure to give it access to the search function:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import os

# ---------------------------- Configuration ----------------------------
MODEL = os.getenv("OPENAI_RESPONSES_MODEL", "gpt-5-nano")
SYSTEM_INSTRUCTIONS = (
    "You are a helpful assistant that answers questions about InterSystems "
    "business services and related integration capabilities. You have access "
    "to a vector database of documentation chunks about business services. "
    "\n\n"
    "Use the search_business_docs tool whenever the user asks about specific "
    "settings, configuration options, or how to perform tasks with business "
    "services. Ground your answers in the retrieved context, quoting or "
    "summarizing relevant chunks. If nothing relevant is found, say so "
    "clearly and answer from your general knowledge with a disclaimer."
)


&lt;p&gt;# ---------------------------- Tool Definition ----------------------------
TOOLS = [
    {
        "type": "function",
        "name": "search_business_docs",
        "description": (
            "Searches a vector database of documentation chunks related to "
            "business services and returns the most relevant snippets."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": (
                        "Natural language search query describing what you want "
                        "to know about business services."
                    ),
                },
                "top_k": {
                    "type": "integer",
                    "description": (
                        "Maximum number of results to retrieve from the vector DB."
                    ),
                    "minimum": 1,
                    "maximum": 10,
                },
            },
            "required": ["query", "top_k"],
            &lt;span&gt;"additionalProperties"&lt;/span&gt;: &lt;span&gt;False&lt;/span&gt;,&lt;br&gt;&lt;br&gt;
        },&lt;br&gt;&lt;br&gt;
        &lt;span&gt;"strict"&lt;/span&gt;: &lt;span&gt;True&lt;/span&gt;,&lt;br&gt;&lt;br&gt;
    }&lt;br&gt;&lt;br&gt;
]&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;Now we need a small “router” method to let the model actually use our RAG tool.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;call_rag_tool(name, args) receives a function call emitted by the OpenAI Responses API and routes it to our local implementation (the search_business_docs tool that wraps Search.search_embeddings). It takes the model’s query and top_k, runs the IRIS vector search, and returns a JSON&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;‑&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;encoded payload of the top matches (IDs and text snippets). This stringified JSON is important because the Responses API expects tool outputs as strings; by formatting the results predictably, we make it easy for the model to ground its final answer in the retrieved documentation. If an unknown tool name is requested, the function returns an error payload so the model can handle it gracefully.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def call_rag_tool(name: str, args: Dict[str, Any]) -&amp;gt; str:
    """Route function calls from the model to our local Python implementations.

    Currently only supports the `search_business_docs` tool, which wraps
    `Search.search_embeddings`.

    The return value must be a string. We will JSON-encode a small structure
    so the model can consume the results reliably.
    """
    if name == "search_business_docs":
        query = args.get("query", "")
        top_k = args.get("top_k", 5)  # fall back to a sensible integer default
        results = search_embeddings(query, top_k)
        # Expecting each row to be something like (ID, chunk_text)
        formatted: List[Dict[str, Any]] = []
        for row in results:
            if not row:
                continue
            # Be defensive in case row length/structure changes
            doc_id = row[0] if len(row) &amp;gt; 0 else None
            text = row[1] if len(row) &amp;gt; 1 else None
            formatted.append({"id": doc_id, "text": text})
        payload = {"query": query, "results": formatted}
        return json.dumps(payload, ensure_ascii=False)
    # Unknown tool; return an error-style payload
    return json.dumps({"error": f"Unknown tool name: {name}"})
&lt;/code&gt;&lt;/pre&gt;
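To make the payload concrete, here is a standalone sketch of the string the tool hands back. The rows, IDs, and text below are made up for illustration; they only mimic the (ID, chunk_text) shape that search_embeddings returns.

```python
import json

# Hypothetical rows in the (ID, chunk_text) shape returned by the vector search
rows = [
    (101, "A business service receives input from outside the production."),
    (102, "Pool Size controls how many jobs a business service may run."),
]

# Same shaping as call_rag_tool: one dict per row, then a JSON string,
# because the Responses API expects tool outputs to be strings.
payload = {
    "query": "what is a business service?",
    "results": [{"id": r[0], "text": r[1]} for r in rows],
}
tool_output = json.dumps(payload, ensure_ascii=False)
print(tool_output)
```

Because the structure is predictable, the model can quote or summarize individual chunks when grounding its final answer.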
&lt;p&gt;&lt;span&gt;&lt;span&gt;Now that we have our RAG tool, we can start work on the chat loop logic. First, we need a helper to reliably pull the model’s final answer and any tool outputs from the OpenAI Responses API. extract_answer_and_sources(response) walks the response.output items containing the model’s outputs and concatenates them into a single answer string. It also collects the function_call_output payloads (the JSON we returned from our RAG tool), parses them, and exposes them as tool_context for transparency and debugging. The result is a compact structure: {"answer": ..., "tool_context": [...]}.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def extract_answer_and_sources(response: Any) -&amp;gt; Dict[str, Any]:
    """Extract a structured answer and optional sources from a Responses API object.

    We don't enforce a global JSON response schema here. Instead, we:
    - Prefer the SDK's `output_text` convenience when present
    - Fall back to concatenating any `output_text` content parts
    - Also surface any tool-call-output payloads we got back this turn as
      `tool_context` for debugging/inspection.
    """
    answer_text = ""
    # Preferred: SDK convenience
    if hasattr(response, "output_text") and response.output_text:
        answer_text = response.output_text
    else:
        # Fallback: walk output items
        parts: List[str] = []
        for item in getattr(response, "output", []) or []:
            if getattr(item, "type", None) == "message":
                for c in getattr(item, "content", []) or []:
                    if getattr(c, "type", None) == "output_text":
                        parts.append(getattr(c, "text", ""))
        answer_text = "".join(parts)
    # Collect any function_call_output items for visibility
    tool_context: List[Dict[str, Any]] = []
    for item in getattr(response, "output", []) or []:
        if getattr(item, "type", None) == "function_call_output":
            try:
                tool_context.append({
                    "call_id": getattr(item, "call_id", None),
                    "output": json.loads(getattr(item, "output", "")),
                })
            except Exception:
                tool_context.append({
                    "call_id": getattr(item, "call_id", None),
                    "output": getattr(item, "output", ""),
                })
    return {"answer": answer_text.strip(), "tool_context": tool_context}
&lt;/code&gt;&lt;/pre&gt;
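To check the fallback path without calling the API, we can feed a hand-built stand-in object through the same walk. SimpleNamespace here only mimics the SDK's attribute access; it is not the real response type, and the text parts are made up.

```python
from types import SimpleNamespace

# A hand-built stand-in for a Responses API result: one "message" item whose
# content holds two "output_text" parts. Attribute access mimics SDK objects.
response = SimpleNamespace(
    output_text="",
    output=[
        SimpleNamespace(
            type="message",
            content=[
                SimpleNamespace(type="output_text", text="Business services "),
                SimpleNamespace(type="output_text", text="receive external input."),
            ],
        )
    ],
)

# The same fallback walk used in extract_answer_and_sources
parts = []
for item in getattr(response, "output", []) or []:
    if getattr(item, "type", None) == "message":
        for c in getattr(item, "content", []) or []:
            if getattr(c, "type", None) == "output_text":
                parts.append(getattr(c, "text", ""))
answer = "".join(parts)
print(answer)  # Business services receive external input.
```

The defensive getattr calls mean the walk degrades gracefully if the SDK ever changes an item's shape.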
&lt;p&gt;&lt;span&gt;&lt;span&gt;With the help of extract_answer_and_sources we can build the whole chat loop to orchestrate a two&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;‑&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;phase, tool&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;‑&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;calling conversation with the OpenAI Responses API. The chat_loop() function runs an interactive CLI: it collects the user’s question, sends a first request with system instructions and the search_business_docs tool, and then inspects any function_call items the model emits. For each function call, it executes our local RAG tool (call_rag_tool, which wraps search_embeddings) and appends the result back to the conversation as a function_call_output. It then makes a second request asking the model to use those tool outputs to produce a grounded answer, parses that answer via extract_answer_and_sources, and prints it. The loop maintains running context in input_items so each turn can build on prior messages and tool results.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def chat_loop() -&amp;gt; None:
    """Run an interactive CLI chat loop using the OpenAI Responses API.

    The loop supports multi-step tool-calling:
    - First call may return one or more `function_call` items
    - We execute those locally (e.g., call search_embeddings)
    - We send the tool outputs back in a second `responses.create` call
    - Then we print the model's final, grounded answer
    """
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set in the environment.")
    client = OpenAI(api_key=key)
    print("\nBusiness Service RAG Chat")
    print("Type 'exit' or 'quit' to stop.\n")
    # Running list of inputs (messages + tool calls + tool outputs) for context
    input_items: List[Dict[str, Any]] = []
    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        if user_input.lower() in {"exit", "quit"}:
            print("Goodbye.")
            break
        # Add user message
        input_items.append({"role": "user", "content": user_input})
        # 1) First call: let the model decide whether to call tools
        response = client.responses.create(
            model=MODEL,
            instructions=SYSTEM_INSTRUCTIONS,
            tools=TOOLS,
            input=input_items,
        )
        # Save model output items to our running conversation
        input_items += response.output
        # 2) Execute any function calls
        # The Responses API returns `function_call` items in `response.output`.
        for item in response.output:
            if getattr(item, "type", None) != "function_call":
                continue
            name = getattr(item, "name", None)
            raw_args = getattr(item, "arguments", "{}")
            try:
                args = json.loads(raw_args) if isinstance(raw_args, str) else raw_args
            except json.JSONDecodeError:
                args = {"query": user_input}
            result_str = call_rag_tool(name, args or {})
            # Append tool result back as function_call_output
            input_items.append(
                {
                    "type": "function_call_output",
                    "call_id": getattr(item, "call_id", None),
                    "output": result_str,
                }
            )
        # 3) Second call: ask the model to answer using tool outputs
        followup = client.responses.create(
            model=MODEL,
            instructions=(
                SYSTEM_INSTRUCTIONS
                + "\n\nYou have just received outputs from your tools. "
                + "Use them to give a concise, well-structured answer."
            ),
            tools=TOOLS,
            input=input_items,
        )
        structured = extract_answer_and_sources(followup)
        print("Agent:\n" + structured["answer"] + "\n")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;That’s it! You’ve built a complete RAG pipeline powered by IRIS Vector Search. While this example focused on a simple use case, IRIS Vector Search opens the door to many more possibilities:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Knowledge store for more complex customer support agents&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Conversational context storage for hyper-personalized agents &lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Anomaly detection in textual data&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;li&gt;&lt;span&gt;&lt;span&gt;Clustering analysis for textual data&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;

&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;I hope this walkthrough gave you a solid starting point for exploring vector search and building your own AI-driven applications with InterSystems IRIS!&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;The full codebase can be found here:&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;

&lt;li&gt;&lt;a href="https://openexchange.intersystems.com/portal/products/IRISVectorSearchRAGExample" rel="noopener noreferrer"&gt;&lt;span&gt;&lt;span&gt;&lt;a href="https://openexchange.intersystems.com/portal/products/IRISVectorSearchRAGExample" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://openexchange.intersystems.com/portal/products/IRISVectorSearchRAGExample" rel="noopener noreferrer"&gt;https://openexchange.intersystems.com/portal/products/IRISVectorSearchRAGExample&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://github.com/isc-epolakie/IRISVectorSearchRAGExample" rel="noopener noreferrer"&gt;&lt;span&gt;&lt;span&gt;&lt;a href="https://github.com/isc-epolakie/IRISVectorSearchRAGExample" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://github.com/isc-epolakie/IRISVectorSearchRAGExample" rel="noopener noreferrer"&gt;https://github.com/isc-epolakie/IRISVectorSearchRAGExample&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>api</category>
      <category>python</category>
      <category>sql</category>
    </item>
    <item>
      <title>Virtualizing large databases - VMware CPU capacity planning</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Wed, 25 Mar 2026 19:39:06 +0000</pubDate>
      <link>https://dev.to/intersystems/virtualizing-large-databases-vmware-cpu-capacity-planning-13b6</link>
      <guid>https://dev.to/intersystems/virtualizing-large-databases-vmware-cpu-capacity-planning-13b6</guid>
      <description>&lt;p&gt;I am often asked by customers, vendors or internal teams to explain CPU capacity planning for &lt;em&gt;large production databases&lt;/em&gt;  running on VMware vSphere. &lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;This post was originally written in 2017; I updated it in February 2026. For context I have kept the original post, but highlighted changes. It was originally written for ESXi 6.0. The core principles remain valid for vSphere 7.x and 8.x, though there have been improvements to vNUMA handling, CPU scheduling (particularly for AMD EPYC), and CPU Hot Add compatibility with vNUMA in vSphere 8. Always consult the Performance Best Practices guide for your specific vSphere version. For a deeper dive see the "Additional links" section at the end of the post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Changes are marked with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;UPDATE 2026:&lt;/strong&gt; ...&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;In summary there are a few simple best practices to follow for sizing CPU for large production databases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Plan for one vCPU per physical CPU core.&lt;/li&gt;
&lt;li&gt;Consider NUMA and ideally size VMs to keep CPU and memory local to a NUMA node. &lt;/li&gt;
&lt;li&gt;Right-size virtual machines. Add vCPUs only when needed. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Generally this leads to a couple of common questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Because of hyper-threading VMware lets me create VMs with 2x the number of physical CPUs. Doesn’t that double capacity? Shouldn’t I create VMs with as many CPUs as possible?&lt;/li&gt;
&lt;li&gt;What is a NUMA node? Should I care about NUMA?&lt;/li&gt;
&lt;li&gt;VMs should be right-sized, but how do I know when they are?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I answer these questions with examples below. But also remember, best practices are not written in stone; sometimes you need to make compromises. For example, it is likely that large production database VMs will NOT fit in a NUMA node, and as we will see that’s OK. Best practices are guidelines that you will have to evaluate and validate for your applications and environment.&lt;/p&gt;

&lt;p&gt;Although I am writing this with examples for databases running on InterSystems data platforms, the concepts and rules apply generally for capacity and performance planning for any large (Monster) VMs.&lt;/p&gt;



&lt;br&gt;
For virtualisation best practices and more posts on performance and capacity planning:&lt;br&gt;
&lt;a href="https://community.intersystems.com/post/capacity-planning-and-performance-series-index" rel="noopener noreferrer"&gt;A list of other posts in the InterSystems Data Platforms and performance series is here.&lt;/a&gt;



&lt;h1&gt;
  
  
  Monster VMs
&lt;/h1&gt;

&lt;p&gt;This post is mostly about deploying &lt;em&gt;Monster VMs&lt;/em&gt;, sometimes called &lt;em&gt;Wide VMs&lt;/em&gt;. The CPU resource requirements of high transaction databases mean they are often deployed on Monster VMs. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A monster VM is a VM with more Virtual CPUs or memory than a physical NUMA node.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;h1&gt;
  
  
  CPU architecture and NUMA
&lt;/h1&gt;

&lt;p&gt;Current Intel processors use a Non-Uniform Memory Access (NUMA) architecture. For example, the servers I am using to run tests for this post have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two CPU sockets, each with a processor with 12 cores (Intel E5-2680 v3). &lt;/li&gt;
&lt;li&gt;256 GB memory (16 x 16GB RDIMM)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each 12-core processor has its own local memory (128GB of RDIMMs and local cache) and can also access memory on other processors in the same host. Each 12-core package of CPU, CPU cache and 128 GB RDIMM memory is a NUMA node. NUMA nodes are connected by a fast interconnect that a processor uses to access memory on another node.&lt;/p&gt;

&lt;p&gt;Processes running on a processor accessing local RDIMM and Cache memory have lower latency than going across the interconnect to access remote memory on another processor. Access across the interconnect increases latency, so performance is non-uniform. The same design applies to servers with more than two sockets. A four socket Intel server has four NUMA nodes.&lt;/p&gt;
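The arithmetic can be made concrete with a quick back-of-the-envelope sketch using this post's example host (two 12-core sockets, 128 GB local memory per node). This is only an illustration of the sizing rule, not how ESXi itself decides placement.

```python
# This post's example host: two NUMA nodes, each with 12 cores and 128 GB local memory
CORES_PER_NODE = 12
MEM_GB_PER_NODE = 128

def fits_in_one_numa_node(vcpus, mem_gb):
    """True when the VM's CPU and memory can both stay local to one NUMA node."""
    too_big = vcpus > CORES_PER_NODE or mem_gb > MEM_GB_PER_NODE
    return not too_big

print(fits_in_one_numa_node(12, 96))   # True: CPU and memory stay local
print(fits_in_one_numa_node(16, 96))   # False: a Monster (Wide) VM
```

A VM that fails this check is not a problem in itself; it simply means vNUMA comes into play, as described below for larger VMs.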

&lt;p&gt;ESXi understands physical NUMA and the ESXi CPU scheduler is designed to optimise performance on NUMA systems. One of the ways ESXi maximises performance is to create data locality on a physical NUMA node. In our example, if you have a VM with 12 vCPU and less than 128GB memory, ESXi will assign that VM to run on one of the physical NUMA nodes. This leads to the rule:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If possible size VMs to keep CPU and memory local to a NUMA node. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you need a Monster VM larger than a NUMA node that is OK; ESXi does a very good job of automatically calculating and managing requirements. For example, ESXi will create virtual NUMA nodes (vNUMA) that are intelligently scheduled onto the physical NUMA nodes for optimal performance. The vNUMA structure is exposed to the operating system. For example, if you have a host server with two 12-core processors and a VM with 16 vCPUs, ESXi may use eight physical cores on each of the two processors to schedule VM vCPUs; the operating system (Linux or Windows) will see two NUMA nodes. &lt;/p&gt;
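The 16 vCPU example can be sketched as simple arithmetic: divide the vCPUs evenly across the minimum number of NUMA nodes. Again, this only illustrates the scheduling idea; it is not how ESXi computes its layout.

```python
import math

CORES_PER_NODE = 12  # this post's example host: two 12-core processors

def vnuma_split(vcpus):
    """Divide vCPUs evenly across the minimum number of NUMA nodes needed."""
    nodes = math.ceil(vcpus / CORES_PER_NODE)
    assert vcpus % nodes == 0, "choose a vCPU count that divides evenly"
    return nodes, vcpus // nodes

print(vnuma_split(16))  # (2, 8): eight vCPUs scheduled on each of two processors
print(vnuma_split(12))  # (1, 12): fits within a single NUMA node
```

An uneven count such as 14 vCPUs on this host would not split cleanly, which is why even vCPU counts are preferred for VMs larger than a NUMA node.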

&lt;p&gt;It is also important to right-size your VMs and not allocate more resources than are needed, as over-allocation wastes resources and can hurt performance. Besides helping you size for NUMA, a 12 vCPU VM with high (but safe) CPU utilisation is more efficient, and will perform better, than a 24 vCPU VM with low or middling utilisation, especially when other VMs on the host are competing to be scheduled. This reinforces the rule:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Right-size virtual machines.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; There are differences between Intel and AMD implementations of NUMA. AMD has multiple NUMA nodes per processor. It’s been a while since I have seen AMD processors in a customer server, but if you have them review NUMA layout as part of your planning. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;UPDATE 2026:&lt;/strong&gt; Note: AMD EPYC processors are now common in datacenter environments and have a different NUMA architecture than Intel. EPYC processors can have multiple NUMA nodes per socket, configured via NPS (NUMA Per Socket) BIOS settings. Starting with vSphere 7.0 Update 2, the ESXi CPU scheduler includes significant optimizations for AMD EPYC that can achieve up to 50% better performance out-of-the-box. For AMD EPYC, the default BIOS settings (NPS-1, CCX-as-NUMA disabled) provide optimal performance for most virtualization workloads. Review AMD's VMware vSphere Tuning Guides for your specific EPYC generation (7003, 8004, 9004 series) for detailed recommendations.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;h2&gt;
  
  
  Wide VMs and Licencing
&lt;/h2&gt;

&lt;p&gt;For the best NUMA scheduling, configure wide VMs as follows.&lt;br&gt;
Correction June 2017: Configure VMs with 1 vCPU per socket.&lt;br&gt;
For example, by default a VM with 24 vCPUs should be configured as 24 CPU sockets, each with one core.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Follow VMware best practice rules.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Please see &lt;a href="https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html" rel="noopener noreferrer"&gt;this post on the VMware blogs for examples. &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The VMware blog post goes into detail, but the author, Mark Achtemichuk, recommends the following rules of thumb:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;While there are many advanced vNUMA settings, only in rare cases do they need to be changed from defaults.&lt;/li&gt;
&lt;li&gt;Always configure the virtual machine vCPU count to be reflected as Cores per Socket, until you exceed the physical core count of a single physical NUMA node.&lt;/li&gt;
&lt;li&gt;When you need to configure more vCPUs than there are physical cores in the NUMA node, evenly divide the vCPU count across the minimum number of NUMA nodes.&lt;/li&gt;
&lt;li&gt;Don’t assign an odd number of vCPUs when the size of your virtual machine exceeds a physical NUMA node.&lt;/li&gt;
&lt;li&gt;Don’t enable vCPU Hot Add unless you’re okay with vNUMA being disabled.&lt;/li&gt;
&lt;li&gt;Don’t create a VM larger than the total number of physical cores of your host.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;UPDATE 2026:&lt;/strong&gt; Starting with vSphere 6.5, vNUMA behavior was decoupled from the Cores per Socket setting. ESXi now automatically calculates and presents the optimal vNUMA topology to the guest OS. For most workloads, leaving the default settings is recommended. vSphere 8.0 introduced an enhanced virtual topology feature that automatically selects optimal coresPerSocket values for VMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UPDATE 2026:&lt;/strong&gt; For vSphere 8 and later: the limitation where CPU Hot Add disabled vNUMA has been lifted in vSphere 8 for VMs using virtual hardware version 20. VMs can now be configured to expose vNUMA topology even with CPU Hot-Add enabled. However, this requires the VM to use the latest virtual hardware compatibility and the new vSphere 8 API property to be configured. &lt;strong&gt;For earlier vSphere versions, the original guidance still applies: do not enable Hot Add for monster VMs.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;IRIS licensing counts cores, so this is not a problem. However, for software or databases other than IRIS, specifying that a VM has 24 sockets could make a difference to software licensing, so you must check with your vendors. &lt;/p&gt;



&lt;h1&gt;
  
  
  Hyper-threading and the CPU scheduler
&lt;/h1&gt;

&lt;p&gt;Hyper-threading (HT) often comes up in discussions; I hear “hyper-threading doubles the number of CPU cores”. Obviously, at the physical level it can’t — you have as many physical cores as you have. Hyper-threading should be enabled and will increase system performance; expect perhaps a 20%-30% or greater increase in application performance, but the actual amount is dependent on the application and the workload. It is certainly not double. &lt;/p&gt;

&lt;p&gt;As I posted in the &lt;a href="https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-9-cach%C3%A9-vmware-best-practice-guide" rel="noopener noreferrer"&gt;VMware best practice post&lt;/a&gt;, a good starting point for sizing &lt;em&gt;large production database VMs&lt;/em&gt; is to assume that the vCPU has full physical core dedication on the server — basically, ignore hyper-threading when capacity planning. For example: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For a 24-core host server, plan for a total of up to 24 vCPUs for production database VMs, knowing there may be available headroom.  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you have spent time monitoring the application, operating system and VMware performance during peak processing times, you can decide if higher VM consolidation is possible. In the best practice post I stated the rule as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One physical CPU (includes hyper-threading) = One vCPU (includes hyper-threading).&lt;/p&gt;
&lt;/blockquote&gt;



&lt;h2&gt;
  
  
  Why Hyper-threading does not double CPU capacity
&lt;/h2&gt;

&lt;p&gt;HT on Intel Xeon processors is a way of creating two &lt;em&gt;logical&lt;/em&gt; CPUs on one physical core. The operating system can efficiently schedule against the two logical processors — if a process or thread on a logical processor is waiting, for example for IO, the physical CPU resources can be used by the other logical processor. Only one logical processor can be progressing at any point in time, so although the physical core is more efficiently utilised &lt;em&gt;performance is not doubled&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;With HT enabled in the host BIOS, when creating a VM you can configure one vCPU per HT logical processor. For example, on a 24-physical-core server with HT enabled you can create a VM with up to 48 vCPUs. The ESXi CPU scheduler will optimise processing by running VM processes on separate physical cores first (while still considering NUMA). Later in the post I explore whether allocating more vCPUs than physical cores helps a monster database VM scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  Co-stop and CPU scheduling
&lt;/h3&gt;

&lt;p&gt;After monitoring host and application performance you may decide that some overcommitment of host CPU resources is possible. Whether this is a good idea will be very dependent on the applications and workloads. An understanding of the scheduler and a key metric to monitor can help you be sure that you are not overcommitting host resources.&lt;/p&gt;

&lt;p&gt;I sometimes hear that for a VM to progress there must be the same number of free logical CPUs as there are vCPUs in the VM; for example, a 12 vCPU VM must ‘wait’ for 12 logical CPUs to be ‘available’ before execution progresses. However, this has not been the case since ESXi 3. ESXi uses relaxed co-scheduling of vCPUs for better application performance.&lt;/p&gt;

&lt;p&gt;Because multiple cooperating threads or processes frequently synchronise with each other, not scheduling them together can increase latency in their operations. For example, a thread waiting in a spin loop for another thread to be scheduled. For best performance ESXi tries to schedule as many sibling vCPUs together as possible, but the CPU scheduler can flexibly schedule vCPUs when there are multiple VMs competing for CPU resources in a consolidated environment. If there is too much time difference as some vCPUs make progress while siblings don’t (the time difference is called skew), then the leading vCPU will decide whether to stop itself (co-stop). Note that it is individual vCPUs that co-stop (or co-start), not the entire VM. This works very well even when there is some overcommitment of resources; however, as you would expect, too much overcommitment of CPU resources will inevitably impact performance. I show an example of overcommitment and co-stop later in Example 2.&lt;/p&gt;

&lt;p&gt;Remember it is not a flat-out race for CPU resources between VMs; the ESXi CPU scheduler’s job is to ensure that policies such as CPU shares, reservations and limits are followed while maximising CPU utilisation, and to ensure fairness, throughput, responsiveness and scalability. A discussion of using reservations and shares to prioritise production workloads is beyond the scope of this post and dependent on your application and workload mix; I may revisit this at a later time if I find any IRIS-specific recommendations. There are many factors that come into play with the CPU scheduler, and this section just skims the surface. For a deep dive see the VMware white paper and other links in the references at the end of the post. &lt;/p&gt;



&lt;h1&gt;
  
  
  Examples
&lt;/h1&gt;

&lt;p&gt;To illustrate the different vCPU configurations, I ran a series of benchmarks using a high transaction rate, browser-based Hospital Information System application, similar in concept to the DVD Store database benchmark developed by VMware.&lt;/p&gt;

&lt;p&gt;The scripts for the benchmark are created based on observations and metrics from live hospital implementations and include high use workflows, transactions and components that use the highest system resources. Driver VMs on other hosts simulate web sessions (users) by executing scripts with randomised input data at set workflow transaction rates. A benchmark with a rate of 1x is the baseline. Rates can be scaled up and down in increments. &lt;/p&gt;

&lt;p&gt;Along with the database and operating system metrics, a good metric to gauge how the benchmark database VM is performing is component (or transaction) response time as measured on the server. An example of a component is part of an end user screen. An increase in component response time means users would start to see a change for the worse in application response time. A well-performing database system must provide &lt;em&gt;consistent&lt;/em&gt; high performance for end users. In the following charts, I measure consistency of test performance, as an indication of end user experience, by averaging the response time of the 10 slowest high-use components. Average component response time is expected to be sub-second; a user screen may be made up of one component, or complex screens may have many components. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember you are always sizing for peak workload, plus a buffer for unexpected spikes in activity. I usually aim for average 80% peak CPU utilisation. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A full list of benchmark hardware and software is at the end of the post.  &lt;/p&gt;



&lt;h2&gt;
  
  
  Example 1. Right-sizing - single monster VM per host
&lt;/h2&gt;

&lt;p&gt;It is possible to create a database VM that is sized to use all the physical cores of a host server, for example a 24 vCPU VM on the 24-physical-core host. Rather than run the server “bare-metal” in an IRIS database mirror for HA, or introduce the complication of operating system failover clustering, the database VM is included in a vSphere cluster for management and HA, for example DRS and VMware HA. &lt;/p&gt;

&lt;p&gt;I have seen customers follow old-school thinking and size a primary database VM for the expected capacity at the end of five years of hardware life, but as we know from above it is better to right-size; you will get better performance and consolidation if your VMs are not oversized, and managing HA will be easier. Think Tetris if there is maintenance or a host failure and the monster database VM has to migrate to or restart on another host. If the transaction rate is forecast to increase significantly, vCPUs can be added ahead of time during planned maintenance. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the CPU 'hot add' option disables vNUMA, so do not use it for monster VMs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Consider the following chart showing a series of tests on the 24-core host. 3x transaction rate is the sweet spot and the capacity planning target for this 24-core system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single VM is running on the host.&lt;/li&gt;
&lt;li&gt;Four VM sizes were used to show performance at 12, 24, 36 and 48 vCPU. &lt;/li&gt;
&lt;li&gt;Transaction rates (1x, 2x, 3x, 4x, 5x) were run for each VM size (if possible).&lt;/li&gt;
&lt;li&gt;Performance/user experience is shown as component response time (bars).&lt;/li&gt;
&lt;li&gt;Average CPU% utilisation in the guest VM (lines).&lt;/li&gt;
&lt;li&gt;Host CPU utilisation reached 100% (red dashed line) at 4x rate for all VM sizes.&lt;/li&gt;
&lt;/ul&gt;



&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fsingle_guest_vm.png" title="Single Guest VM" alt="24 Physical Core Host&amp;lt;br&amp;gt;
Single guest VM average CPU% and Component Response time " width="800" height="383"&gt;&lt;br&gt;


&lt;p&gt;There is a lot going on in this chart, but we can focus on a couple of interesting things. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The 24 vCPU VM (orange) scaled up smoothly to the target 3x transaction rate. At 3x rate the in-guest VM is averaging 76% CPU (peaks were around 91%). Host CPU utilisation is not much more than the guest VM. Component response time is pretty much flat up to 3x, so users are happy. As far as our target transaction rate is concerned, &lt;em&gt;this VM is right-sized&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So much for right-sizing; what about increasing vCPUs beyond physical cores, which means using hyper-threads? Is it possible to double performance and scalability? The short answer is &lt;em&gt;No!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this case the answer can be seen by looking at component response time from 4x onwards. While performance is ‘better’ with more logical cores (vCPUs) allocated, it is still not as flat and consistent as it was up to 3x. Users will be reporting slower response times at 4x no matter how many vCPUs are allocated. Remember at 4x the &lt;em&gt;host&lt;/em&gt; is already flat-lined at 100% CPU utilisation as reported by vSphere. At higher vCPU counts, even though in-guest CPU metrics (vmstat) are reporting less than 100% utilisation, this is not the case for physical resources. Remember the guest operating system does not know it is virtualised and is just reporting on resources presented to it. Also note the guest operating system does not see HT threads; all vCPUs are presented as physical cores.&lt;/p&gt;

&lt;p&gt;The point is that the database processes (there are more than 200 IRIS processes at 3x transaction rate) are very busy and make very efficient use of the processors, so there is not a lot of slack for logical processors to schedule more work, or to consolidate more VMs onto this host. For example, a large part of IRIS processing happens in memory, so there is not a lot of waiting on IO. While you can allocate more vCPUs than physical cores, there is not a lot to be gained because the host is already 100% utilised.&lt;/p&gt;

&lt;p&gt;IRIS is very good at handling high workloads. Even when the host and VM are at 100% CPU utilisation the application is still running, and transaction rate is still increasing — scaling is not linear, and as we can see response times are getting longer and user experience will suffer — but the application does not ‘fall off a cliff’, and although it is not a good place to be, users can still work. If you have an application that is not so sensitive to response times, it is good to know you can push to the edge, and beyond, and IRIS still works safely.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember you do not want to run your database VM or your host at 100% CPU. You need capacity for unexpected spikes and growth in the VM, and ESXi hypervisor needs resources for all the networking, storage and other activities it does. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I always plan for peaks of 80% CPU utilisation. Even then, sizing vCPUs only up to the number of physical cores leaves some headroom on the logical threads for the ESXi hypervisor, even in extreme situations.&lt;/p&gt;
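&lt;p&gt;As a back-of-the-envelope check (my own arithmetic, not an official formula), sizing for an 80% peak looks like this:&lt;/p&gt;

```shell
#!/bin/sh
# vCPUs needed so that a measured peak demand (in busy cores) sits at about
# 80% utilisation, capped at the host physical core count per the rules above.
size_vcpus() {
    peak_cores=$1; phys_cores=$2
    # round peak/0.8 up to a whole number of vCPUs (integer maths: x10/8)
    n=$(( (peak_cores * 10 + 7) / 8 ))
    if [ "$n" -gt "$phys_cores" ]; then
        n=$phys_cores       # never plan beyond physical cores
    fi
    echo "$n"
}
```

&lt;p&gt;For example, a measured peak demand of 18 busy cores suggests 23 vCPUs on the 24-core host; a much higher demand simply hits the physical core count, which tells you the host itself is the limit.&lt;/p&gt;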

&lt;blockquote&gt;
&lt;p&gt;If you are running a hyper-converged (HCI) solution you MUST also factor in HCI CPU requirements at the host level. See my &lt;a href="https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity" rel="noopener noreferrer"&gt;previous post on HCI&lt;/a&gt; for more details. Basic CPU sizing of VMs deployed on HCI is the same as other VMs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Remember, you must validate and test everything in your own environment and with your applications.&lt;/p&gt;



&lt;h2&gt;
  
  
  Example 2. Overcommitted resources
&lt;/h2&gt;

&lt;p&gt;I have seen customer sites reporting ‘slow’ application performance while the guest operating system reports there are CPU resources to spare. &lt;/p&gt;

&lt;p&gt;Remember the guest operating system does not know it is virtualised. Unfortunately, in-guest metrics, for example as reported by vmstat (such as in pButtons), can be deceiving; you must also get host-level metrics and ESXi metrics (for example from &lt;code&gt;esxtop&lt;/code&gt;) to truly understand system health and capacity. &lt;/p&gt;
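&lt;p&gt;One host-level number worth watching is CPU ready time. vCenter reports “CPU Ready” as a summation in milliseconds per sample interval (20 seconds for realtime charts); converting it to a percentage, comparable to the &lt;code&gt;%RDY&lt;/code&gt; column in esxtop, is simple (my own helper for illustration):&lt;/p&gt;

```shell
#!/bin/sh
# Convert a vCenter CPU Ready summation (milliseconds) into a percentage
# of the sample interval (default 20 s, the realtime chart interval).
ready_pct() {
    awk -v ms="$1" -v interval="${2:-20}" \
        'BEGIN { printf "%.1f\n", ms / (interval * 1000) * 100 }'
}
```

&lt;p&gt;For example, a summation of 2,000 ms over a 20-second sample is 10% ready time. Dividing by the vCPU count gives a per-vCPU average when comparing against esxtop output.&lt;/p&gt;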

&lt;p&gt;As you can see in the chart above, when the host is reporting 100% utilisation the guest VM can be reporting a lower utilisation. The 36 vCPU VM (red) is reporting 80% average CPU utilisation at 4x rate while the host is reporting 100%. Even a right-sized VM can be starved of resources if, for example, after go-live other VMs are migrated onto the host, or resources are overcommitted through badly configured DRS rules.&lt;/p&gt;

&lt;p&gt;To show the key metrics, for this series of tests I configured the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two database VMs running on the host:&lt;/li&gt;
&lt;li&gt;a 24 vCPU VM running at a constant 2x transaction rate (not shown on the chart).&lt;/li&gt;
&lt;li&gt;a 24 vCPU VM running at 1x, 2x and 3x rates (these metrics are shown on the chart).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the other database VM using resources, at 3x rate the guest OS (RHEL 7) vmstat is only reporting 86% average CPU utilisation and the run queue is only averaging 25. However, users of this system will be complaining loudly as component response time shoots up while processes are slowed.&lt;/p&gt;

&lt;p&gt;As shown in the following chart, Co-stop and Ready Time tell the story of why user performance is so bad. The Ready Time (&lt;code&gt;%RDY&lt;/code&gt;) and Co-stop (&lt;code&gt;%CoStop&lt;/code&gt;) metrics show CPU resources are massively overcommitted at the target 3x rate. This should not really be a surprise, as the &lt;em&gt;host&lt;/em&gt; is running the other VM’s 2x rate &lt;em&gt;and&lt;/em&gt; this database VM’s 3x rate. &lt;/p&gt;



&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fovercommit_3.png" title="Over-committed host" width="800" height="492"&gt;&lt;br&gt;


&lt;p&gt;The chart shows Ready time increases when total CPU load on the host increases.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ready time is time that a VM is ready to run but cannot because CPU resources are not available. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Co-stop also increases. There are not enough free logical CPUs to allow the database VM to progress (as I detailed in the HT section above). The end result is processing is delayed due to contention for physical CPU resources. &lt;/p&gt;

&lt;p&gt;I have seen exactly this situation at a customer site where our support view from pButtons and vmstat only showed the virtualised operating system. While vmstat reported CPU headroom, the user performance experience was terrible. &lt;/p&gt;

&lt;p&gt;The lesson here is that it was not until ESXi metrics and a host-level view were made available that the real problem was diagnosed: overcommitted CPU resources caused by a general cluster CPU resource shortage and, making the situation worse, bad DRS rules causing high-transaction database VMs to migrate together and overwhelm host resources. &lt;/p&gt;



&lt;h2&gt;
  
  
  Example 3. Overcommitted resources
&lt;/h2&gt;

&lt;p&gt;In this example I used a baseline 24 vCPU database VM running at 3x transaction rate, then two 24 vCPU database VMs at a constant 3x transaction rate. &lt;/p&gt;

&lt;p&gt;The average baseline CPU utilisation (see Example 1 above) was 76% for the VM and 85% for the host. A single 24 vCPU database VM is using all 24 physical processors. Running two 24 vCPU VMs means the VMs are competing for resources and are using all 48 logical execution threads on the server. &lt;/p&gt;



&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fovercommit_2vm.png" title="Over-committed host" width="773" height="405"&gt;&lt;br&gt;


&lt;p&gt;Remembering that the host was not 100% utilised with a single VM, we can still see a significant drop in throughput and performance as two very busy 24 vCPU VMs attempt to use the 24 physical cores on the host (even with HT). Although IRIS is very efficient at using the available CPU resources, there is still a 16% drop in database throughput per VM and, more importantly, a more than 50% increase in component (user) response time. &lt;/p&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;My aim for this post is to answer the common questions. See the reference section below for a deeper dive into host CPU resources and the VMware CPU scheduler.&lt;/p&gt;

&lt;p&gt;Even though there are many levels of nerd-knob twiddling and ESXi rat holes to go down to squeeze the last drop of performance out of your system, the basic rules are pretty simple.&lt;/p&gt;

&lt;p&gt;For &lt;em&gt;large production databases&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Plan for one vCPU per physical CPU core.&lt;/li&gt;
&lt;li&gt;Consider NUMA and ideally size VMs to keep CPU and memory local to a NUMA node. &lt;/li&gt;
&lt;li&gt;Right-size virtual machines. Add vCPUs only when needed. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to consolidate VMs remember large databases are very busy and will heavily utilise CPUs (physical and logical) at peak times. Don't oversubscribe them until your monitoring tells you it is safe.&lt;/p&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blogs.vmware.com/vsphere/2014/02/overcommit-vcpupcpu-monster-vms.html" rel="noopener noreferrer"&gt;VMware Blog - When to Overcommit vCPU:pCPU for Monster VMs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://frankdenneman.nl/2016/07/06/introduction-2016-numa-deep-dive-series" rel="noopener noreferrer"&gt;Introduction 2016 NUMA Deep Dive Series&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-cpu-sched-performance-white-paper.pdf" rel="noopener noreferrer"&gt;The CPU Scheduler in VMware vSphere 5.1&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  Tests
&lt;/h2&gt;

&lt;p&gt;I ran the examples in this post on a vSphere cluster made up of two-processor Dell R730s attached to an all-flash array. During the examples there were no bottlenecks on the network or storage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IRIS 2016.2.1.803.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PowerEdge R730&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz&lt;/li&gt;
&lt;li&gt;16x 16GB RDIMM, 2133 MT/s, Dual Rank, x4 Data Width&lt;/li&gt;
&lt;li&gt;SAS 12Gbps HBA External Controller &lt;/li&gt;
&lt;li&gt;HyperThreading (HT) on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PowerVault MD3420, 12G SAS, 2U-24 drive &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;24x 960GB Solid State Drive SAS Read Intensive MLC 12Gbps 2.5in Hot-plug Drive, PX04SR &lt;/li&gt;
&lt;li&gt;2 Controller, 12G SAS, 2U MD34xx, 8G Cache &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;VMware ESXi 6.0.0 build-2494585&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VMs are configured for best practice; VMXNET3, PVSCSI, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RHEL 7&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large pages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The baseline 1x rate averaged 700,000 glorefs/second (database accesses/second). The 5x rate averaged more than 3,000,000 glorefs/second for 24 vCPUs. The tests were allowed to burn in until constant performance was achieved, and then 15-minute samples were taken and averaged. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These examples are only to show the theory; you MUST validate with your own application!&lt;/p&gt;
&lt;/blockquote&gt;





&lt;h1&gt;
  
  
  Additional Links (Feb 2026)
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.vmware.com/docs/vsphere-esxi-vcenter-server-80U3-performance-best-practices" rel="noopener noreferrer"&gt;Performance Best Practices for VMware vSphere 8.0 Update 3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.vmware.com/docs/vsphere80-virtual-topology-perf" rel="noopener noreferrer"&gt;VMware vSphere 8.0 Virtual Topology Performance Study&lt;/a&gt; (also referenced in the &lt;a href="https://blogs.vmware.com/cloud-foundation/2022/11/10/extreme-performance-series-automatic-vtopology-for-vms-vsphere8/" rel="noopener noreferrer"&gt;Extreme Performance Series blog post&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.vmware.com/techpapers/2021/vsphere70u2-cpu-sched-amd-epyc.html" rel="noopener noreferrer"&gt;Performance Optimizations in VMware vSphere 7.0 U2 CPU Scheduler for AMD EPYC Processors&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blogs.vmware.com/cloud-foundation/2017/03/09/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb/" rel="noopener noreferrer"&gt;Virtual Machine vCPU and vNUMA Rightsizing – Guidelines (Mark Achtemichuk)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://frankdenneman.nl/2021/12/02/vsphere-7-cores-per-socket-and-virtual-numa/" rel="noopener noreferrer"&gt;vSphere 7 Cores per Socket and Virtual NUMA (Frank Denneman, 2021)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://frankdenneman.nl/2022/11/03/vsphere-8-cpu-topology-for-large-memory-footprint-vms-exceeding-numa-boundaries/" rel="noopener noreferrer"&gt;vSphere 8 CPU Topology for Large Memory Footprint VMs (Frank Denneman, 2022)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://frankdenneman.nl/2016/12/12/decoupling-cores-per-socket-virtual-numa-topology-vsphere-6-5/" rel="noopener noreferrer"&gt;Decoupling of Cores per Socket from Virtual NUMA Topology in vSphere 6.5 (Frank Denneman)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/tuning-guides/58003_amd-epyc-9004-tg-vmware-vsphere.pdf" rel="noopener noreferrer"&gt;AMD EPYC 9004 VMware vSphere Tuning Guide (AMD)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.vmware.com/docs/perf-latency-tuning-vsphere8" rel="noopener noreferrer"&gt;Performance Tuning for Latency-Sensitive Workloads in vSphere 8 (January 2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://williamlam.com/2022/11/virtual-numa-vnuma-and-cpu-hot-add-support-in-vsphere-8.html" rel="noopener noreferrer"&gt;vNUMA and CPU Hot-Add support in vSphere 8 (William Lam)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://frankdenneman.nl/category/numa/" rel="noopener noreferrer"&gt;NUMA Deep Dive Series (Frank Denneman)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redis</category>
      <category>a11y</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>InterSystems Data Platforms and performance – VM Backups and IRIS freeze/thaw scripts</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Wed, 25 Mar 2026 19:36:07 +0000</pubDate>
      <link>https://dev.to/intersystems/intersystems-data-platforms-and-performance-vm-backups-and-iris-freezethaw-scripts-7gi</link>
      <guid>https://dev.to/intersystems/intersystems-data-platforms-and-performance-vm-backups-and-iris-freezethaw-scripts-7gi</guid>
      <description>&lt;p&gt;Hi, this post was initially written for Caché. In June 2023, I finally updated it for IRIS. If you are revisiting the post since then, the only real change is substituting Caché for IRIS! I also updated the links for IRIS documentation and fixed a few typos and grammatical errors.  Enjoy :)&lt;/p&gt;

&lt;p&gt;In this post, I show strategies for backing up InterSystems IRIS using &lt;em&gt;External Backup&lt;/em&gt; with examples of integrating with snapshot-based solutions. Most solutions I see today are deployed on Linux on VMware, so a lot of the post shows how solutions integrate VMware snapshot technology as examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  IRIS backup - batteries included?
&lt;/h2&gt;

&lt;p&gt;IRIS online backup is included with an IRIS install for uninterrupted backup of IRIS databases. But there are more efficient backup solutions you should consider as systems scale up. &lt;em&gt;External Backup&lt;/em&gt; integrated with snapshot technologies is the recommended solution for backing up systems, including IRIS databases. &lt;/p&gt;

&lt;h2&gt;
  
  
  Are there any special considerations for external backup?
&lt;/h2&gt;

&lt;p&gt;Online documentation for &lt;a href="http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_backup#GCDI_backup_methods_ext" rel="noopener noreferrer"&gt;External Backup&lt;/a&gt; has all the details. A key consideration is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"To ensure the integrity of the snapshot, IRIS provides methods to freeze writes to databases while the snapshot is created. Only physical writes to the database files are frozen during the snapshot creation, allowing user processes to continue performing updates in memory uninterrupted."&lt;/p&gt;
&lt;/blockquote&gt;
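&lt;p&gt;The freeze and thaw methods referred to above live in the &lt;code&gt;Backup.General&lt;/code&gt; class. A minimal sketch of the pre- and post-snapshot hooks a backup tool would call follows; the instance name and command path are placeholders, and your scripts should check the return status as shown in the sample scripts in the Backup and Restore Guide:&lt;/p&gt;

```shell
#!/bin/sh
# Minimal pre/post snapshot hooks (sketch; instance name is a placeholder).
# ExternalFreeze suspends physical database writes; ExternalThaw resumes them.
IRIS_CMD="${IRIS_CMD:-iris}"
INSTANCE="${INSTANCE:-IRIS}"

freeze() {
    "$IRIS_CMD" session "$INSTANCE" -U %SYS \
        "##Class(Backup.General).ExternalFreeze()"
}

thaw() {
    "$IRIS_CMD" session "$INSTANCE" -U %SYS \
        "##Class(Backup.General).ExternalThaw()"
}
```

&lt;p&gt;A backup product would call &lt;code&gt;freeze&lt;/code&gt; in its pre-snapshot hook and &lt;code&gt;thaw&lt;/code&gt; in its post-snapshot hook, keeping the window between them as short as possible.&lt;/p&gt;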

&lt;p&gt;It is also important to note that part of the snapshot process on virtualised systems causes a short pause on the VM being backed up, often called stun time. The stun is usually less than a second, so it is not noticed by users and does not impact system operation; however, in some circumstances the stun can last longer. If the stun is longer than the quality of service (QoS) timeout for IRIS database mirroring, the backup node will assume there has been a failure on the primary and will fail over. Later in this post, I explain how you can review stun times in case you need to change the mirroring QoS timeout.&lt;/p&gt;



&lt;br&gt;
&lt;a href="https://community.intersystems.com/post/capacity-planning-and-performance-series-index" rel="noopener noreferrer"&gt;A list of other InterSystems Data Platforms and performance series posts is here.&lt;/a&gt;

&lt;p&gt;You should also review &lt;a href="http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_backup" rel="noopener noreferrer"&gt;IRIS online documentation Backup and Restore Guide for this post.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Backup choices
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Minimal Backup Solution - IRIS Online Backup
&lt;/h2&gt;

&lt;p&gt;If you have nothing else, this comes in the box with the InterSystems data platform for zero downtime backups. Remember, &lt;em&gt;IRIS online backup&lt;/em&gt; only backs up IRIS database files, capturing all blocks in the databases that are allocated for data with the output written to a sequential file. IRIS Online Backup supports cumulative and incremental backups.  &lt;/p&gt;

&lt;p&gt;In the context of VMware, an IRIS Online Backup is an in-guest backup solution. Like other in-guest solutions, IRIS Online Backup operations are essentially the same whether the application is virtualised or runs directly on a host. IRIS Online Backup must be coordinated with a system backup to copy the IRIS online backup output file to backup media and all other file systems used by your application. At a minimum, system backup must include the installation directory, journal and alternate journal directories, application files, and any directory containing external files the application uses. &lt;/p&gt;

&lt;p&gt;IRIS Online Backup should be considered an entry-level approach for smaller sites wishing to implement a low-cost solution to back up only IRIS databases, or for ad-hoc backups; for example, it is helpful in the set-up of mirroring. However, as databases increase in size, and as IRIS is typically only part of a customer's data landscape, &lt;em&gt;External Backups&lt;/em&gt; combined with snapshot technology and third-party utilities are recommended as best practice, with advantages such as the backup of non-database files, faster restore times, an enterprise-wide view of data, and better catalogue and management tools.&lt;/p&gt;



&lt;h2&gt;
  
  
  Recommended Backup Solution - External backup
&lt;/h2&gt;

&lt;p&gt;Using VMware as an example, virtualising adds functionality and choices for protecting entire VMs. Once you have virtualised a solution, you have effectively encapsulated your system — including the operating system, the application and the data — all within .vmdk (and some other) files. When required, these files can be straightforward to manage and can be used to recover a whole system, which is very different from the same situation on a physical system, where you must recover and configure the components separately: operating system, drivers, third-party applications, database and database files, etc. &lt;/p&gt;



&lt;h1&gt;
  
  
  VMware snapshot
&lt;/h1&gt;

&lt;p&gt;VMware’s vSphere Data Protection (VDP) and other third-party backup solutions for VM backup, such as Veeam or Commvault, take advantage of the functionality of VMware virtual machine snapshots to create backups. A high-level explanation of VMware snapshots follows; see the VMware documentation for more details.&lt;/p&gt;

&lt;p&gt;It is important to remember that snapshots are applied to the whole VM and that the operating system and any applications or the database engine are unaware that the snapshot is happening. Also, remember:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By themselves, VMware snapshots are not backups!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Snapshots &lt;em&gt;enable&lt;/em&gt; backup software to make backups, but they are not backups by themselves.&lt;/p&gt;

&lt;p&gt;VDP and third-party backup solutions use the VMware snapshot process in conjunction with the backup application to manage the creation and, very importantly, deletion of snapshots. At a high level, the process and sequence of events for an external backup using VMware snapshots are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Third-party backup software requests the ESXi host to trigger a VMware snapshot.&lt;/li&gt;
&lt;li&gt;The VM's .vmdk files are put into a read-only state, and a child delta .vmdk file is created for each of them.&lt;/li&gt;
&lt;li&gt;Copy-on-write is used: all changes to the VM are written to the delta files, and reads check the delta file first.&lt;/li&gt;
&lt;li&gt;The backup software manages copying the read-only parent .vmdk files to the backup target. &lt;/li&gt;
&lt;li&gt;When the backup is complete, the snapshot is committed: writes to the VM's disks resume, and the updated blocks in the delta files are written back to the parents. &lt;/li&gt;
&lt;li&gt;The VMware snapshot is now removed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Backup solutions also use features such as Changed Block Tracking (CBT) to allow incremental or cumulative backups for speed and efficiency (especially important for saving space). They typically also add other important functions such as data deduplication and compression, scheduling, mounting VMs with changed IP addresses for integrity checks, full-VM and file-level restores, and catalogue management.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;VMware snapshots that are not appropriately managed or left to run for a long time can use excessive storage (as more and more data is changed, delta files continue to grow) and also slow down your VMs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You should think carefully before running a manual snapshot on a production instance. Why are you doing this? What will happen if you revert &lt;em&gt;back in time&lt;/em&gt; to when the snapshot was created? What happens to all the application transactions between creation and rollback? &lt;/p&gt;

&lt;p&gt;It is OK if your backup software creates and deletes a snapshot; the snapshot should only exist for a short time. A crucial part of your backup strategy is choosing a time when the system has low usage, to minimise the impact on users and performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  IRIS database considerations for snapshots
&lt;/h2&gt;

&lt;p&gt;Before the snapshot is taken, the database must be quiesced so that all pending writes are committed, and the database is in a consistent state. IRIS provides methods and an API to commit and then freeze (stop) writes to databases for a short period while the snapshot is created. This way, only physical writes to the database files are frozen during the creation of the snapshot, allowing user processes to continue performing updates in memory uninterrupted. Once the snapshot has been triggered, database writes are thawed, and the backup continues copying data to backup media. The time between freeze and thaw should be quick (a few seconds).&lt;/p&gt;

&lt;p&gt;In addition to pausing writes, the IRIS freeze also handles switching journal files and writing a backup marker to the journal. The journal file continues to be written normally while physical database writes are frozen. If the system were to crash while the physical database writes are frozen, data would be recovered from the journal as usual during start-up.&lt;/p&gt;

&lt;p&gt;The following diagram shows freeze and thaw with VMware snapshot steps to create a backup with a consistent database image.&lt;/p&gt;



&lt;h2&gt;
  
  
  VMware snapshot + IRIS freeze/thaw timeline (not to scale)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zdgh6huxqq1nod38w7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zdgh6huxqq1nod38w7d.png" alt=" " width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note the short time between Freeze and Thaw -- only the time to create the snapshot, not the time to copy the read-only parent to the backup target.&lt;/em&gt;&lt;/p&gt;


&lt;/blockquote&gt;



&lt;h2&gt;
  
  
  Summary - Why do I need to freeze and thaw the IRIS database when VMware is taking a snapshot?
&lt;/h2&gt;

&lt;p&gt;The process of freezing and thawing the database is crucial to ensure data consistency and integrity. This is because:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Consistency:&lt;/strong&gt; IRIS can be writing journals, the WIJ (write image journal), or random database blocks at any time. A snapshot captures the state of the VM at a specific point in time. If the database is actively being written to during the snapshot, the snapshot can contain partial or inconsistent data. Freezing the database ensures that pending writes are committed and no new physical writes occur during the snapshot, producing a consistent on-disk state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quiescing the File System:&lt;/strong&gt; VMware's snapshot technology can quiesce the file system to ensure file-system consistency. However, this does not account for application- or database-level consistency. Freezing the database ensures that the database is in a consistent state at the application level, complementing VMware's quiescing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reducing Recovery Time:&lt;/strong&gt; Restoring from a snapshot that was taken without freezing the database might require additional steps like database repair or consistency checks, which can significantly increase recovery time. Freezing and thawing ensure the database is immediately usable upon restoration, reducing downtime.&lt;/p&gt;



&lt;h1&gt;
  
  
  Integrating IRIS Freeze and Thaw
&lt;/h1&gt;

&lt;p&gt;vSphere allows a script to be automatically called on either side of snapshot creation; this is when IRIS Freeze and Thaw are called. Note: For this functionality to work correctly, the ESXi host requests the guest operating system to quiesce the disks via &lt;em&gt;VMware Tools.&lt;/em&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;VMware tools must be installed in the guest operating system.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The scripts must adhere to strict name and location rules. File permissions must also be set. For VMware on Linux, the script names are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /usr/sbin/pre-freeze-script
# /usr/sbin/post-thaw-script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Below are examples of freeze and thaw scripts our team uses with Veeam backup for our internal test lab instances; these scripts should also work with other solutions. These examples have been tested and used on vSphere 6 and Red Hat 7.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;While these scripts can be used as examples and illustrate the method, you must validate them for your environments!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Example pre-freeze-script:
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
#
# Script called by VMWare immediately prior to snapshot for backup.
# Tested on Red Hat 7.2
#

LOGDIR=/var/log
SNAPLOG=$LOGDIR/snapshot.log

echo &amp;gt;&amp;gt; $SNAPLOG
echo "`date`: Pre freeze script started" &amp;gt;&amp;gt; $SNAPLOG
exit_code=0

# Only for running instances
for INST in `iris qall 2&amp;gt;/dev/null | tail -n +3 | grep '^up' | cut -c5-  | awk '{print $1}'`; do

    echo "`date`: Attempting to freeze $INST" &amp;gt;&amp;gt; $SNAPLOG

    # Detailed instance-specific log
    LOGFILE=$LOGDIR/$INST-pre_post.log

    # Freeze
    irissession $INST -U '%SYS' "##Class(Backup.General).ExternalFreeze(\"$LOGFILE\",,,,,,1800)" &gt;&gt; $SNAPLOG 2&gt;&amp;1
    status=$?

    case $status in
        5) echo "`date`:   $INST IS FROZEN" &amp;gt;&amp;gt; $SNAPLOG
           ;;
        3) echo "`date`:   $INST FREEZE FAILED" &amp;gt;&amp;gt; $SNAPLOG
           logger -p user.err "freeze of $INST failed"
           exit_code=1
           ;;
        *) echo "`date`:   ERROR: Unknown status code: $status" &amp;gt;&amp;gt; $SNAPLOG
           logger -p user.err "ERROR when freezing $INST"
           exit_code=1
           ;;
    esac
    echo "`date`:   Completed freeze of $INST" &amp;gt;&amp;gt; $SNAPLOG
done

echo "`date`: Pre freeze script finished" &amp;gt;&amp;gt; $SNAPLOG
exit $exit_code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
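&lt;p&gt;The instance-discovery pipeline used in the scripts can be checked in isolation before you rely on it. The sample &lt;code&gt;iris qall&lt;/code&gt; output below is illustrative only (an assumption for the demo); verify the exact column layout on your own IRIS version.&lt;/p&gt;

```shell
# Sample output shaped like `iris qall` (illustrative only; verify the
# exact columns on your own IRIS version).
sample_qall() {
cat <<'EOF'
Configuration list:
-------------------
up      H20152   2024.1.0
dn      TEST     2024.1.0
up      DEV      2023.1.2
EOF
}

# The same pipeline the script uses: skip the two header lines, keep
# only running ('up') instances, and print the instance name.
sample_qall | tail -n +3 | grep '^up' | cut -c5- | awk '{print $1}'
```

&lt;p&gt;Running this prints only the names of the running instances, which is exactly what the freeze and thaw loops iterate over.&lt;/p&gt;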
&lt;h3&gt;
  
  
  Example thaw script:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
#
# Script called by VMWare immediately after backup snapshot has been created
# Tested on Red Hat 7.2
#

LOGDIR=/var/log
SNAPLOG=$LOGDIR/snapshot.log

echo &amp;gt;&amp;gt; $SNAPLOG
echo "`date`: Post thaw script started" &amp;gt;&amp;gt; $SNAPLOG
exit_code=0

if [ -d "$LOGDIR" ]; then

    # Only for running instances    
    for INST in `iris qall 2&amp;gt;/dev/null | tail -n +3 | grep '^up' | cut -c5-  | awk '{print $1}'`; do

        echo "`date`: Attempting to thaw $INST" &amp;gt;&amp;gt; $SNAPLOG

        # Detailed instance-specific log
        LOGFILE=$LOGDIR/$INST-pre_post.log

        # Thaw
        irissession $INST -U%SYS "##Class(Backup.General).ExternalThaw(\"$LOGFILE\")" &gt;&gt; $SNAPLOG 2&gt;&amp;1
        status=$?

        case $status in
            5) echo "`date`:   $INST IS THAWED" &gt;&gt; $SNAPLOG
               irissession $INST -U%SYS "##Class(Backup.General).ExternalSetHistory(\"$LOGFILE\")" &gt;&gt; $SNAPLOG 2&gt;&amp;1
               ;;
            3) echo "`date`:   $INST THAW FAILED" &amp;gt;&amp;gt; $SNAPLOG
               logger -p user.err "thaw of $INST failed"
               exit_code=1
               ;;
            *) echo "`date`:   ERROR: Unknown status code: $status" &amp;gt;&amp;gt; $SNAPLOG
               logger -p user.err "ERROR when thawing $INST"
               exit_code=1
               ;;
        esac
        echo "`date`:   Completed thaw of $INST" &amp;gt;&amp;gt; $SNAPLOG
    done
fi

echo "`date`: Post thaw script finished" &amp;gt;&amp;gt; $SNAPLOG
exit $exit_code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Remember to set permissions:
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sudo chown root.root /usr/sbin/pre-freeze-script /usr/sbin/post-thaw-script
# sudo chmod 0700 /usr/sbin/pre-freeze-script /usr/sbin/post-thaw-script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing Freeze and Thaw
&lt;/h2&gt;

&lt;p&gt;To test that the scripts run correctly, you can manually take a snapshot of a VM and check the script output. The following screenshot shows the "Take VM Snapshot" dialogue and options. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsolgyriys7js56de5epc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsolgyriys7js56de5epc.png" alt=" " width="427" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deselect&lt;/strong&gt; - "Snapshot the virtual machine's memory".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select&lt;/strong&gt; - the "Quiesce guest file system (Needs VMware Tools installed)" check box to pause running processes on the guest operating system so that file system contents are in a known consistent state when you take the snapshot.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important! After your test, remember to delete the snapshot!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the quiesce flag is true, and the virtual machine is powered on when the snapshot is taken, VMware Tools is used to quiesce the file system in the virtual machine. Quiescing a file system is a process of bringing the on-disk data into a state suitable for backups. This process might include such operations as flushing dirty buffers from the operating system's in-memory cache to disk. &lt;/p&gt;

&lt;p&gt;The following output shows the contents of the &lt;code&gt;$SNAPLOG&lt;/code&gt; log file set in the example freeze/thaw scripts above after running a backup that includes a snapshot as part of its operation. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Wed Jan  4 16:30:35 EST 2017: Pre freeze script started
Wed Jan  4 16:30:35 EST 2017: Attempting to freeze H20152
Wed Jan  4 16:30:36 EST 2017:   H20152 IS FROZEN
Wed Jan  4 16:30:36 EST 2017:   Completed freeze of H20152
Wed Jan  4 16:30:36 EST 2017: Pre freeze script finished

Wed Jan  4 16:30:41 EST 2017: Post thaw script started
Wed Jan  4 16:30:41 EST 2017: Attempting to thaw H20152
Wed Jan  4 16:30:42 EST 2017:   H20152 IS THAWED
Wed Jan  4 16:30:42 EST 2017:   Completed thaw of H20152
Wed Jan  4 16:30:42 EST 2017: Post thaw script finished
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This example shows 6 seconds of elapsed time between freeze and thaw (16:30:36-16:30:42). User operations are NOT interrupted during this period. &lt;em&gt;You will have to gather metrics from your own systems&lt;/em&gt;, but for some context, this example is from a system running an application benchmark on a VM with no IO bottlenecks and an average of more than 2 million Glorefs/sec, 170,000 Gloupds/sec, and an average 1,100 physical reads/sec and 3,000 writes per write daemon cycle. &lt;/p&gt;
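&lt;p&gt;Using the log lines above as sample data, the freeze window can be computed with a few lines of shell. This is a sketch that assumes GNU &lt;code&gt;date&lt;/code&gt; and the exact timestamp format written by the example scripts.&lt;/p&gt;

```shell
# Two lines copied from the example snapshot log above.
cat > /tmp/snapshot.log <<'EOF'
Wed Jan  4 16:30:36 EST 2017:   H20152 IS FROZEN
Wed Jan  4 16:30:42 EST 2017:   H20152 IS THAWED
EOF

# Strip the message after the year, convert the timestamp to epoch
# seconds (GNU date), and report the freeze window.
epoch() { date -d "$(awk '{gsub(":$","",$6); print $1,$2,$3,$4,$5,$6}')" +%s; }
freeze=$(grep 'IS FROZEN' /tmp/snapshot.log | epoch)
thaw=$(grep 'IS THAWED' /tmp/snapshot.log | epoch)
echo "writes frozen for $((thaw - freeze)) seconds"
```

&lt;p&gt;The same approach can be pointed at your real snapshot log to trend freeze windows over time.&lt;/p&gt;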

&lt;blockquote&gt;
&lt;p&gt;Remember that memory is not part of the snapshot, so on restarting, the VM will reboot and recover. Database files will be consistent. You don’t want to "resume" a backup; you want the files at a known point in time. You can then roll forward journals and whatever other recovery steps are needed for the application and transactional consistency once the files are recovered.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For additional data protection, a &lt;a href="http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_journal#GCDI_journal_util_JRNSWTCH" rel="noopener noreferrer"&gt;journal switch&lt;/a&gt; can be done by itself, and journals can be backed up or replicated to another location, for example, hourly.&lt;/p&gt;

&lt;p&gt;Below is the output of the &lt;code&gt;$LOGFILE&lt;/code&gt; set in the example freeze/thaw scripts above, showing journal details for the snapshot.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;01/04/2017 16:30:35: Backup.General.ExternalFreeze: Suspending system

Journal file switched to:
/trak/jnl/jrnpri/h20152/H20152_20170104.011
01/04/2017 16:30:35: Backup.General.ExternalFreeze: Start a journal restore for this backup with journal file: /trak/jnl/jrnpri/h20152/H20152_20170104.011

Journal marker set at
offset 197192 of /trak/jnl/jrnpri/h20152/H20152_20170104.011
01/04/2017 16:30:36: Backup.General.ExternalFreeze: System suspended
01/04/2017 16:30:41: Backup.General.ExternalThaw: Resuming system
01/04/2017 16:30:42: Backup.General.ExternalThaw: System resumed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  VM Stun Times
&lt;/h1&gt;

&lt;p&gt;When a VM snapshot is created, and again when the backup is complete and the snapshot is committed, the VM must be frozen for a short period. This short freeze is often referred to as stunning the VM. A good blog post on stun times is &lt;a href="http://cormachogan.com/2015/04/28/when-and-why-do-we-stun-a-virtual-machine/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I summarise the details below and put them in the context of IRIS database considerations.&lt;/p&gt;

&lt;p&gt;From the post on stun times: “To create a VM snapshot, the VM is “stunned” in order to (i) serialize device state to disk, and (ii) close the current running disk and create a snapshot point. … When consolidating, the VM is “stunned” in order to close the disks and put them in a state that is appropriate for consolidation.”&lt;/p&gt;

&lt;p&gt;Stun time is typically a few hundred milliseconds; however, if there is very high disk write activity during the commit phase, stun time could be several seconds. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the VM is a Primary or Backup member participating in IRIS Database Mirroring and the stun time is longer than the mirror Quality of Service (QoS) timeout, the mirror will report the Primary VM as failed and initiate a mirror takeover.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Update March 2018:&lt;/strong&gt;&lt;br&gt;
My colleague, Peter Greskoff, pointed out that a backup mirror member could initiate failover in as little as just over half the QoS timeout during a VM stun, or at any other time the primary mirror member is unavailable. &lt;/p&gt;

&lt;p&gt;For a detailed description of QoS considerations and failover scenarios, see this great post: &lt;a href="https://community.intersystems.com/post/quality-service-timeout-guide-mirroring" rel="noopener noreferrer"&gt;Quality of Service Timeout Guide for Mirroring&lt;/a&gt;. However, the short story regarding VM stun times and QoS is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the backup mirror does not receive any messages from the primary mirror within half of the QoS timeout, it will send a message to ensure the primary is still alive. The backup then waits an additional half QoS time for a response from the primary machine. If there is no response from the primary, it is assumed to be down, and the backup will take over.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On a busy system, journals are continuously sent from the primary to the backup mirror, and the backup would not need to check if the primary is still alive. However, during a quiet time — when backups are more likely to happen — if the application is idle, there may be no messages between the primary and backup mirror for more than half the QoS time.&lt;/p&gt;

&lt;p&gt;Here is Peter’s example. Consider this timeline for an idle system with a QoS timeout of 8 seconds and a VM stun time of 7 seconds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;:00 Primary pings the arbiter with a keepalive, arbiter responds immediately&lt;/li&gt;
&lt;li&gt;:01 backup member sends keepalive to the primary, primary responds immediately&lt;/li&gt;
&lt;li&gt;:02&lt;/li&gt;
&lt;li&gt;:03 VM stun begins&lt;/li&gt;
&lt;li&gt;:04 primary tries to send keepalive to the arbiter, but it doesn’t get through until stun is complete&lt;/li&gt;
&lt;li&gt;:05 backup member sends a ping to primary, as half of QoS has expired&lt;/li&gt;
&lt;li&gt;:06&lt;/li&gt;
&lt;li&gt;:07&lt;/li&gt;
&lt;li&gt;:08 arbiter hasn’t heard from the primary in a full QoS timeout, so it closes the connection&lt;/li&gt;
&lt;li&gt;:09 The backup hasn’t gotten a response from the primary and confirms with the arbiter that it also lost connection, so it takes over&lt;/li&gt;
&lt;li&gt;:10 VM stun ends, too late!!&lt;/li&gt;
&lt;/ul&gt;
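&lt;p&gt;The arithmetic behind this timeline can be sketched as a quick check. This is illustrative only: actual failover behaviour depends on arbiter mode and message timing, and the values below are the example's, not recommendations.&lt;/p&gt;

```shell
qos=8    # mirror QoS timeout in seconds (example value)
stun=7   # observed VM stun time in seconds (example value)

# Worst case on an idle system: the backup pings the primary at QoS/2
# and waits another QoS/2 for a reply, so a stun that spans that window
# can trigger takeover even though the stun is shorter than QoS itself.
half=$((qos / 2))
if [ "$stun" -gt "$half" ]; then
    echo "stun of ${stun}s exceeds half the QoS timeout (${half}s): failover is possible"
else
    echo "stun of ${stun}s is within half the QoS timeout (${half}s)"
fi
```

&lt;p&gt;In other words, when comparing measured stun times against your mirror settings, half the QoS timeout is the conservative threshold to watch, not the full timeout.&lt;/p&gt;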

&lt;p&gt;Please also read the section &lt;em&gt;Pitfalls and Concerns when Configuring your Quality of Service Timeout&lt;/em&gt; in the linked post above to understand why QoS should be only as long as necessary. A QoS timeout that is too long, especially more than 30 seconds, can also cause problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End update March 2018:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For more information on Mirroring QoS, also see the &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_tunable_params_qos" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Strategies to keep stun time to a minimum include running backups when database activity is low and having well-set-up storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As noted above, when creating a snapshot, there are several options you can specify; one option is to include the memory state in the snapshot. Remember, &lt;em&gt;memory state is NOT needed for IRIS database backups&lt;/em&gt;. If the memory flag is set, a dump of the internal state of the virtual machine is included in the snapshot, and memory snapshots take much longer to create. Memory snapshots allow reversion to a running virtual machine state as it was when the snapshot was taken; this is NOT required for a database file backup.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When taking a memory snapshot, the entire virtual machine is stunned while its memory is written out, and &lt;strong&gt;stun time is variable&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As noted previously, for backups, the quiesce flag must be set to true for manual snapshots or by the backup software to guarantee a consistent and usable backup. &lt;/p&gt;

&lt;h2&gt;
  
  
  Reviewing VMware logs for stun times
&lt;/h2&gt;

&lt;p&gt;Starting from ESXi 5.0, snapshot stun times are logged in each virtual machine's log file (vmware.log) with messages similar to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;2017-01-04T22:15:58.846Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 38123 us&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Stun times are in microseconds, so in the above example, &lt;code&gt;38123 us&lt;/code&gt; is 38123/1,000,000 seconds or 0.038 seconds. &lt;/p&gt;

&lt;p&gt;To be sure that stun times are within acceptable limits or to troubleshoot if you suspect long stun times are causing problems, you can download and review the vmware.log files from the folder of the VM that you are interested in. Once downloaded, you can extract and sort the log using the example Linux commands below. &lt;/p&gt;

&lt;h3&gt;
  
  
  Example downloading vmware.log files
&lt;/h3&gt;

&lt;p&gt;There are several ways to download support logs, including creating a VMware support bundle through the vSphere management console or from the ESXi host command line. Consult the VMware documentation for all the details, but below is a simple method to create and gather a much smaller support bundle that includes the &lt;code&gt;vmware.log&lt;/code&gt; file so you can review stun times. &lt;/p&gt;

&lt;p&gt;You will need the long name of the directory where the VM files are located. Log on to the ESXi host where the database VM is running using ssh and use the command &lt;code&gt;vim-cmd vmsvc/getallvms&lt;/code&gt; to list the vmx files and the unique long names associated with them. &lt;/p&gt;

&lt;p&gt;For example, the long name for the example database VM used in this post is output as:&lt;br&gt;
&lt;code&gt;26     vsan-tc2016-db1              [vsanDatastore] e2fe4e58-dbd1-5e79-e3e2-246e9613a6f0/vsan-tc2016-db1.vmx              rhel7_64Guest           vmx-11&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, run the command to gather and bundle only log files:&lt;br&gt;
&lt;code&gt;vm-support -a VirtualMachines:logs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command will echo the location of the support bundle, for example:&lt;br&gt;
 &lt;code&gt;To see the files collected, check '/vmfs/volumes/datastore1 (3)/esx-esxvsan4.iscinternal.com-2016-12-30--07.19-9235879.tgz'&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;You can now use sftp to transfer the file off the host for further processing and review. &lt;/p&gt;

&lt;p&gt;In this example, after uncompressing the support bundle, navigate to the path corresponding to the database VM’s long name. For example, in this case:&lt;br&gt;
 &lt;code&gt;&amp;lt;bundle name&amp;gt;/vmfs/volumes/&amp;lt;host long name&amp;gt;/e2fe4e58-dbd1-5e79-e3e2-246e9613a6f0&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;You will see several numbered log files; the most recent log file has no number, i.e. &lt;code&gt;vmware.log&lt;/code&gt;. The log may be only a few hundred KB, but it contains a lot of information; we care about the stun/unstun times, which are easy to find with &lt;code&gt;grep&lt;/code&gt;. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ grep Unstun vmware.log
2017-01-04T21:30:19.662Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 1091706 us
--- 
2017-01-04T22:15:58.846Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 38123 us
2017-01-04T22:15:59.573Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 298346 us
2017-01-04T22:16:03.672Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 301099 us
2017-01-04T22:16:06.471Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 341616 us
2017-01-04T22:16:24.813Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 264392 us
2017-01-04T22:16:30.921Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 221633 us
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can see two groups of stun times in the example: one from snapshot creation, and a second set 45 minutes later, one per disk, when the snapshot is deleted/consolidated (e.g. after the backup software has finished copying the read-only parent .vmdk files). The example shows that most stun times are sub-second, although the initial stun time is just over one second. &lt;/p&gt;

&lt;p&gt;Short stun times are not noticeable to an end user. However, system processes such as IRIS Database Mirroring continuously monitor whether an instance is ‘alive’. If the stun time exceeds the mirroring QoS timeout, the node may be considered uncontactable and ‘dead’, and a failover will be triggered. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tip:&lt;/em&gt; To review all the logs or for troubleshooting, a handy approach is to grep all the &lt;code&gt;vmware*.log&lt;/code&gt; files and look for any outliers or instances where the stun time is approaching the QoS timeout. The following command pipes the output to awk for formatting:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grep Unstun vmware* | awk '{ printf ("%'"'"'d", $8)} {print " ---" $0}' | sort -nr&lt;/code&gt;&lt;/p&gt;
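&lt;p&gt;As a runnable variation on the same idea, the following extracts the stun times from sample &lt;code&gt;vmware.log&lt;/code&gt; lines (field 8, in microseconds, as in the command above) and reports the worst case in seconds, so it can be compared directly with the mirror QoS timeout. The sample lines are copied from the output shown earlier.&lt;/p&gt;

```shell
# Sample lines in the shape shown earlier (illustrative values).
cat > /tmp/vmware.log <<'EOF'
2017-01-04T21:30:19.662Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 1091706 us
2017-01-04T22:15:58.846Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 38123 us
EOF

# Field 8 is the stun time in microseconds; keep the maximum and
# convert it to seconds for comparison against the QoS timeout.
grep Unstun /tmp/vmware.log | awk '
    $8 > max { max = $8 }
    END { printf "worst stun: %.3f seconds\n", max / 1000000 }'
```

&lt;p&gt;Point the same one-liner at the real log files from your support bundle to track worst-case stun times over time.&lt;/p&gt;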



&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;You should monitor your system regularly during normal operations to understand stun times and how they may impact QoS timeout for HA, such as mirroring. As noted, strategies to keep stun/unstun time to a minimum include running backups when database and storage activity is low and having well-set-up storage. For constant monitoring, logs may be processed by using VMware Log Insight or other tools.&lt;/p&gt;

&lt;p&gt;In future posts, I will revisit backup and restore operations for InterSystems Data Platforms. But for now, if you have any comments or suggestions based on the workflows of your systems, please share them via the comments section below.&lt;/p&gt;

</description>
      <category>backup</category>
      <category>beginners</category>
      <category>cache</category>
      <category>productivity</category>
    </item>
    <item>
      <title>InterSystems Data Platforms and performance – Part 9 InterSystems IRIS VMware Best Practice Guide</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Wed, 25 Mar 2026 19:26:55 +0000</pubDate>
      <link>https://dev.to/intersystems/intersystems-data-platforms-and-performance-part-9-intersystems-iris-vmware-best-practice-guide-4ooi</link>
      <guid>https://dev.to/intersystems/intersystems-data-platforms-and-performance-part-9-intersystems-iris-vmware-best-practice-guide-4ooi</guid>
      <description>&lt;p&gt;This post provides guidelines for configuration, system sizing and capacity planning when deploying IRIS and IRIS on a VMware ESXi. This post is based on and replaces the earlier IRIS-era guidance and reflects current VMware and InterSystems recommendations.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Last update Jan 2026. These guidelines are a best effort; remember that the requirements and capabilities of VMware and IRIS can change.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I jump right in with recommendations, assuming you already have an understanding of the VMware vSphere virtualization platform. The recommendations in this guide are not specific to any particular hardware or site-specific implementation, and are not intended as a fully comprehensive guide to planning and configuring a vSphere deployment -- rather, this is a checklist of best-practice configuration choices. I expect the recommendations to be evaluated for a specific site by your expert VMware implementation team.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://community.intersystems.com/post/capacity-planning-and-performance-series-index" rel="noopener noreferrer"&gt;A list of other posts in the InterSystems Data Platforms and performance series is here.&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Are InterSystems' products supported on ESXi?
&lt;/h2&gt;

&lt;p&gt;It is InterSystems policy and procedure to verify and release InterSystems’ products against processor types and operating systems including when operating systems are virtualised. For specifics see &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ISP_technologies" rel="noopener noreferrer"&gt;InterSystems Supported Technologies&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note: If you do not write your own applications, you must also check your application vendor’s support policy.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Supported Hardware
&lt;/h3&gt;

&lt;p&gt;VMware virtualization works well for IRIS when used with current server and storage components. IRIS on VMware virtualization has been deployed successfully at customer sites and proven in benchmarks for performance and scalability. There is no significant performance impact from VMware virtualization on properly configured storage, networks, and servers with later-model Intel Xeon and AMD EPYC processors.&lt;/p&gt;

&lt;p&gt;Generally, IRIS and applications are installed and configured on the guest operating system in the same way as on an equivalent bare-metal installation of the same operating system. &lt;/p&gt;

&lt;p&gt;It is the customer’s responsibility to check the &lt;a href="http://www.vmware.com/resources/compatibility/search.php" rel="noopener noreferrer"&gt;VMware compatibility guide&lt;/a&gt; for the specific servers and storage being used.&lt;/p&gt;



&lt;h1&gt;
  
  
  Virtualised architecture
&lt;/h1&gt;

&lt;p&gt;I see VMware commonly used in two standard configurations with IRIS applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where primary production database operating system instances are on a ‘bare-metal’ cluster, and VMware is only used for additional production and non-production instances such as web servers, printing, test, training and so on.&lt;/li&gt;
&lt;li&gt;Where ALL operating system instances, including primary production instances are virtualized.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post can be used as a guide for either scenario; however, the focus is on the second scenario, where all operating system instances, including production, are virtualised. The following diagram shows a typical physical server set-up for that configuration.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fcachebestpractice2016_201.png" class="article-body-image-wrapper"&gt;&lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fcachebestpractice2016_201.png" width="514" height="556"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 1. Simple virtualised IRIS architecture&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Figure 1 shows a common deployment with a minimum of three physical host servers to provide N+1 capacity and availability with host servers in a VMware HA cluster. Additional physical servers may be added to the cluster to scale resources. Additional physical servers may also be required for backup/restore media management and disaster recovery.&lt;/p&gt;




&lt;p&gt;For recommendations specific to &lt;em&gt;VMware vSAN&lt;/em&gt;, VMware's Hyper-Converged Infrastructure solution, see the following post: &lt;a href="https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity" rel="noopener noreferrer"&gt;Part 8 Hyper-Converged Infrastructure Capacity and Performance Planning&lt;/a&gt;. Most of the recommendations in this post can be applied to vSAN -- with the exception of some of the obvious differences in the Storage section below.&lt;/p&gt;



&lt;h1&gt;
  
  
  VMware versions
&lt;/h1&gt;

&lt;p&gt;The following table shows key recommendations for IRIS:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Recommendation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ESXi:&lt;/td&gt;
&lt;td&gt;Minimum vSphere 7.x or 8.x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vCenter:&lt;/td&gt;
&lt;td&gt;Required (VCSA preferred)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Licensing:&lt;/td&gt;
&lt;td&gt;Enterprise Plus strongly recommended. Contact VMware.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;ul&gt;
&lt;li&gt;DRS, HA, vMotion, vDS, and storage APIs are mandatory for production IRIS.
&lt;/li&gt;
&lt;li&gt;“Free” ESXi is &lt;strong&gt;not suitable&lt;/strong&gt; for enterprise IRIS deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;vSphere is a suite of products including vCenter Server that allows centralised system management of hosts and virtual machines via the vSphere client.&lt;/p&gt;

&lt;p&gt;VMware has several licensing models; ultimately choice of version is based on what best suits your current and future infrastructure planning. Contact Broadcom for the latest VMware licensing choices.&lt;/p&gt;



&lt;h1&gt;
  
  
  ESXi Host BIOS settings
&lt;/h1&gt;

&lt;p&gt;The ESXi host is the physical server. Before configuring BIOS you should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check with the hardware vendor that the server is running the latest BIOS&lt;/li&gt;
&lt;li&gt;Check whether there are any server/CPU model specific BIOS settings for VMware.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Default settings for server BIOS may not be optimal for VMware. The following settings can be used to optimize the physical host servers for best performance. Not all settings in the following table are available on all vendors’ servers.&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setting&lt;/th&gt;
&lt;th&gt;Required Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;All CPU cores:&lt;/td&gt;
&lt;td&gt;Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hyper-Threading:&lt;/td&gt;
&lt;td&gt;Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Turbo Boost:&lt;/td&gt;
&lt;td&gt;Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NUMA:&lt;/td&gt;
&lt;td&gt;Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hardware Virtualization (VT-x / AMD-V):&lt;/td&gt;
&lt;td&gt;Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Power Management:&lt;/td&gt;
&lt;td&gt;OS / ESXi controlled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unused devices:&lt;/td&gt;
&lt;td&gt;Disabled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;AMD EPYC note (Zen 3/4/5):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review &lt;strong&gt;NUMA Per Socket (NPS)&lt;/strong&gt; settings.
&lt;/li&gt;
&lt;li&gt;NPS=1 or NPS=2 is typically optimal for IRIS database workloads.&lt;/li&gt;
&lt;/ul&gt;



&lt;h1&gt;
  
  
  Memory
&lt;/h1&gt;

&lt;p&gt;The following key rules must be considered for memory allocation:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rule&lt;/th&gt;
&lt;th&gt;Recommendation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VM Memory Sizing:&lt;/td&gt;
&lt;td&gt;Size vRAM to fit within physical memory available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Production Database VMs:&lt;/td&gt;
&lt;td&gt;Reserve 100% memory (full reservation)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory Overcommitment:&lt;/td&gt;
&lt;td&gt;Avoid for production; acceptable for non-production&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NUMA Consideration:&lt;/td&gt;
&lt;td&gt;Ideally size VMs to keep memory local to NUMA node&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VMware Tools:&lt;/td&gt;
&lt;td&gt;Must be installed for memory management features&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Large Pages:&lt;/td&gt;
&lt;td&gt;Enable for database VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Swap:&lt;/td&gt;
&lt;td&gt;Avoid any swapping for production database VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Mandatory
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;All production IRIS database VMs MUST have 100% memory reservation.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Failure to do this causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shared memory swapping
&lt;/li&gt;
&lt;li&gt;Severe and unpredictable latency&lt;/li&gt;
&lt;li&gt;Database instability&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;When running multiple IRIS instances or other applications on a single physical host, VMware has several technologies for efficient memory management, such as transparent page sharing (TPS), ballooning, swap, and memory compression. For example, when multiple OS instances are running on the same host, TPS allows overcommitment of memory without performance degradation by eliminating redundant copies of pages in memory, which allows virtual machines to run with less memory than on a physical machine. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: VMware Tools must be installed in the operating system to take advantage of these and many other features of VMware.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Although these features exist to allow memory overcommitment, the recommendation is to always start by sizing the vRAM of all VMs to fit within the physical memory available. In production environments especially, carefully consider the impact of overcommitting memory, and overcommit only after collecting data to determine how much overcommitment is possible. To determine the effectiveness of memory sharing and the degree of acceptable overcommitment for a given IRIS instance, run the workload and use the VMware commands &lt;code&gt;resxtop&lt;/code&gt; or &lt;code&gt;esxtop&lt;/code&gt; to observe the actual savings. &lt;/p&gt;

&lt;p&gt;A good reference is to go back and look at the &lt;a href="https://community.intersystems.com/post/intersystems-data-platforms-and-performance-part-4-looking-memory" rel="noopener noreferrer"&gt;fourth post in this series on memory&lt;/a&gt; when planning your IRIS instance memory requirements, especially the section "VMware Virtualisation considerations", where I point out:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Set VMware memory reservation on production systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You &lt;em&gt;must&lt;/em&gt; avoid any swapping of shared memory. &lt;strong&gt;Reserve the full production database VM's memory (100% reservation)&lt;/strong&gt; to guarantee memory is available for your IRIS instance, so there is no swapping or ballooning, which would negatively impact database performance.&lt;/p&gt;

&lt;p&gt;Notes: Large memory reservations will impact vMotion operations, so it is important to take this into consideration when designing the vMotion/management network. A virtual machine can only be live migrated, or started on another host by VMware HA, if the target host has free physical memory greater than or equal to the size of the reservation. This is especially important for production IRIS VMs; for example, pay particular attention to HA Admission Control policies.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ensure capacity planning allows for distribution of VMs in event of HA failover. &lt;/p&gt;
&lt;/blockquote&gt;
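&lt;p&gt;As a rough sketch of the admission-control arithmetic, the following checks whether fully reserved VMs can still be placed after losing a host. The host sizes and reservations are illustrative, and first-fit placement is only an approximation of VMware's actual scheduler:&lt;/p&gt;

```python
# Conservative N+1 check: assume the largest host fails, then try to
# place every fully reserved VM on the remaining hosts.
# Host sizes and VM reservations below are illustrative examples.

def can_absorb_failover(host_ram_gb, vm_reservations_gb):
    """First-fit decreasing placement of 100%-reserved VMs onto the
    hosts that remain after the largest host fails."""
    free = sorted(host_ram_gb, reverse=True)[1:]   # drop the largest host
    for vm in sorted(vm_reservations_gb, reverse=True):
        for i, headroom in enumerate(free):
            if headroom >= vm:
                free[i] -= vm
                break
        else:
            return False                           # no host can take this VM
    return True

# Three 256 GB hosts; VM reservations total 328 GB
print(can_absorb_failover([256, 256, 256], [112, 64, 48, 32, 32, 24, 16]))  # True
print(can_absorb_failover([256, 256, 256], [200, 200, 200]))                # False
```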

&lt;p&gt;For non-production environments (test, train, etc.) more aggressive memory overcommitment is possible; however, do not overcommit IRIS shared memory. Instead, limit shared memory in the IRIS instance by allocating fewer global buffers. &lt;/p&gt;

&lt;p&gt;Current Intel processor architecture has a NUMA topology. Processors have their own local memory and can access memory on other processors in the same host. Not surprisingly, accessing local memory has lower latency than accessing remote memory. For a discussion of CPU, check out the &lt;a href="https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-3-focus-cpu" rel="noopener noreferrer"&gt;third post in this series&lt;/a&gt;, including a discussion about NUMA in the &lt;em&gt;comments section&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;As noted in the BIOS section above, a strategy for optimal performance is to size VMs up to, at most, the number of cores and amount of memory on a single processor. For example, if your capacity planning shows your biggest production IRIS database VM will be 14 vCPUs and 112 GB memory, then consider whether a cluster of servers with 2x 16-core processors and 256 GB memory is a good fit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Ideally&lt;/strong&gt; size VMs to keep memory local to a NUMA node. But don't get too hung up on this.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you need a "Monster VM" bigger than a NUMA node, that is OK; VMware will manage NUMA for optimal performance. It is also important to right-size your VMs and not allocate more resources than are needed (see below).&lt;/p&gt;
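&lt;p&gt;The NUMA sizing check amounts to simple arithmetic. A minimal sketch, using the example figures from this section (socket core counts and per-socket memory are hypothetical):&lt;/p&gt;

```python
# Rough check that a planned VM fits inside one NUMA node.
# Socket core counts and per-socket memory are hypothetical examples.

def fits_numa_node(vm_vcpus, vm_ram_gb, cores_per_socket, ram_per_socket_gb):
    """True if the VM can live entirely on one socket's cores and local memory."""
    return not (vm_vcpus > cores_per_socket or vm_ram_gb > ram_per_socket_gb)

# 14 vCPU / 112 GB VM on a 2x 16-core, 256 GB host (128 GB per socket)
print(fits_numa_node(14, 112, 16, 128))   # True: fits in one NUMA node
print(fits_numa_node(20, 112, 16, 128))   # False: a "Monster VM" spanning nodes
```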



&lt;h2&gt;
  
  
  CPU
&lt;/h2&gt;

&lt;p&gt;The following key rules should be considered for virtual CPU allocation:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rule&lt;/th&gt;
&lt;th&gt;Guidance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Initial sizing:&lt;/td&gt;
&lt;td&gt;Match bare-metal core count&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vCPU oversizing:&lt;/td&gt;
&lt;td&gt;Avoid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hyper-Threading:&lt;/td&gt;
&lt;td&gt;Does &lt;strong&gt;not&lt;/strong&gt; double capacity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU Ready:&lt;/td&gt;
&lt;td&gt;Must remain low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consolidation:&lt;/td&gt;
&lt;td&gt;Only after measurement&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Key rule&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;1 physical core (with HT) ≈ 1 vCPU&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hyper-Threading typically provides ~20–30% uplift, workload-dependent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Processor selection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Prefer &lt;strong&gt;high-frequency CPUs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AMD EPYC “F” series &lt;/li&gt;
&lt;li&gt;Intel Xeon Gold / Platinum high-GHz SKUs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Avoid excessive core counts at low clock speeds for DB servers&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;




&lt;p&gt;Production IRIS systems should be sized based on benchmarks and measurements at live customer sites. For production systems, start by sizing the VM with the same number of virtual CPUs (vCPUs) as bare-metal CPU cores, then monitor as per best practice to see whether the vCPU count can be reduced. &lt;/p&gt;

&lt;h3&gt;
  
  
  Hyperthreading and capacity planning
&lt;/h3&gt;

&lt;p&gt;A good starting point for sizing &lt;strong&gt;production database&lt;/strong&gt; VMs, based on your rules for physical servers, is to calculate the physical server CPU requirements for the target processor with hyper-threading enabled, then simply make the translation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One physical CPU (includes hyperthreading) = One vCPU (includes hyperthreading).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A common misconception is that hyper-threading somehow doubles vCPU capacity. This is NOT true for physical servers or for logical vCPUs. Hyper-threading on a bare-metal server may give a 30% uplift in performance over the same server without hyper-threading, but this varies depending on the application.&lt;/p&gt;

&lt;p&gt;For initial sizing, assume that each vCPU has full core dedication. For example, if you have a 32-core (2x 16-core) server, size for a total of up to 32 vCPUs, knowing there may be available headroom. This configuration assumes hyper-threading is enabled at the host level. VMware will manage the scheduling between all the applications and VMs on the host. Once you have spent time monitoring the application, operating system, and VMware performance during peak processing times, you can decide whether higher consolidation is possible.&lt;/p&gt;
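&lt;p&gt;The sizing rule can be sketched as follows. The ~25% hyper-threading uplift is an assumed midpoint of the 20–30% range quoted above, not a guarantee:&lt;/p&gt;

```python
# Initial vCPU sizing: one physical core (with HT) is roughly one vCPU.
# The ~25% hyper-threading uplift below is an assumption (workload-dependent).

def initial_vcpus(bare_metal_cores):
    """Start with the same vCPU count as the bare-metal core sizing."""
    return bare_metal_cores

def effective_core_capacity(physical_cores, ht_uplift=0.25):
    """Hyper-threading does NOT double capacity; model it as a fractional uplift."""
    return physical_cores * (1 + ht_uplift)

print(initial_vcpus(32))            # 32-core host: size up to 32 vCPUs
print(effective_core_capacity(32))  # 40.0 "core-equivalents", not 64
```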

&lt;h3&gt;
  
  
  Licensing
&lt;/h3&gt;

&lt;p&gt;In vSphere you can configure a VM with a certain number of sockets or cores. For example, if you have a dual-processor VM (2 vCPUs), it can be configured as two CPU sockets, or as a single socket with two CPU cores. From an execution standpoint it does not make much of a difference because the hypervisor will ultimately decide whether the VM executes on one or two physical sockets. However, specifying that the dual-CPU VM really has two cores instead of two sockets could make a difference for software licenses.&lt;/p&gt;
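&lt;p&gt;For example, the possible socket/core splits for a given vCPU count can be enumerated (a hypothetical helper for illustration; only the licensing treatment differs between them):&lt;/p&gt;

```python
# Enumerate the socket x cores-per-socket topologies for a vCPU count.
# Execution is similar either way; per-socket software licensing may differ.

def topologies(total_vcpus):
    """All (sockets, cores_per_socket) splits for a given vCPU count."""
    return [(s, total_vcpus // s) for s in range(1, total_vcpus + 1)
            if total_vcpus % s == 0]

print(topologies(2))   # [(1, 2), (2, 1)]: one socket x two cores, or two x one
```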



&lt;h1&gt;
  
  
  Storage
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;This section applies to the more traditional storage model using a shared storage array. For &lt;em&gt;vSAN&lt;/em&gt; recommendations also see the following post: &lt;a href="https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-part-8-hyper-converged-infrastructure-capacity" rel="noopener noreferrer"&gt;Part 8 Hyper-Converged Infrastructure Capacity and Performance Planning&lt;/a&gt;&lt;/p&gt;


&lt;/blockquote&gt;

&lt;p&gt;The following key rules should be considered for storage:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Area&lt;/th&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Sizing metric:&lt;/td&gt;
&lt;td&gt;IOPS and latency, not GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Production disks:&lt;/td&gt;
&lt;td&gt;Thick-provisioned, eager-zeroed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk controllers:&lt;/td&gt;
&lt;td&gt;Multiple &lt;strong&gt;PVSCSI&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;I/O separation:&lt;/td&gt;
&lt;td&gt;DB data vs journals vs backups&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VMFS vs RDM:&lt;/td&gt;
&lt;td&gt;VMFS preferred&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VAAI:&lt;/td&gt;
&lt;td&gt;Required where supported&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Best practice&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Separate physical disk groups (or tiers) for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Random DB I/O&lt;/li&gt;
&lt;li&gt;Sequential journal / backup I/O&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Datastore separation alone is insufficient without physical isolation.    &lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;NVMe is strongly recommended for IRIS journal performance and in general.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Avoid thin-on-thin provisioning (array + VM).&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Size storage for performance
&lt;/h2&gt;

&lt;p&gt;Storage bottlenecks are one of the most common problems affecting IRIS system performance, and the same is true for VMware vSphere configurations. The most common mistake is sizing storage simply for GB capacity rather than allocating a high enough number of IOPS. Storage problems can be even more severe in VMware because more hosts can be accessing the same storage over the same physical connections.&lt;/p&gt;
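&lt;p&gt;As an illustration of sizing for IOPS rather than GB, the following back-of-the-envelope calculation estimates spindle count from a workload's IOPS, read/write mix, and RAID write penalty. All figures are hypothetical examples:&lt;/p&gt;

```python
# Size a disk group by IOPS, not GB. The per-disk IOPS figure and the
# workload numbers below are illustrative assumptions.
import math

def disks_needed(required_iops, read_ratio, disk_iops, raid_write_penalty):
    """Back-end IOPS = reads + writes * RAID write penalty."""
    reads = required_iops * read_ratio
    writes = required_iops * (1 - read_ratio)
    backend_iops = reads + writes * raid_write_penalty
    return math.ceil(backend_iops / disk_iops)

# 10,000 host IOPS, 70% reads, 10K SAS disks (~140 IOPS each), RAID 10 (penalty 2)
print(disks_needed(10_000, 0.70, 140, 2))   # 93 spindles, regardless of GB capacity
```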

&lt;h2&gt;
  
  
  VMware Storage overview
&lt;/h2&gt;

&lt;p&gt;VMware storage virtualization can be categorized into three layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The storage array is the bottom layer, consisting of physical storage presented as logical disks (storage array volumes or LUNs) to the layer above.&lt;/li&gt;
&lt;li&gt;The next layer is the virtual environment occupied by vSphere. Storage array LUNs are presented to ESXi hosts as datastores and are formatted as VMFS volumes.&lt;/li&gt;
&lt;li&gt;Virtual machines are made up of files in the datastore, including virtual disks, which are presented to the guest operating system as disks that can be partitioned and used in file systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;VMware offers two choices for managing disk access in a virtual machine: VMware Virtual Machine File System (VMFS) and raw device mapping (RDM); both offer similar performance. For simpler management VMware generally recommends VMFS, but there may be situations where RDMs are required. As a general recommendation, unless there is a particular reason to use RDM, choose VMFS; &lt;em&gt;new development by VMware is directed to VMFS and not RDM.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Machine File System (VMFS)
&lt;/h3&gt;

&lt;p&gt;VMFS is a file system developed by VMware that is dedicated and optimized for clustered virtual environments (allows read/write access from several hosts) and the storage of large files. The structure of VMFS makes it possible to store VM files in a single folder, simplifying VM administration. VMFS also enables VMware infrastructure services such as vMotion, DRS and VMware HA.&lt;/p&gt;

&lt;p&gt;Operating systems, applications, and data are stored in virtual disk files (.vmdk files), which are stored in the datastore. A single VM can be made up of multiple vmdk files spread over several datastores, as the production VM in the diagram below shows. For production systems, best performance is achieved with one vmdk file per LUN; for non-production systems (test, training, etc.) multiple VMs' vmdk files can share a datastore and a LUN. &lt;/p&gt;

&lt;p&gt;When deploying IRIS, multiple VMFS volumes mapped to LUNs on separate disk groups are typically used to separate I/O patterns and improve performance: for example, separate disk groups for random and sequential I/O, or separation of production I/O from that of other environments. &lt;/p&gt;

&lt;p&gt;The following diagram shows an overview of an example VMware VMFS storage used with IRIS:&lt;/p&gt;


 &lt;br&gt;
&lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fcachebestpractice2016_206.png" width="567" height="424"&gt;

&lt;p&gt;&lt;em&gt;Figure 2. Example IRIS storage on VMFS&lt;/em&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  RDM
&lt;/h3&gt;

&lt;p&gt;RDM allows management and access of raw SCSI disks or LUNs as VMFS files. An RDM is a special file on a VMFS volume that acts as a proxy for a raw device. VMFS is recommended for most virtual disk storage, but raw disks might be desirable in some cases. RDM is only available for Fibre Channel or iSCSI storage. &lt;/p&gt;

&lt;h3&gt;
  
  
  VMware vStorage APIs for Array Integration (VAAI)
&lt;/h3&gt;

&lt;p&gt;For the best storage performance, customers should consider using VAAI-capable storage hardware. VAAI can improve the performance in several areas including virtual machine provisioning and of thin-provisioned virtual disks. VAAI may be available as a firmware update from the array vendor for older arrays.&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Disk Types
&lt;/h3&gt;

&lt;p&gt;ESXi supports multiple virtual disk types:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thick Provisioned&lt;/strong&gt; – where space is allocated at creation. There are further types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Eager Zeroed – writes 0’s to the entire drive. This increases the time it takes to create the disk, but results in the best performance, even on the first write to each block.&lt;/li&gt;
&lt;li&gt;Lazy Zeroed – writes 0’s as each block is first written to. Lazy zero results in a shorter creation time, but reduced performance the first time a block is written to. Subsequent writes, however, have the same performance as on eager-zeroed thick disks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Thin Provisioned&lt;/strong&gt; – where space is allocated and zeroed upon write. There is a higher I/O cost (similar to that of lazy-zeroed thick disks) during the first write to an unwritten file block, but on subsequent writes thin-provisioned disks have the same performance as eager-zeroed thick disks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In all disk types VAAI can improve performance by offloading operations to the storage array.&lt;/em&gt; Some arrays also support thin provisioning at the array level; do not thin provision ESXi disks on thin-provisioned array storage, as there can be conflicts in provisioning and management. &lt;/p&gt;

&lt;h3&gt;
  
  
  Other Notes
&lt;/h3&gt;

&lt;p&gt;As noted above for best practice use the same strategies as bare-metal configurations; production storage may be separated at the array level into several disk groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Random access for IRIS production databases&lt;/li&gt;
&lt;li&gt;Sequential access for backups and journals, but also a place for other non-production storage such as test, train, and so on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember that a datastore is an abstraction of the storage tier and, therefore, it is a logical representation not a physical representation of the storage. Creating a dedicated datastore to isolate a particular I/O workload (whether journal or database files), without isolating the physical storage layer as well, does not have the desired effect on performance.&lt;/p&gt;

&lt;p&gt;Although performance is key, the choice of shared storage depends more on the existing or planned infrastructure at the site than on the impact of VMware. As with bare-metal implementations, FC SAN is the best performing and is recommended; for FC, 8Gbps adapters are the recommended minimum. iSCSI storage is only supported if appropriate network infrastructure is in place, including a minimum of 10Gb Ethernet, and jumbo frames (MTU 9000) must be supported on all components in the network between server and storage, with separation from other traffic. &lt;/p&gt;

&lt;p&gt;Use multiple VMware Paravirtual SCSI (PVSCSI) controllers for the database virtual machines or virtual machines with high I/O load. PVSCSI can provide some significant benefits by increasing overall storage throughput while reducing CPU utilization.  The use of multiple PVSCSI controllers allows the execution of several parallel I/O operations inside the guest operating system. It is also recommended to separate journal I/O traffic from the database I/O traffic through separate virtual SCSI controllers. As a best practice, you can use one controller for the operating system and swap, another controller for journals, and one or more additional controllers for database data files (depending on the number and size of the database data files). &lt;/p&gt;

&lt;p&gt;Aligning file system partitions is a well-known storage best practice for database workloads. Partition alignment on both physical machines and VMware VMFS partitions prevents performance I/O degradation caused by I/O crossing track boundaries. VMware test results show that aligning VMFS partitions to 64KB track boundaries results in reduced latency and increased throughput. VMFS partitions created using vCenter are aligned on 64KB boundaries as recommended by storage and operating system vendors.&lt;/p&gt;
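&lt;p&gt;Checking alignment to a 64KB track boundary is trivial arithmetic; a minimal sketch:&lt;/p&gt;

```python
# Verify that a partition's starting offset sits on a 64 KB boundary,
# the alignment reported to reduce latency on VMFS.

def aligned_64k(start_offset_bytes):
    return start_offset_bytes % (64 * 1024) == 0

print(aligned_64k(1048576))   # 1 MiB offset (a common default) -> True
print(aligned_64k(32256))     # legacy 63-sector offset (63 * 512) -> False
```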



&lt;h1&gt;
  
  
  Networking
&lt;/h1&gt;

&lt;p&gt;The following key rules should be considered for networking:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;Guidance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Adapter:&lt;/td&gt;
&lt;td&gt;VMXNET3 only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VMware Tools:&lt;/td&gt;
&lt;td&gt;Mandatory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traffic separation:&lt;/td&gt;
&lt;td&gt;Mgmt / vMotion / Storage / App&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Switch type:&lt;/td&gt;
&lt;td&gt;Distributed vSwitch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bandwidth:&lt;/td&gt;
&lt;td&gt;≥10 Gb minimum&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;For large-memory IRIS VMs, consider &lt;strong&gt;25 Gb+&lt;/strong&gt; for vMotion networks.
&lt;/li&gt;
&lt;li&gt;Intra-host VM traffic is significantly faster; use DRS affinity rules carefully.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;As noted above, VMXNET adapters have better capabilities than the default E1000 adapter. VMXNET3 supports 10Gb and uses less CPU, whereas the E1000 is only 1Gb. If there are only 1Gb network connections between hosts, there is not a lot of difference for client-to-VM communication. However, VMXNET3 allows 10Gb between VMs on the same host, which does make a difference, especially in multi-tier deployments or where there are high network I/O requirements between instances. This should also be taken into consideration when planning DRS affinity and anti-affinity rules to keep VMs on the same or separate virtual switches.&lt;/p&gt;

&lt;p&gt;The E1000 uses universal drivers that work in Windows or Linux. Once VMware Tools is installed in the guest operating system, VMXNET virtual adapters can be installed.&lt;/p&gt;

&lt;p&gt;The following diagram shows a typical small server configuration with four physical NIC ports: two ports have been configured within VMware for infrastructure traffic (dvSwitch0 for management and vMotion), and two ports for application use by VMs. NIC teaming and load balancing are used for best throughput and HA.&lt;/p&gt;


 &lt;br&gt;
&lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcommunity.intersystems.com%2Fsites%2Fdefault%2Ffiles%2Finline%2Fimages%2Fcachebestpractice2016_207_1.png" width="497" height="359"&gt;

&lt;p&gt;&lt;em&gt;Figure 3. A typical small server configuration with four physical NIC ports.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Guest Operating Systems
&lt;/h1&gt;

&lt;p&gt;The following are recommended:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Recommendation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OS:&lt;/td&gt;
&lt;td&gt;RHEL 8 / 9 (or equivalent)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Architecture:&lt;/td&gt;
&lt;td&gt;64-bit only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VMware Tools:&lt;/td&gt;
&lt;td&gt;Installed and current&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time sync:&lt;/td&gt;
&lt;td&gt;NTP (not VMware Tools)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OS tuning:&lt;/td&gt;
&lt;td&gt;Same as bare-metal IRIS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;blockquote&gt;
&lt;p&gt;It is very important to load VMware tools in to all VM operating systems and keep the tools current. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system and improves management of the virtual machine. Without VMware Tools installed in your guest operating system, you lose important functionality and guest performance suffers.&lt;/p&gt;

&lt;p&gt;It is vital that the time is set correctly on all ESXi hosts, as it affects the guest VMs. The default setting for VMs is not to sync the guest time with the host, but at certain times the guests still sync their time with the host, and incorrect time has been known to cause major issues. VMware recommends using NTP instead of VMware Tools periodic time synchronization. NTP is an industry standard and ensures accurate timekeeping in your guest. It may be necessary to open the firewall (UDP 123) to allow NTP traffic.&lt;/p&gt;
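&lt;p&gt;As a sketch, a minimal chrony configuration for a Linux guest might look like the following (the pool server names are placeholders; use your site's NTP sources), with VMware Tools periodic time sync left disabled:&lt;/p&gt;

```
# /etc/chrony.conf (illustrative fragment)
server 0.pool.ntp.org iburst    # placeholder; use your site's NTP sources
server 1.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3                  # step the clock if the initial offset is large
```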



&lt;h1&gt;
  
  
  DNS Configuration
&lt;/h1&gt;

&lt;p&gt;If your DNS server is hosted on virtualized infrastructure and becomes unavailable, it prevents vCenter from resolving host names, making the virtual environment unmanageable -- however the virtual machines themselves keep operating without problem.&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rule&lt;/th&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DNS availability:&lt;/td&gt;
&lt;td&gt;Mandatory for vCenter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS redundancy:&lt;/td&gt;
&lt;td&gt;Required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure testing:&lt;/td&gt;
&lt;td&gt;Required&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;blockquote&gt;
&lt;p&gt;Virtual machines continue running without DNS, but &lt;strong&gt;management does not&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Best practice&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure at least one DNS resolver exists &lt;strong&gt;outside&lt;/strong&gt; the vSphere failure domain.&lt;/li&gt;
&lt;/ul&gt;



&lt;h1&gt;
  
  
  High Availability
&lt;/h1&gt;

&lt;p&gt;High availability is provided by features such as VMware vMotion, VMware Distributed Resource Scheduler (DRS) and VMware High Availability (HA). IRIS Database mirroring can also be used to increase uptime.&lt;/p&gt;

&lt;p&gt;It is important that IRIS production systems are designed with N+1 physical hosts: there must be enough resources (e.g. CPU and memory) for all the VMs to run on the remaining hosts in the event of a single host failure. If VMware cannot allocate enough CPU and memory resources on the remaining servers, VMware HA will not restart the failed host's VMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  vMotion
&lt;/h2&gt;

&lt;p&gt;vMotion can be used with IRIS. vMotion allows migration of a functioning VM from one ESXi host server to another in a fully transparent manner. The OS and applications such as IRIS running in the VM have no service interruption. &lt;/p&gt;

&lt;p&gt;When migrating using vMotion, only the status and memory of the VM—with its configuration—moves. The virtual disk does not need to move; it stays in the same shared-storage location. Once the VM has migrated, it is operating on the new physical host. &lt;/p&gt;

&lt;p&gt;vMotion can function only with a shared storage architecture (such as a shared SAS array, FC SAN, or iSCSI). As IRIS is usually configured to use a large amount of shared memory, it is important to have adequate network capacity available to vMotion; a 1Gb network may be OK, however higher bandwidth may be required, or multi-NIC vMotion can be configured.&lt;/p&gt;
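&lt;p&gt;A back-of-the-envelope estimate of the memory copy time shows why network bandwidth matters for large shared-memory VMs. This ignores pre-copy iterations and dirty-page retransmits, so treat the result as a rough lower bound; the 80% link-efficiency figure is an assumption:&lt;/p&gt;

```python
# Rough estimate of vMotion memory-copy time for a large-memory VM.
# Ignores pre-copy iterations and dirty-page retransmits (lower bound).

def vmotion_copy_seconds(vm_ram_gb, link_gbps, efficiency=0.8):
    """Time to ship the VM's RAM once over the vMotion network."""
    gigabits = vm_ram_gb * 8
    return gigabits / (link_gbps * efficiency)

print(round(vmotion_copy_seconds(112, 1)))    # 112 GB over 1 Gb:  1120 s
print(round(vmotion_copy_seconds(112, 10)))   # 112 GB over 10 Gb: 112 s
```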

&lt;h2&gt;
  
  
  DRS
&lt;/h2&gt;

&lt;p&gt;Distributed Resource Scheduler (DRS) is a method of automating the use of vMotion in a production environment by sharing the workload among different host servers in a cluster.&lt;br&gt;
DRS also provides the ability to implement QoS for a VM instance, protecting resources for production VMs by stopping non-production VMs from overusing resources. DRS collects information about the use of the cluster’s host servers and optimizes resources by distributing the VMs’ workload among the cluster’s different servers. This migration can be performed automatically or manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  IRIS Database Mirror
&lt;/h2&gt;

&lt;p&gt;For mission critical tier-1 IRIS database application instances requiring the highest availability consider also using &lt;a href="http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror#GHA_mirror_set_bp_vm" rel="noopener noreferrer"&gt;InterSystems synchronous database mirroring.&lt;/a&gt; Additional advantages of also using mirroring include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate copies of up-to-date data.&lt;/li&gt;
&lt;li&gt;Failover in seconds (faster than restarting a VM then operating System then recovering IRIS).&lt;/li&gt;
&lt;li&gt;Failover in case of application/IRIS failure (not detected by VMware).&lt;/li&gt;
&lt;/ul&gt;



&lt;h1&gt;
  
  
  vCenter Appliance
&lt;/h1&gt;

&lt;p&gt;The vCenter Server Appliance is a preconfigured Linux-based virtual machine optimized for running vCenter Server and associated services. I have been recommending that sites with small clusters use the VMware vCenter Server Appliance as an alternative to installing vCenter Server on a Windows VM. In vSphere 6.5 the appliance is recommended for all deployments. &lt;/p&gt;



&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;This post is a rundown of key best practices you should consider when deploying IRIS on VMware. Most of these best practices are not unique to IRIS but can be applied to other tier-1 business critical deployments on VMware.&lt;/p&gt;

&lt;p&gt;If you have any questions please let me know via the comments below.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>beginners</category>
      <category>performance</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building a Medical History Chatbot - FHIR, Vector Search and RAG for beginners</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Tue, 17 Mar 2026 17:24:11 +0000</pubDate>
      <link>https://dev.to/intersystems/building-a-medical-history-chatbot-fhir-vector-search-and-rag-for-beginners-27b8</link>
      <guid>https://dev.to/intersystems/building-a-medical-history-chatbot-fhir-vector-search-and-rag-for-beginners-27b8</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Earlier this year, I set about creating a kit to introduce young techy folk at a Health Tech hackathon to using InterSystems IRIS for health, particularly focusing on using FHIR and vector search.&lt;/p&gt;

&lt;p&gt;I wanted to publish this to the developer community because the tutorials included in the kit make a great introduction to using FHIR and to building a basic RAG system in IRIS. It's an all-inclusive set of tutorials showing in detail how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to IRIS with Python &lt;/li&gt;
&lt;li&gt;Use the InterSystems FHIR Server &lt;/li&gt;
&lt;li&gt;Convert FHIR data into relational data with the &lt;strong&gt;FHIR-SQL builder&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use InterSystems &lt;strong&gt;Vector Search&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;As a bonus using &lt;strong&gt;Ollama&lt;/strong&gt; to prompt local AI models &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This repo contains a full series of Jupyter Notebook tutorials for developing a medical history chatbot, as well as various other tutorials on using a FHIR server, so forgive me if this article is slightly light on technical detail; there's plenty of information in the linked Open Exchange Package!&lt;/p&gt;

&lt;h3&gt;
  
  
  Designing the Demo
&lt;/h3&gt;

&lt;p&gt;The design brief I was given was to build a hackathon kit (which I defined as a fully-worked through, easy to follow demo app) that used FHIR data and AI. &lt;/p&gt;

&lt;p&gt;The first question with this kind of project is where the data is coming from. I needed &lt;strong&gt;FHIR Data&lt;/strong&gt; with some sort of &lt;strong&gt;plain text&lt;/strong&gt; which could be vectorized for Vector Search. Here I had two problems: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Real Patient data isn't easy to come across. 
    - &lt;strong&gt;Solution&lt;/strong&gt; - use synthetically generated patient data with Synthea&lt;/li&gt;
&lt;li&gt;Plain text resources are generally clinical notes in Document Reference FHIR resources.
    - &lt;strong&gt;Solution&lt;/strong&gt; - Use GenAI to write my own clinical notes and load them into FHIR Resource bundles&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Coming up with a source of plain text clinical data suitable for vectorization was my first major stumbling point, as I struggled to find anything worthwhile. The inspiration of using clinical notes to create a patient chatbot did not appear from nowhere. Instead, I saw a similar demonstration by &lt;a class="mentioned-user" href="https://dev.to/simon"&gt;@simon&lt;/a&gt;.Sha in the 2025 Demo Games. This was a great demo, so I wanted to create something similar to use for a fully guided tutorial!&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplifying FHIR server set-up
&lt;/h3&gt;

&lt;p&gt;The first step of the tutorial was running an instance of IRIS for Health with a FHIR server, ideally with data pre-loaded. For this, I decided to use an Open Exchange template. If you are unsure where to start on a project, the Open Exchange is often a great place to look! &lt;/p&gt;

&lt;p&gt;I found two FHIR templates, &lt;a href="https://openexchange.intersystems.com/package/iris-fhir-template" rel="noopener noreferrer"&gt;iris-fhir-template&lt;/a&gt; by &lt;a class="mentioned-user" href="https://dev.to/evgeny"&gt;@evgeny&lt;/a&gt;.Shvarov, and  &lt;a href="https://github.com/pjamiesointersystems/Dockerfhir" rel="noopener noreferrer"&gt;Dockerfhir&lt;/a&gt; by &lt;a class="mentioned-user" href="https://dev.to/patrick"&gt;@patrick&lt;/a&gt;.Jamieson3621. Both of these templates are excellent, and in my final version of the hackathon kit, I ended up using a combination of them. If I were starting over, I would recommend the &lt;a href="https://openexchange.intersystems.com/package/iris-fhir-template" rel="noopener noreferrer"&gt;iris-fhir-template&lt;/a&gt; because it has a built-in user interface and Swagger UI to test the FHIR endpoints. Trying to combine the two at a later date became a nightmare because the iris-fhir-template has the FHIR server endpoint hardcoded. &lt;/p&gt;

&lt;p&gt;On the bright side, the day I spent building and rebuilding Docker containers made me much more confident in how a Dockerfile, module.xml and iris.script setup works. If you haven't already, I recommend breaking one of the many dev-templates available on the Open Exchange and learning how to rebuild or fix it. It's really useful to understand how these work when creating your own projects.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Vector Search
&lt;/h3&gt;

&lt;p&gt;In my eyes, the remarkable thing about vector search is how easy it is to set-up and perform, particularly in IRIS. Sure, there's refinement that can be done later, like using a hybrid vector/keyword search or adding some sort of re-ranking system, but the basic steps of: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Importing a model&lt;/li&gt;
&lt;li&gt;Creating Vectors from plain text&lt;/li&gt;
&lt;li&gt;Inserting vectors into a table in IRIS&lt;/li&gt;
&lt;li&gt;Converting a query to a vector&lt;/li&gt;
&lt;li&gt;Querying the database with the query vector&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;can all be performed in ~50 lines of Python code. &lt;/p&gt;

&lt;p&gt;This makes it a great place for newcomers to IRIS to start developing, which is why it was chosen for this hackathon kit. &lt;/p&gt;
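The steps above can be sketched roughly as follows. This is a minimal sketch with hypothetical table and column names (ClinicalNotes, NoteVector, and the 384-dimension figure are illustrative assumptions, not from the kit); it only builds the IRIS SQL statements, since actually running them requires an IRIS connection and an embedding model such as a sentence-transformers MiniLM.

```python
# Hedged sketch of the vector search steps, with hypothetical names.
# Only the SQL strings are built here; executing them requires an IRIS
# connection (e.g. DB-API or sqlalchemy-iris) plus an embedding model.

EMBED_DIM = 384  # e.g. the output size of a MiniLM sentence-embedding model

# Target table: a VECTOR column alongside the plain text (IRIS SQL syntax)
create_sql = f"""
CREATE TABLE ClinicalNotes (
    PatientId  INTEGER,
    NoteText   VARCHAR(5000),
    NoteVector VECTOR(DOUBLE, {EMBED_DIM})
)"""

# Embed each note's text and insert it; TO_VECTOR parses a comma-separated
# string of floats into an IRIS vector value
insert_sql = (
    "INSERT INTO ClinicalNotes (PatientId, NoteText, NoteVector) "
    "VALUES (?, ?, TO_VECTOR(?))"
)

# Embed the user's question the same way, then rank stored notes by
# similarity to the query vector
search_sql = """
SELECT TOP 3 PatientId, NoteText
FROM ClinicalNotes
ORDER BY VECTOR_DOT_PRODUCT(NoteVector, TO_VECTOR(?)) DESC"""
```

The top-ranked rows are what you would then feed to an LLM as context, which is the whole RAG loop in miniature.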

&lt;h3&gt;
  
  
  Prompting with Ollama
&lt;/h3&gt;

&lt;p&gt;I've always liked the idea of prompting local models, knowing that it will always be free, doesn't need any API key set-up, and doesn't involve sending your data elsewhere. This last point can be particularly important with medical records, where it's important to keep data private and restrict third-party access. In the past, I used models with Hugging Face's Transformers module, and the results were incredibly slow, and incredibly poor. &lt;/p&gt;

&lt;p&gt;For this project I tried Ollama, which was a great improvement on Hugging Face. Models that 'weigh' less than a gigabyte, like gemma-1b, give surprisingly coherent, and even accurate, responses. The speed of response (at least on my computer) can be quite slow, particularly for large context windows, but if you are patient (or like taking constant tea-breaks while waiting for a model response), they perform quite well! &lt;/p&gt;

&lt;p&gt;I enjoyed putting together the Ollama prompting section, even if at a real hackathon, all the competitors just did the sensible thing and used the OpenAI API...&lt;/p&gt;
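A local RAG prompt of the kind described above can be sketched like this. The notes and question here are invented stand-ins for what vector search would return; the `ollama.chat` call needs a running Ollama server and the `ollama` Python package, so it is shown in comments rather than executed.

```python
# Hedged sketch: assemble a RAG-style prompt from retrieved notes and send
# it to a local model via Ollama. The notes below are hypothetical.

retrieved_notes = [
    "2024-03-02: Patient reports persistent cough, no fever.",
    "2024-03-19: Chest X-ray clear; cough resolving.",
]
question = "Does the patient currently have a fever?"

# Ground the model in the retrieved context only
context = "\n".join(retrieved_notes)
prompt = (
    "Answer using only the clinical notes below.\n\n"
    f"Notes:\n{context}\n\nQuestion: {question}"
)

# With the ollama package and a local server running a small gemma model:
# import ollama
# reply = ollama.chat(model="gemma-1b",
#                     messages=[{"role": "user", "content": prompt}])
# print(reply["message"]["content"])
```

Keeping the instruction "answer using only the notes" in the prompt is what restrains a small local model from inventing history the patient never had.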

&lt;h3&gt;
  
  
  Real-life use
&lt;/h3&gt;

&lt;p&gt;We shared this tutorial with teams at the Hackjak Brno Healthcare hackathon in November 2025 and received good feedback. 11 (out of 25) teams used aspects of the kit in their final solutions. &lt;/p&gt;

&lt;p&gt;The solutions built by hackathon teams were impressive and inspirational, with use cases ranging from using IRIS vector search in a RAG pipeline, to creating tools to fill out medical forms which connect directly to a FHIR server back-end. One of the teams (VIPIK) even uploaded their solution to &lt;a href="https://openexchange.intersystems.com/package/VIPIK" rel="noopener noreferrer"&gt;Open Exchange&lt;/a&gt;, which was really nice to see. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;This demo was really fun to build and I'm really glad it proved useful at the hackathon in the Czech Republic. I hope it will be used more in future, as it's a nice entry point to using FHIR data with IRIS, Python and Vector Search!&lt;/p&gt;

&lt;p&gt;Thanks for reading, and check out the full tutorial on Open Exchange! &lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgements
&lt;/h3&gt;

&lt;p&gt;Thanks to @Ruby.Howard, @tomd , &lt;a class="mentioned-user" href="https://dev.to/daniel"&gt;@daniel&lt;/a&gt;.Kutac and &lt;a class="mentioned-user" href="https://dev.to/ondrej"&gt;@ondrej&lt;/a&gt;.Hoferek for working through the tutorial and providing feedback and &lt;a class="mentioned-user" href="https://dev.to/simon"&gt;@simon&lt;/a&gt;.Sha for the original inspiration with your entry to the Demo Games last year.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>OMOP Odyssey - Vanna AI ( The Underworld )</title>
      <dc:creator>InterSystems Developer</dc:creator>
      <pubDate>Tue, 17 Mar 2026 17:17:50 +0000</pubDate>
      <link>https://dev.to/intersystems/omop-odyssey-vanna-ai-the-underworld--4okh</link>
      <guid>https://dev.to/intersystems/omop-odyssey-vanna-ai-the-underworld--4okh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu86ayuwlgqwahyzdoxrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu86ayuwlgqwahyzdoxrv.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Vanna.AI - Personalized AI InterSystems OMOP Agent&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7v2ixddkr99hv3gi6gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7v2ixddkr99hv3gi6gw.png" alt=" " width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Along this &lt;a href="https://community.intersystems.com/smartsearch?search=OMOP+Odyssey" rel="noopener noreferrer"&gt;OMOP Journey,&lt;/a&gt; from the OHDSI book to Achilles, you can begin to understand the power of the OMOP Common Data Model when you see the mix of well-written R and SQL deriving results for large-scale analytics that are shareable across organizations.  I, however, do not have a third normal form brain, and about a month ago on the Journey &lt;a href="https://community.intersystems.com/post/omop-odyssey-no-code-cdm-exploration-databricks-aibi-genie-island-aeolus" rel="noopener noreferrer"&gt;we employed Databricks Genie&lt;/a&gt; to generate SQL for us utilizing InterSystems OMOP and Python interoperability.  This was fantastic, but left some magic under the hood in Databricks as to how the RAG "model" was being constructed and which LLM was in use to pull it off. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;At this point in the OMOP Journey we met Vanna.ai on the same beaten path...&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. Vanna works in two easy steps - train a RAG “model” on your data, and then ask questions which will return SQL queries that can be set up to automatically run on your database. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ql5zu4r5e1zzzjzpzyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ql5zu4r5e1zzzjzpzyr.png" alt=" " width="501" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Vanna exposes all the pieces to do it ourselves with more control and our own stack against the OMOP Common Data Model.&lt;/p&gt;

&lt;p&gt;I found the approach from the Vanna camp particularly fantastic; conceptually it felt like a magic trick was being performed, and one could certainly argue that was exactly what was happening.&lt;/p&gt;

&lt;p&gt;Vanna needs 3 choices to pull off its magic trick: a SQL database, a vector database, and an LLM.  Just envision a dealer handing you three piles and making you choose from each one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegzmhg7r25yktpsfzkxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegzmhg7r25yktpsfzkxz.png" alt=" " width="753" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if it's not obvious: our SQL database is InterSystems OMOP implementing the Common Data Model, our LLM of choice is Gemini, and for this quick and dirty evaluation we are using Chroma DB as the vector database, to get to the point quickly in Python.&lt;/p&gt;

&lt;h2&gt;Gemini&lt;/h2&gt;

&lt;p&gt;So I cut a quick key, grew up a little bit, and actually paid for it. I tried the free route with its rate limits of 50 prompts a day and 1 per minute, and it was unsettling... I may be happier being completely broke anyway, so we will see.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r0v0ines7osb94f0rn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r0v0ines7osb94f0rn1.png" alt=" " width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;InterSystems OMOP&lt;/h2&gt;

&lt;p&gt;I am using the same fading trial as in the &lt;a href="https://community.intersystems.com/smartsearch?search=OMOP+Journey" rel="noopener noreferrer"&gt;other posts&lt;/a&gt;.  The CDM is loaded with about a 100-patient population per United States region, with the pracs and orgs to boot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1xadg2xhnej4ehfaooc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1xadg2xhnej4ehfaooc.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Vanna&lt;/h2&gt;

&lt;p&gt;Let's turn the letters (get it?) notebook style and spin the wheel (get it again?) and put Vanna to work...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip3&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;vanna[chromadb,gemini,sqlalchemy-iris]&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's organize our pythons.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;vanna.chromadb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChromaDB_VectorStore&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;vanna.google&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;GoogleGeminiChat&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sqlalchemy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ssl&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sqlalchemy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_engine&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize the star of our show and introduce her to our model.  Kind of weird, right? Vanna (White) is a model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyVanna&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ChromaDB_VectorStore&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;GoogleGeminiChat&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;ChromaDB_VectorStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;GoogleGeminiChat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shaazButt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.0-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="n"&gt;vn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MyVanna&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's connect to our InterSystems OMOP Cloud deployment using &lt;a href="https://github.com/caretdev/sqlalchemy-iris" rel="noopener noreferrer"&gt;sqlalchemy-iris&lt;/a&gt; from @caretdev.  The work done on this dialect is quickly becoming the key ingredient for modern data interoperability of IRIS products in the data world.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_engine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;iris://SQLAdmin:LordFauntleroy!!!@k8s-0a6bc2ca-adb040ad-c7bf2ee7c6-e6b05ee242f76bf2.elb.us-east-1.amazonaws.com:443/USER&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;connect_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sslcontext&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SSLContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PROTOCOL_TLS_CLIENT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;verify_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CERT_OPTIONAL&lt;/span&gt;
&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;check_hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_verify_locations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vanna-omop.pem&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You define a function that takes a SQL query as a string and returns a pandas DataFrame.  This gives Vanna a function it can use to run SQL against the OMOP Common Data Model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_sql_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;

&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run_sql&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;run_sql&lt;/span&gt;
&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run_sql_is_set&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Feeding the Model with a Menu&lt;/h2&gt;

&lt;p&gt;The information schema query may need some tweaking depending on your database, but this is a good starting point.&lt;br&gt;
It will break up the information schema into bite-sized chunks that can be referenced by the LLM.&lt;br&gt;
If you like the plan, run the train step to train Vanna.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df_information_schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT * FROM INFORMATION_SCHEMA.COLUMNS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;plan&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_training_plan_generic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df_information_schema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;plan&lt;/span&gt;

&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plan&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;plan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Training&lt;/h2&gt;

&lt;p&gt;The following are methods for adding training data. Make sure you modify the examples to match your database.&lt;br&gt;
DDL statements are powerful because they specify table names, column names, types, and potentially relationships.  These DDLs are generated with the now-supported DatabaseConnector, as outlined in this &lt;a href="https://community.intersystems.com/post/omop-odyssey-celebration-house-hades" rel="noopener noreferrer"&gt;post&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ddl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
--iris CDM DDL Specification for OMOP Common Data Model 5.4
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.person (
            person_id integer NOT NULL,
            gender_concept_id integer NOT NULL,
            year_of_birth integer NOT NULL,
            month_of_birth integer NULL,
            day_of_birth integer NULL,
            birth_datetime datetime NULL,
            race_source_concept_id integer NULL,
            ethnicity_source_value varchar(50) NULL,
            ethnicity_source_concept_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.observation_period (
            observation_period_id integer NOT NULL,
            person_id integer NOT NULL,
            observation_period_start_date date NOT NULL,
            observation_period_end_date date NOT NULL,
            period_type_concept_id integer NOT NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.visit_occurrence (
            visit_occurrence_id integer NOT NULL,
            discharged_to_source_value varchar(50) NULL,
            preceding_visit_occurrence_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.visit_detail (
            visit_detail_id integer NOT NULL,
            person_id integer NOT NULL,
            visit_detail_concept_id integer NOT NULL,
            provider_id integer NULL,
            care_site_id integer NULL,
            visit_detail_source_value varchar(50) NULL,
            visit_detail_source_concept_id integer NULL );

--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.condition_occurrence (
            condition_occurrence_id integer NOT NULL,
            person_id integer NOT NULL,
            visit_detail_id integer NULL,
            condition_source_value varchar(50) NULL,
            condition_source_concept_id integer NULL,
            condition_status_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.drug_exposure (
            drug_exposure_id integer NOT NULL,
            person_id integer NOT NULL,
            dose_unit_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.procedure_occurrence (
            procedure_occurrence_id integer NOT NULL,
            person_id integer NOT NULL,
            procedure_concept_id integer NOT NULL,
            procedure_date date NOT NULL,
            procedure_source_concept_id integer NULL,
            modifier_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.device_exposure (
            device_exposure_id integer NOT NULL,
            person_id integer NOT NULL,
            device_concept_id integer NOT NULL,
            unit_source_value varchar(50) NULL,
            unit_source_concept_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.observation (
            observation_id integer NOT NULL,
            person_id integer NOT NULL,
            observation_concept_id integer NOT NULL,
            observation_date date NOT NULL,
            observation_datetime datetime NULL,
&amp;lt;SNIP&amp;gt;

&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes you may want to add documentation about your business terminology or definitions. Here I like to add the resource names from FHIR that were transformed to OMOP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documentation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Our business is to provide tools for generating evicence in the OHDSI community from the CDM&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documentation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Another word for care_site is organization.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documentation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Another word for provider is practitioner.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
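&lt;p&gt;As the synonym list grows, one way to keep it tidy is to generate the documentation strings from a mapping and loop them through &lt;code&gt;vn.train()&lt;/code&gt;. This is a sketch under my own assumptions: the &lt;code&gt;FHIR_TO_OMOP&lt;/code&gt; dict and &lt;code&gt;mapping_docs&lt;/code&gt; helper are hypothetical, and only the first two pairs come from the text above.&lt;/p&gt;

```python
# Hypothetical FHIR-resource -> OMOP-table mapping. Only the first two
# pairs appear in the article; the last two are my assumptions about
# the FHIR-to-OMOP transform.
FHIR_TO_OMOP = {
    "Organization": "care_site",
    "Practitioner": "provider",
    "Patient": "person",
    "Encounter": "visit_occurrence",
}

def mapping_docs(mapping):
    """Build one documentation sentence per FHIR -> OMOP pair."""
    return [f"Another word for {omop} is {fhir.lower()}."
            for fhir, omop in mapping.items()]

# for doc in mapping_docs(FHIR_TO_OMOP):
#     vn.train(documentation=doc)
```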



&lt;p&gt;Now let's add all the data from the InterSystems OMOP Common Data Model. There is probably a better way to do this, but I get paid by the byte.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cdmtables&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;care_site&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cdm_source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cohort&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cohort_definition&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concept&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concept_ancestor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concept_class&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concept_relationship&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;concept_synonym&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;condition_era&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;condition_occurrence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span 
class="s"&gt;cost&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;death&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;device_exposure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;domain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dose_era&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drug_era&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drug_exposure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drug_strength&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;episode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;episode_event&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fact_relationship&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;location&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;measurement&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span 
class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metadata&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;note&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;note_nlp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;observation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;observation_period&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;payer_plan_period&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;person&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;procedure_occurrence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;provider&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;relationship&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source_to_concept_map&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;specimen&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;visit_detail&lt;/span&gt;&lt;span 
class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;visit_occurrence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vocabulary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;cdmtables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT * FROM  WHERE OMOPCDM54.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
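&lt;p&gt;A small refinement on the loop above: building the query strings up front in a helper makes the FROM clause easy to eyeball (or unit-test) before anything is trained. The &lt;code&gt;training_queries&lt;/code&gt; helper is hypothetical; only the &lt;code&gt;OMOPCDM54&lt;/code&gt; schema name comes from the article.&lt;/p&gt;

```python
# Hypothetical helper: build each per-table training query first, so a
# malformed FROM clause can be caught before any vn.train() call is made.
def training_queries(tables, schema="OMOPCDM54"):
    return [f"SELECT * FROM {schema}.{table}" for table in tables]

# for sql in training_queries(cdmtables):
#     vn.train(sql=sql)
#     time.sleep(60)  # pause between calls, as in the loop above
```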



&lt;blockquote&gt;
&lt;p&gt;I added the ability for Gemini to see the data here; make sure you actually want to do this in your travels, or you may hand Google your OMOP data with a sleight of hand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's do our best &lt;a href="https://en.wikipedia.org/wiki/Pat_Sajak" rel="noopener noreferrer"&gt;Pat Sajak&lt;/a&gt; impression and boot the shiny Vanna app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;vanna.flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VannaFlaskApp&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VannaFlaskApp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;allow_llm_to_see_data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9caqb9y521v3zsz013h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9caqb9y521v3zsz013h.jpg" alt=" " width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Skynet!&lt;/h2&gt;

&lt;p&gt;This is a bit hackish, but it is really where I want to go with AI going forward when integrating with apps: we ask a question in natural language, which returns a SQL query, and then we immediately run that query against the InterSystems OMOP deployment using sqlalchemy-iris.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
    &lt;span class="n"&gt;old_stdout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;
    &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StringIO&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# Redirect stdout to a dummy stream
&lt;/span&gt;
    &lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;How Many Care Sites are there in Los Angeles?&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;old_stdout&lt;/span&gt;

    &lt;span class="n"&gt;sql_query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Ask Vanna to generate a query from a question of the OMOP database...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;#print(type(sql_query))
&lt;/span&gt;    &lt;span class="n"&gt;raw_sql_to_send_to_sqlalchemy_iris&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sql_query&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Vanna returns the query to use against the database.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;gar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;raw_sql_to_send_to_sqlalchemy_iris&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FROM care_site&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FROM OMOPCDM54.care_site&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gar&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Now use sqlalchemy-iris with the generated query back to the OMOP database...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec_driver_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gar&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;#print(result)
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
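&lt;p&gt;The &lt;code&gt;replace("FROM care_site", ...)&lt;/code&gt; call above only patches one table. A slightly more general sketch, entirely my own assumption, qualifies any bare CDM table name after &lt;code&gt;FROM&lt;/code&gt; or &lt;code&gt;JOIN&lt;/code&gt; with a regex; a real implementation would use a SQL parser instead.&lt;/p&gt;

```python
import re

# Illustrative subset of CDM table names; extend with the full cdmtables list.
CDM_TABLES = {"care_site", "person", "provider", "visit_occurrence"}

def qualify_schema(sql, schema="OMOPCDM54"):
    """Prefix bare CDM table names after FROM/JOIN with the schema name.
    Best-effort regex sketch; already-qualified names are left alone."""
    pattern = re.compile(
        r"\b(FROM|JOIN)\s+(" + "|".join(CDM_TABLES) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: f"{m.group(1)} {schema}.{m.group(2)}", sql)
```

This would replace the hand-written `replace()` call before handing the query to `conn.exec_driver_sql()`.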



&lt;h2&gt;Utilities&lt;/h2&gt;

&lt;p&gt;At any time you can inspect what OMOP data the Vanna package is able to reference. You can also remove training data if there's obsolete/incorrect information (you can do this through the UI too).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;training_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_training_data&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;training_data&lt;/span&gt;
&lt;span class="n"&gt;vn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove_training_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;omop-ddl&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
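&lt;p&gt;To prune stale entries in bulk rather than one id at a time, you could filter the training data by keyword first. A sketch under my assumptions: the &lt;code&gt;stale_training_ids&lt;/code&gt; helper is hypothetical, and the &lt;code&gt;id&lt;/code&gt;/&lt;code&gt;content&lt;/code&gt; field names should be verified against what your Vanna version's &lt;code&gt;get_training_data()&lt;/code&gt; actually returns.&lt;/p&gt;

```python
def stale_training_ids(rows, keyword):
    """Return the ids of training rows whose content mentions `keyword`.
    `rows` is an iterable of dicts, e.g. from
    vn.get_training_data().to_dict("records") (field names assumed)."""
    kw = keyword.lower()
    return [r["id"] for r in rows if kw in (r.get("content") or "").lower()]

# rows = vn.get_training_data().to_dict("records")
# for tid in stale_training_ids(rows, "old_schema"):
#     vn.remove_training_data(id=tid)
```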



&lt;h2&gt;About Using IRIS Vectors&lt;/h2&gt;

&lt;p&gt;Wish me luck here: if I manage to crush all the things that need crushing and resist the sun coming out, I'll implement IRIS vectors in Vanna with the following repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sween/vanna-iris-vector" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx0v0nxt3hiam6pspap0.png" alt=" " width="519" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
      <category>python</category>
    </item>
  </channel>
</rss>
