
HK Verma

Posted on • Originally published at litenai.com

Tutorial: Running LitenAI Locally using Docker Image

Introduction

Contact us to get the latest Docker image.

All data stays local and is accessible only to the user. LitenAI does not collect any data from the user.

Installation Steps

Prepare Your Environment

Load the Docker Image

  • Download the Docker image from the LitenAI Store.
  • Load the image using the command below, where `<version>` is the release version number, e.g. v0.1.112:
docker load < litenai.<version>.tar.gz
  • Verify that the image loaded with the command below. Note the tag; it will be needed to start the container.
docker image ls
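If several images are present, the tag can also be printed directly with Docker's `--format` flag (the repository name `litenai/litenai` is taken from the run command later in this tutorial):

```shell
# Print only the tag(s) of the loaded LitenAI image
docker image ls litenai/litenai --format '{{.Tag}}'
```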

Set Environment Variables

  • Set your API key to a valid Azure OpenAI, OpenAI, or local API key:
export LITENAI_API_KEY="api_key"

Bring Up the Docker Container

  • Replace `<tag>` with the tag from the `docker image ls` command above:
docker run -d --name litenai_container -p 8210:8210 -p 8221:8221 -e LITENAI_API_KEY=${LITENAI_API_KEY} litenai/litenai:<tag>

Access the Chat Interface in your local browser by visiting the URL below:

http://localhost:8210
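If the page does not load, a quick sanity check (assuming `curl` is available) is to confirm the container is serving on port 8210:

```shell
# Expect an HTTP status code such as 200 once the UI is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8210
```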

Stop the Container

docker stop litenai_container
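A stopped container keeps its name, so it can be brought back later without repeating the `docker run` step. These are standard Docker commands, not LitenAI-specific ones:

```shell
# Restart the stopped container
docker start litenai_container

# Follow its logs to confirm it came up cleanly
docker logs -f litenai_container

# Remove the container entirely if you want to run `docker run` again
docker rm litenai_container
```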

Sample Chat Sessions

Access the Chat Interface in your local browser by visiting the URL below:

http://localhost:8210

Click on the Lake tab. The Docker environment comes pre-loaded with two lakes: logreason and techassist.

To explore the logreason lake, select it from the Lake tab. You can then follow the sample chat session described in the blog to understand how to analyze customer sessions and identify the root causes of performance issues.

To explore the techassist lake, select it from the Lake tab. After the lake loads, you can follow the sample chat session described in the blog to understand how technicians maintain and repair medical devices.

Configuration for Self-hosted LLM

LitenAI supports the use of locally hosted LLMs. To configure this, you need to provide the URL for the LLM call and specify the LLM model being used. An example configuration is shown below.
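For example, one way to expose an OpenAI-compatible endpoint at `http://localhost:8000/v1` is vLLM. This is an assumption for illustration; any server that speaks the OpenAI API should work:

```shell
# Serve Llama-3.2-1B-Instruct with an OpenAI-compatible API on port 8000
# (requires: pip install vllm, and access to the model weights)
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-3.2-1B-Instruct \
    --port 8000
```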

Set Environment Variables

export LITENAI_SERVE_URL="http://localhost:8000/v1"
export LITENAI_LLM_MODEL="meta-llama/Llama-3.2-1B-Instruct"

Run the Docker command below

docker run -d --name litenai_container -p 8210:8210 -p 8221:8221 -e LITENAI_API_KEY=${LITENAI_API_KEY} -e LITENAI_SERVE_URL=${LITENAI_SERVE_URL} -e LITENAI_LLM_MODEL=${LITENAI_LLM_MODEL} -e LITENAI_AZURE_API_VERSION="" litenai/litenai:<tag>

Access the Chat Interface in your local browser by visiting the URL below:

http://localhost:8210

Configuration for OpenAI Service

LitenAI is configured to use the Azure OpenAI service by default. However, it can also integrate with the OpenAI service.

Set Environment Variables

You will need to obtain an OpenAI API key from your OpenAI account.

export LITENAI_API_KEY=<OpenAI-API-Key>

Run the Docker command below

Set the serve URL and Azure API version to empty values.

docker run -d --name litenai_container -p 8210:8210 -p 8221:8221 -e LITENAI_API_KEY=${LITENAI_API_KEY} -e LITENAI_SERVE_URL="" -e LITENAI_AZURE_API_VERSION="" litenai/litenai:<tag>

Access the Chat Interface in your local browser by visiting the URL below:

http://localhost:8210
