Davide Santangelo

Ollama and Ruby: Building Powerful AI-Powered Applications

Introduction

Ruby is a dynamic, open-source programming language highly regarded for its intuitive syntax, object-oriented philosophy, and strong focus on “developer happiness.” Thanks to its simplicity, Ruby promotes developer productivity, enabling applications to be built quickly and cleanly. Ollama, on the other hand, is a tool for running and managing Large Language Models (LLMs) efficiently on your own machine. LLMs are deep learning models trained on massive amounts of text, capable of understanding natural language and generating coherent text on a wide range of topics, powering features like sentence completion, machine translation, and text classification.

Combining Ruby with Ollama opens up exciting development opportunities, making it easier to integrate AI features into existing Ruby projects or brand-new applications. Ollama streamlines the management and execution of large-scale language models, optimizing scalability and performance. This means that even those without extensive machine learning expertise can take advantage of sophisticated, complex algorithms.

In the context of web applications, for instance, one could use Ruby on Rails to build the backbone of a website and integrate Ollama for advanced natural language processing (NLP) features: from intelligent chatbots to sentiment analysis systems or content recommendation engines. This setup can improve the user experience by providing faster, more accurate responses and automating repetitive tasks, such as text categorization or summary generation.

In this article, we will explore how to integrate Ollama with Ruby using a step-by-step approach. We’ll begin with environment configuration, installing the necessary dependencies (we’ll talk to Ollama’s local HTTP API directly using the httparty gem, rather than a dedicated Ollama gem). Then, we’ll move on to implementing practical functionality, from loading a language model to creating an endpoint that processes user requests through an LLM.

To make everything more concrete, we’ll provide Ruby code snippets demonstrating how to:

  1. Load and initialize Ollama: We’ll walk you through installing the required libraries, setting up authentication (if needed), and selecting the desired language model.
  2. Send text completion requests: Using Ollama’s API, you’ll learn how to dynamically generate text—for instance, to answer user queries in a chatbot or to auto-fill suggestions in a form.
  3. Perform sentiment analysis: We’ll use specific methods to classify text based on emotional tone (positive, negative, neutral), integrating these analyses directly into your Ruby application’s business logic.
  4. Create RESTful endpoints in Ruby on Rails (or Sinatra): We’ll explain how to expose AI functionalities through a web API, allowing you to connect Ollama’s logic with various front-end applications (web, mobile, or third-party services).

The goal is to give you practical, in-depth examples so you can build a Ruby-based AI application from scratch or add AI to an existing project. Whether you want to implement a virtual assistant, a recommendation engine, or a text analysis system, Ollama offers scalable, customizable support, while Ruby ensures clean, maintainable code.

Continue reading to discover every step in detail: from the initial configuration of your development environment to best practices for optimizing your AI-driven system in production. By integrating Ruby with Ollama, you can harness the full power of next-generation language models, opening new possibilities for building “intelligent” applications in a simple and effective way.

Why Ruby and Ollama?

  • Productivity: Ruby’s straightforward, expressive syntax makes development faster and more enjoyable.
  • Performance: Ollama is built to handle the heavy lifting of large-scale AI models, so your Ruby application doesn’t get bogged down in computational overhead.
  • Scalability: Both Ruby (particularly with frameworks like Rails) and Ollama allow you to scale your application as user demands grow.
  • Community: Ruby has a vibrant ecosystem of gems, and Ollama is gaining traction in the AI community, ensuring continued updates and support.

By automating repetitive tasks like bulk text classification or report generation, this pairing can significantly streamline your workflow and reduce manual labor. The rest of this article is a hands-on walkthrough: we’ll set up the environment (Ruby, Ollama, and dependencies), then build working examples covering text completion, sentiment analysis, an AI-driven chatbot, and a REST endpoint, with actionable code snippets you can adapt for your own AI solutions.


1. Setting Up the Environment

1.1 Install Ollama

On macOS, Ollama can be installed via Homebrew:

brew install ollama
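
On Linux, Ollama provides an official install script (this is the one-liner from ollama.com; as always, review scripts before piping them to a shell):

curl -fsSL https://ollama.com/install.sh | sh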

Verify the installation:

ollama --version

1.2 Download and Run a Model

After installing Ollama, the next step is to download one or more language models. Ollama supports a range of open models, including popular ones like LLaMA 2, Mistral, and Gemma. For demonstration purposes, let’s pull the LLaMA 2 model:

ollama pull llama2

This command downloads the model files to your local machine. The download time depends on your internet speed and the model size.
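
You can confirm which models are installed locally at any time; this prints each downloaded model with its size and modification date:

ollama list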

Once the download completes, you can chat with the model interactively:

ollama run llama2

Behind the scenes, the Ollama server listens on port 11434 by default (if it isn’t already running, start it with ollama serve). Verify that the model is responsive with the following cURL command; setting "stream": false tells the API to return one complete JSON object instead of streaming line-delimited chunks:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Write a Ruby function to reverse a string.",
  "stream": false
}'

Key Parameters:

  • model: Specifies the model name, in this case, llama2.
  • prompt: The query or task for the model to process.
  • stream: When false, the API returns a single JSON response; when true (the default), it streams partial responses as newline-delimited JSON.

Sample Response (abridged; the actual payload also carries metadata fields such as model, created_at, and done):

{
  "response": "def reverse_string(str)\n  str.reverse\nend"
}

Testing Prompts:

Try modifying the prompt to explore different use cases, such as:

Summarization:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Summarize the key features of Ruby programming language.",
  "stream": false
}'

Sentiment Analysis:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Analyze the sentiment of this text: \"Ruby is amazing and fun to learn.\"",
  "stream": false
}'

Question Answering:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What are the main features of Ruby programming?",
  "stream": false
}'

These examples demonstrate how versatile Ollama can be when combined with simple prompts. It allows you to quickly prototype AI features without extensive machine learning expertise, paving the way for advanced integrations in Ruby applications.

1.3 Install Ruby and Bundler

Ensure you have Ruby installed:

ruby -v

Install Bundler for dependency management:

gem install bundler

Create a new Ruby project:

mkdir ollama_ruby_app && cd ollama_ruby_app
bundle init

2. Building the Ruby-Ollama Integration

2.1 Add Dependencies

Edit the Gemfile to include required gems:

source 'https://rubygems.org'

gem 'httparty' # lightweight HTTP client used to call the Ollama API
gem 'json'     # JSON parsing (ships with Ruby; listed for explicitness)

Then install the dependencies:

bundle install

2.2 Fetch Responses from Ollama API

Example 1: Simple Prompt to Ollama

require 'httparty'
require 'json'

# Thin wrapper around Ollama's /api/generate endpoint.
class OllamaClient
  BASE_URL = 'http://localhost:11434/api/generate'

  def initialize(model = 'llama2')
    @model = model
  end

  # Sends a prompt and returns the model's full reply as a string.
  # stream: false makes Ollama return a single JSON object rather
  # than a stream of line-delimited chunks.
  def generate_response(prompt)
    response = HTTParty.post(BASE_URL,
      body: { model: @model, prompt: prompt, stream: false }.to_json,
      headers: { 'Content-Type' => 'application/json' }
    )
    raise "Ollama request failed: #{response.code}" unless response.success?

    JSON.parse(response.body)['response']
  end
end

client = OllamaClient.new
puts client.generate_response("Write a Ruby function to reverse a string.")

Output (LLM output is non-deterministic, so your exact text may differ):

def reverse_string(str)
  str.reverse
end
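
The client above waits for the full reply. If you’d rather print tokens as they arrive (useful for chat UIs), keep streaming enabled and consume Ollama’s newline-delimited JSON. Here is a minimal sketch using Ruby’s standard Net::HTTP; the buffering logic is our own, since NDJSON lines can be split across network chunks:

require 'net/http'
require 'json'
require 'uri'

# Prints tokens as they arrive. Ollama streams newline-delimited JSON
# by default; a single network chunk may contain a partial line, so we
# buffer until each "\n" before parsing.
uri = URI('http://localhost:11434/api/generate')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = { model: 'llama2', prompt: 'Explain Ruby blocks in one sentence.' }.to_json

Net::HTTP.start(uri.host, uri.port) do |http|
  http.request(request) do |response|
    buffer = +''
    response.read_body do |chunk|
      buffer << chunk
      while (newline = buffer.index("\n"))
        line = buffer.slice!(0..newline).strip
        next if line.empty?
        data = JSON.parse(line)
        print data['response'] # partial text fragment; empty on the final "done" object
      end
    end
  end
end
puts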

2.3 AI Chatbot in Ruby with Ollama

Example 2: Chat Application

require 'httparty'
require 'json'

# A minimal REPL-style chatbot. Note: each prompt is sent
# independently, so the bot has no memory of earlier turns
# (see the context-passing sketch below for one way to add it).
class OllamaChatbot
  BASE_URL = 'http://localhost:11434/api/generate'

  def initialize(model = 'llama2')
    @model = model
  end

  def chat
    puts "Chatbot: Hello! How can I assist you today?"
    loop do
      print "You: "
      input = gets.chomp
      break if input.downcase == "exit"

      response = send_prompt(input)
      puts "Chatbot: #{response}"
    end
  end

  private

  def send_prompt(prompt)
    response = HTTParty.post(BASE_URL,
      body: { model: @model, prompt: prompt, stream: false }.to_json,
      headers: { 'Content-Type' => 'application/json' }
    )
    JSON.parse(response.body)['response']
  end
end

bot = OllamaChatbot.new
bot.chat

Usage:

ruby chatbot.rb
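
As written, the bot treats every prompt independently and forgets earlier turns. Ollama’s /api/generate response includes a context field that encodes the conversation state; pass it back with the next request and the model remembers what was said. A minimal sketch of a context-aware helper (the [reply, context] return shape is our own convention, not part of Ollama’s API):

require 'httparty'
require 'json'

# Returns [reply, new_context]. Feed new_context into the next call so
# the model sees the conversation so far.
def send_prompt_with_context(prompt, context = nil, model: 'llama2')
  body = { model: model, prompt: prompt, stream: false }
  body[:context] = context if context
  response = HTTParty.post('http://localhost:11434/api/generate',
    body: body.to_json,
    headers: { 'Content-Type' => 'application/json' }
  )
  data = JSON.parse(response.body)
  [data['response'], data['context']]
end

reply, ctx  = send_prompt_with_context("My name is Davide.")
reply, _ctx = send_prompt_with_context("What's my name?", ctx)
puts reply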

3. Advanced Use Cases

3.1 Sentiment Analysis Using AI

require 'httparty'
require 'json'

class SentimentAnalyzer
  BASE_URL = 'http://localhost:11434/api/generate'

  def initialize(model = 'llama2')
    @model = model
  end

  # Wraps the text in a classification prompt and returns the
  # model's free-form answer.
  def analyze_sentiment(text)
    prompt = "Analyze the sentiment of the following text and classify it as Positive, Negative, or Neutral:\n\n#{text}"
    response = HTTParty.post(BASE_URL,
      body: { model: @model, prompt: prompt, stream: false }.to_json,
      headers: { 'Content-Type' => 'application/json' }
    )
    JSON.parse(response.body)['response']
  end
end

analyzer = SentimentAnalyzer.new
puts analyzer.analyze_sentiment("I love programming in Ruby! It's so intuitive and fun.")

Output (the model may phrase this more verbosely; adding “Reply with a single word” to the prompt helps):

Positive
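
Because the reply is free-form text, it’s worth normalizing it to a fixed label before it touches your business logic. A small helper sketch (the regex mapping is our own heuristic, not part of any API):

# Maps the model's free-form answer to one of three symbols.
def normalize_sentiment(raw)
  case raw.to_s.downcase
  when /positive/ then :positive
  when /negative/ then :negative
  else :neutral
  end
end

puts normalize_sentiment("The sentiment of this text is Positive.") # => positive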
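
3.2 Exposing Ollama Through a REST Endpoint

The introduction promised RESTful endpoints, so let’s wire one up. Below is a minimal Sinatra sketch; it assumes you’ve added gem 'sinatra' to the Gemfile and saved the OllamaClient class from Example 1 (without its demo lines at the bottom) in ollama_client.rb:

# app.rb
require 'sinatra'
require 'json'
require_relative 'ollama_client'

# POST /api/complete with a JSON body like {"prompt": "...", "model": "llama2"}
post '/api/complete' do
  content_type :json
  payload = JSON.parse(request.body.read)
  client = OllamaClient.new(payload.fetch('model', 'llama2'))
  { response: client.generate_response(payload.fetch('prompt')) }.to_json
end

Run it with ruby app.rb (Sinatra defaults to port 4567) and test it:

curl -X POST http://localhost:4567/api/complete -d '{
  "prompt": "What are the main features of Ruby programming?"
}'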

4. Conclusion

In this article, we explored the integration of Ollama with Ruby to create AI-powered applications. We covered basic and advanced examples, including code generation, an AI chatbot, sentiment analysis, and a simple REST endpoint.

The combination of Ollama's powerful AI models and Ruby's simplicity makes it easy to build intelligent, scalable applications. Whether you're building chatbots, automating tasks, or analyzing text, this setup can be extended to fit your needs.


5. Next Steps

  1. Experiment with other models from the Ollama library, such as Mistral, Gemma, or Code Llama (hosted proprietary models like GPT-4 and Claude are not available through Ollama).
  2. Deploy your Ruby-Ollama app on cloud platforms like Heroku or AWS.
  3. Extend the chatbot with features like context memory and database integration.

Let me know if you have any questions or need additional examples!
