Originally published at sinaptia.dev

MCP on Rails

This year, we started deep diving into AI, specifically focusing on two aspects. First, what tools can we use to be even more productive? There are a lot of models, coding agents, and editor combinations to try, and a lot of new ones coming every day. Second, how can we use AI to improve the apps we're working on?

On the first aspect, there's not much we can say yet. As we said, there are many options, and more keep appearing every day, so it's too soon to draw conclusions.

On the second aspect, we already wrote a few articles about it: We scaled image classification with AI, upscaled images with AI, and improved a similarity search with AI. And we're looking forward to building more intelligent applications with Rails.

There is a third aspect that we hadn't explored until now: how can we provide tools and context to AI models?

Model Context Protocol

MCP is an open protocol that standardizes how applications provide context (data sources and tools) to large language models (LLMs). Simply put, you can connect your AI agent with an MCP server, and you'll have access to resources and tools provided by that server. For example, you can connect your AI agent with Google Calendar's MCP server and ask:

> do I have any meetings today?

The AI model will understand that you're asking it to read your calendar, and will either request the resources or call the tools that correspond to your prompt. The response will be something like:

Yes, you have 2 meetings scheduled for today:
1. Team standup - 11:00 AM to 11:15 AM (15 minutes)
  * With multiple team members (8 attendees total)
2. Executive meeting - 2:00 PM to 5:00 PM (3 hours)
  * With one colleague
The first is a brief team sync, and the second is a longer strategic meeting.

The Google Calendar MCP server can only read and search your events and cannot create new ones, but other MCP servers can also write new objects. For example, you can connect your model to the official GitHub MCP server and create comments, open pull requests, etc.

MCP is a very powerful protocol. Although it's still in its early stages, we can see a bright future ahead. Imagine the possibilities: you could connect your model to several MCP servers and interact with it to accomplish a task, or build your own resources and tools that integrate with other services. What you get, instead of a traditional web application, is a conversational application that acts as the glue between various services.

This is something we wanted to try.

Timetracker

We've developed a time tracker for internal use. Everyone at SINAPTIA tracks their time every day, and by the end of the month, we run reports that we use for invoicing. It's a simple and extremely effective solution. We've been using it for at least 4 years without any issues and with no major modifications.

In terms of UI/UX, it's also quite effective: every day at 5 pm, you'll receive a Slack message from the timetracker app asking you to track your time. If you have missing entries this month, it'll remind you so you stay up to date. The message includes a link to the time tracker, so the only thing you have to do is follow the link, log in (if you aren't already), and create a new entry for the work you did today. If you need to create more than one entry, you can create them in bulk.

But what if you could talk to the timetracker? What if you could say:

> log 8 hours today to the Test project

or even:

> log 8 hours to the Test project for the last week

That would be great. Let's see how easy it is...

But first, let's scope the project: we're only going to create new entries. This is what an entry looks like:

  create_table "entries", force: :cascade do |t|
    t.float "duration"
    t.string "comments"
    t.datetime "date", precision: nil
    t.bigint "project_id", null: false
    t.bigint "user_id", null: false
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
    t.index ["project_id"], name: "index_entries_on_project_id"
    t.index ["user_id"], name: "index_entries_on_user_id"
  end
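
The Entry model itself isn't shown in this post. A minimal sketch based on the schema above, with the associations inferred from the foreign keys and the validations being our own assumption, could look like this:

class Entry < ApplicationRecord
  belongs_to :project
  belongs_to :user

  # These validations are an assumption; the real app may enforce more (or less)
  validates :date, :duration, presence: true
end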

fast-mcp

fast-mcp is an MCP implementation for Ruby. Adding an MCP server to a Rails app with fast-mcp is simple and only takes 3 steps:

  1. add fast-mcp to your Gemfile and install it with bundle install
  2. run rails generate fast_mcp:install
  3. add resources and tools

The fast_mcp:install generator creates:

  • an initializer to configure the server: the MCP server's name, the allowed origins, the auth token, etc.
  • a sample resource
  • a sample tool

Next time you start your Rails app, the MCP server will be up and running, and you'll be able to connect your MCP client (typically an AI model) to use the provided resources and tools. To connect your MCP client to the MCP server, refer to your MCP client documentation, as each one has its own configuration.

Also, to make sure everything's working, you can connect to your MCP server with the official MCP inspector by running npx @modelcontextprotocol/inspector. The inspector is an interactive dev tool for testing and debugging MCP servers. Once it's running, you can connect the inspector to your MCP server and fetch the resources and call tools for testing purposes.
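
For example, assuming fast-mcp's default mount path in a Rails app, a quick inspector session could look like this (the exact URL depends on your initializer):

$ npx @modelcontextprotocol/inspector
# In the inspector UI, choose the SSE transport and point it at your app,
# e.g. http://localhost:3000/mcp/sse (assuming the default path_prefix and sse_route)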

Resources

Resources provide structured access to information that the host application can retrieve and provide to AI models as context. Resources are application-controlled. This means applications decide how they retrieve, process, and present available context. Common interaction patterns include tree or list views for browsing resources in familiar folder-like structures, search and filter interfaces for finding specific resources, automatic context inclusion based on heuristics or AI selection, and manual selection interfaces.

Resources are identified with a unique URI-based string. For our timetracker MCP server, we're going to define the timetracker://projects resource, which will return the active projects. We need this resource because each entry belongs to a project, so we need to know which project before we can create an entry. We will define it like this:

class ProjectsResource < ApplicationResource
  uri "timetracker://projects"
  resource_name "Projects"
  description "Active projects"
  mime_type "application/json"

  def content
    JSON.generate(Project.active.as_json)
  end
end
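
The resource assumes an active scope on Project. As a hypothetical sketch (your Project model may define "active" differently):

class Project < ApplicationRecord
  has_many :entries

  # How "active" is determined is an assumption; adapt it to your schema
  scope :active, -> { where(active: true) }
end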

Entries belong to users, but we won't have a timetracker://users resource. This is because we don't want the LLM to decide which user it will call the tool with. The tool should know which user is creating the entry. And for that, we need authentication.

Authentication

Before continuing, we need to secure our connections.

fast-mcp supports token authentication. This means that connections from an MCP client to the MCP server must include a bearer token. We can turn on token authentication in the initializer. This is the first step: no one without the bearer token can connect to our server to query our resources or call our tools.
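
As a rough sketch, and assuming the initializer generated by fast_mcp:install follows fast-mcp's FastMcp.mount_in_rails API (check the generated file and the gem's documentation, since defaults vary between versions), enabling token authentication could look like this:

# config/initializers/fast_mcp.rb (simplified sketch, not the full generated file)
FastMcp.mount_in_rails(
  Rails.application,
  name: "timetracker",
  version: "1.0.0",
  # Only requests carrying this bearer token can reach the MCP endpoints
  authenticate: true,
  auth_token: Rails.application.credentials.dig(:mcp, :auth_token)
) do |server|
  Rails.application.config.after_initialize do
    # Register every tool and resource defined in app/tools and app/resources
    server.register_tools(*ApplicationTool.descendants)
    server.register_resources(*ApplicationResource.descendants)
  end
end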

But that's not enough. The bearer token does not identify a user within the server. We need to authenticate users so that only real users can have access to our resources and tools.

To do that, we need the timetracker users to have a secure token. Let's add it:

$ rails g migration add_mcp_token_to_users mcp_token:string

Then let's update the user model so it has the secure token:

class User < ApplicationRecord
  # ...
  has_secure_token :mcp_token
  # ...
end
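
has_secure_token generates the token automatically when a new user is created and also gives us a regenerate_mcp_token method, handy if a token ever needs to be rotated:

# Rails console (the user lookup is just an example)
user = User.first
user.mcp_token            # generated on create for new users; existing users get one via the migration below
user.regenerate_mcp_token # rotate the token if it leaks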

Finally, to make sure all active users have an MCP token:

class AddMcpTokenToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :mcp_token, :string

    User.active.each(&:regenerate_mcp_token)
  end
end

Now, with our MCP server secured with token authentication, we need to update the MCP client configuration. Each AI provider has its own configuration, so make sure you edit your MCP client configuration and add the following headers:

  • Authorization: your token from config/initializers/fast_mcp.rb
  • X-MCP-Token: the mcp token that identifies you as a user (eg, User.first.mcp_token)

To make sure users are authenticated with the MCP token, we need to define a current_user method in app/tools/application_tool.rb:

class ApplicationTool < ActionTool::Base
  private

  def current_user
    user = User.active.find_by mcp_token: headers["x-mcp-token"]

    raise "MCP Token is invalid" unless user.present?

    user
  end
end

We could do the same with resources, but in this particular case, it's not important.
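
For completeness, if we did want per-user resources, the same check could live in app/resources/application_resource.rb. This is a hypothetical sketch that assumes fast-mcp exposes the request headers to resources the same way it does to tools; check the gem's documentation before relying on it:

class ApplicationResource < ActionResource::Base
  private

  def current_user
    # Assumes resources can read request headers like tools do
    user = User.active.find_by mcp_token: headers["x-mcp-token"]

    raise "MCP Token is invalid" unless user.present?

    user
  end
end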

Now that we have authentication in place, let's move on to our tools and see how we define them.

Tools

Tools enable AI models to perform actions through server-implemented functions. The model requests tool execution based on context.

Tools are schema-defined interfaces that LLMs can invoke. MCP uses JSON Schema for validation. Each tool performs a single operation with clearly defined inputs and outputs. Most importantly, tool execution requires explicit user approval, ensuring users maintain control over actions taken by a model.

With the authentication we just implemented, the tools we define will have access to the authenticated user by just calling current_user. The "create entry tool" will use the current user to track time against it. So let's define the create entry tool:

class CreateEntryTool < ApplicationTool
  description "Create an entry"

  arguments do
    required(:project_id).filled(:integer).description("ID of the project, use the projects resource if it's unknown")
    optional(:date).maybe(:date).description("Date of the entry")
    required(:duration).filled(:string).description("Duration of the entry, always a float (eg. 2.0)")
    optional(:comments).maybe(:string).description("Comments, leave empty if it hasn't been provided")
  end

  def call(project_id:, duration:, comments: nil, date: Date.current)
    Entry.create(date: date, duration: duration.to_f, project_id: project_id, user: current_user, comments: comments)

    "Entry was successfully created."
  end
end

If you're used to reading Ruby code, you'll find the tool's code quite self-explanatory: it has a description, an arguments block, and a call method. The arguments block defines the tool's input as a dry-schema. In this case, we're going to receive:

  • a required project_id
  • an optional date that defaults to today (see the call method signature)
  • a required duration
  • an optional comments string, that defaults to nil (see the call method signature)

Pay attention to the descriptions; they're super important! They are the information the LLM uses to determine when and how to call this tool. You can think of them as the tool's "system prompt", so everything you know about prompt engineering applies here, too. For example, the description of project_id tells the LLM to use the projects resource if the user hasn't specified the project ID. It feels like cheating!

The call method is the actual tool. It receives the arguments the LLM sent and creates an entry, as you would do in a normal controller. Unlike resources, the output is a string that has the info you want to send back to the model.
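
One possible refinement, not shown in the tool above but useful in practice, is to surface validation errors back to the model so it can correct itself and retry:

def call(project_id:, duration:, comments: nil, date: Date.current)
  entry = Entry.new(date: date, duration: duration.to_f, project_id: project_id, user: current_user, comments: comments)

  if entry.save
    "Entry was successfully created."
  else
    # The returned string goes straight back to the LLM as the tool result
    "Entry could not be created: #{entry.errors.full_messages.to_sentence}"
  end
end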

Now that we have a working tool, we can ask the LLM:

> log 4 hours to the Test project yesterday

If the agent is properly connected, it already knows the type of each parameter before you ask. It will take your prompt and convert the information it was given:

  • the duration becomes "4.0"
  • yesterday's date becomes a date in YYYY-MM-DD format

It doesn't know the project ID, but it understands the hints from the description. So instead of asking which project ID it should use, it will read the timetracker://projects resource and match the Test project against the results. If it can't find a match, it will ask you.

That's pretty cool.

So now let's see what the response looks like:

⏺ I'll help you log 4 hours to the Test project for yesterday. Let me first get the project information and then create the entry.
⏺ listMcpResources(List MCP resources from server "timetracker")
  ⎿  [
       {
         "name": "Projects",
     …
⏺ readMcpResource(Read resource "timetracker://projects" from server "timetracker")
  ⎿  {
       "contents": [
         {
     …
⏺ I found the Test project (ID: 12). Now I'll create a time entry for 4 hours yesterday
  (2025-08-26).
⏺ timetracker - CreateEntryTool (MCP)(project_id: 12, duration: "4.0", date: "2025-08-26", comments:
                                     "")
  ⎿  Entry was successfully created.
⏺ Created a 4-hour time entry for the Test project on August 26, 2025.

Conclusion

MCP is a very powerful protocol that allows simple applications to become part of a more complex conversational interface. Imagine the possibilities: instead of using unintuitive, complex UIs for CRM, inventory management, or project management tools, we could simply describe what we want to accomplish, and the LLM would handle it for us. It would be like having a conversation with our tools.

On the technical side, and thanks to Ruby and fast-mcp, providing an MCP server for your app is practically as simple as it could be. The complexity lies in properly authenticating and authorizing users, and filtering the tools and resources available to them. This is not covered in this post, but it's possible with fast-mcp. At the end of the day, you should consider tools and resources as regular Rails actions and take the same security measures you would take with them.


At SINAPTIA, we specialize in helping businesses implement AI solutions that deliver real value. If you want to discuss how conversational UIs could benefit your business workflows, we'd love to help you explore what's possible.
