Luca Liu


How to chat with Local LLM in Obsidian

Introduction

In Why You Should Try a Local LLM Model—and How to Get Started, I showed how to set up a local LLM using LM Studio.

In this article, I will show you how to chat with a local LLM model in Obsidian.

Method

Obsidian’s Copilot plugin allows you to connect to a custom model, enabling you to use AI-generated insights directly within your markdown workspace. Here’s a step-by-step guide to setting it up.

Step 1: Install the Copilot Plugin in Obsidian

  1. Open Obsidian and go to Settings > Community Plugins.
  2. Enable Community Plugins if you haven’t already.
  3. Search for Copilot in the plugin library and click Install.
  4. Once installed, enable the plugin and access its settings via Settings > Copilot.

Step 2: Add Your Local LLM Model in Copilot

  1. In the Provider field, select lm-studio.
  2. Enter the Model Name.
  3. Click Verify Connection to ensure that Copilot can communicate with the model.

Once the connection is verified, click Add Model. Your custom model will now appear in the list of available models.
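
If Verify Connection fails, you can test the server from outside Obsidian. LM Studio's local server exposes an OpenAI-compatible API, by default at http://localhost:1234/v1. Here is a minimal sketch using the openai Python package to list the models the server is serving; the port is LM Studio's default, and the dummy API key is a placeholder (LM Studio doesn't check it):

```python
# Minimal sketch: confirm LM Studio's local server is reachable.
# Assumes LM Studio's default port (1234) and that a model is already
# loaded in its local server tab.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # placeholder; LM Studio ignores the key
)

# Each ID printed here is a valid value for Copilot's Model Name field.
for model in client.models.list():
    print(model.id)
```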

Step 3: Enable the Custom Model in General Settings

  1. In Copilot > General Settings, select the custom model you just added.
  2. Make sure CORS is enabled in both LM Studio and Copilot.

Step 4: Open the Copilot Chat Window

Press Ctrl + Shift + P (Windows) or Cmd + Shift + P (Mac) to open the command palette, then select Open Copilot Chat Window.

Now you can chat with your local LLM model in Obsidian's sidebar.
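
Under the hood, each chat turn is essentially a chat-completion request against that same local endpoint. If you want to script the model outside Obsidian, a minimal sketch of the same round trip looks like this ("your-model-name" is a placeholder for whichever model ID you loaded in LM Studio):

```python
# Minimal sketch of a single chat turn against LM Studio's local server.
# "your-model-name" is a placeholder; use the model ID shown in LM Studio.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Summarize the idea of running LLMs locally."}],
)
print(response.choices[0].message.content)
```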

Conclusion

And that's it: with LM Studio serving the model and the Copilot plugin pointed at it, you can chat with a local LLM directly inside Obsidian, without sending your notes to an external API.


Explore more

Thank you for taking the time to explore data-related insights with me. I appreciate your engagement.

🚀 Connect with me on LinkedIn
