
Orquesta


Manage Local LLMs with Orquesta CLI and Dashboard Sync

Originally published at orquesta.live/blog/manage-local-llms-with-orquesta-cli

Harnessing the power of large language models (LLMs) locally offers distinct advantages, from keeping your code private to using your machine's full processing power without network latency. Orquesta CLI is designed to make this process straightforward and integrated with cloud-based tools, offering a powerful solution for seamlessly managing LLM backends such as Claude, OpenAI, Ollama, and vLLM.

Running LLMs Locally: Why It Matters

Running LLMs locally on your own infrastructure provides a spectrum of benefits. Primarily, it ensures that your data never leaves your secured environment, a crucial consideration for teams handling sensitive information. Furthermore, local deployment allows you to leverage your machine's full capabilities, bypassing the latency that can come with cloud-based processing.

With Orquesta CLI, we ensure that you can have the best of both worlds: the privacy and power of local execution with the collaborative and management benefits of cloud integration.

Orquesta CLI: Local Management Meets Cloud Integration

The Orquesta CLI is more than just a command-line tool; it's a bridge between local operations and cloud-based management. Here's how it stands out:

  • Local LLM Management: With support for backends such as Claude, OpenAI, Ollama, and vLLM, Orquesta CLI lets you manage your entire LLM setup from your local machine.
  • Cloud Dashboard Sync: While your code and data remain local, Orquesta supports seamless synchronization of configurations and prompt history with the cloud dashboard.
  • Prompt History Tracking: Every interaction, every prompt, and the corresponding model response are tracked. This enables teams to maintain an audit trail and revisit any past interactions for analysis or debugging.

Key Features in Detail

  1. Org-Scoped Tokens and Configuration Sync

Orquesta uses organization-scoped tokens to manage access and permissions across your team. This means each team member can interact with the LLMs without compromising security.

The bi-directional configuration sync between your local environment and the Orquesta dashboard ensures that any changes made locally or on the cloud are consistently reflected across both platforms. This includes model configurations, prompt history, and access permissions.

   {
       "token": "org-scoped-token",
       "model": "openai-gpt-3",
       "config": {
           "temperature": 0.7,
           "max_tokens": 150
       }
   }
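As a rough illustration (not the actual CLI internals), a local tool might load and validate a config of this shape before use. The field names below simply mirror the example above; the `load_config` helper and its checks are assumptions for the sketch:

```python
import json

# Config mirroring the snippet above; the schema is illustrative,
# not Orquesta's actual on-disk format.
RAW_CONFIG = """
{
    "token": "org-scoped-token",
    "model": "openai-gpt-3",
    "config": {
        "temperature": 0.7,
        "max_tokens": 150
    }
}
"""

def load_config(raw: str) -> dict:
    """Parse the config and check the fields the examples rely on."""
    cfg = json.loads(raw)
    for key in ("token", "model", "config"):
        if key not in cfg:
            raise ValueError(f"missing required field: {key}")
    # Temperatures outside [0, 2] are rejected by most LLM APIs.
    if not 0.0 <= cfg["config"].get("temperature", 1.0) <= 2.0:
        raise ValueError("temperature out of range")
    return cfg

config = load_config(RAW_CONFIG)
```

Validating locally before any sync means a malformed config fails fast on the developer's machine rather than after it has propagated to the dashboard.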
  2. Prompt History and Audit Trails

Every prompt you execute is logged with a timestamp and the response from the LLM, creating a comprehensive history accessible from both the CLI and the cloud dashboard. This audit trail is crucial for debugging, improving prompts, and maintaining transparency across team activities.

   # Example CLI command to view history
   orquesta-cli history --model=openai-gpt-3
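The exact shape of a history entry isn't documented here, so the following is a hypothetical sketch of how a CLI might append timestamped prompt/response pairs to a local JSONL log. The `log_interaction` helper and its field names are assumptions, not Orquesta's actual schema:

```python
import json
import tempfile
import time
from pathlib import Path

def log_interaction(log_path: Path, model: str, prompt: str, response: str) -> dict:
    """Append one timestamped prompt/response record to a JSONL history file."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Usage: log one interaction, then read the history back.
path = Path(tempfile.mkdtemp()) / "history.jsonl"
log_interaction(path, "openai-gpt-3", "Summarize this diff", "The diff renames...")
entries = [json.loads(line) for line in path.read_text().splitlines()]
```

An append-only JSONL log is a natural fit for audit trails: entries are never rewritten, and each line stands alone for filtering by model or time range.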
  3. Bidirectional Sync

Unlike solutions that only let you push configurations one way, Orquesta’s bi-directional sync ensures that your settings are always up to date, regardless of where changes are made. This eliminates configuration drift and keeps your team aligned.

   # Sync local changes to cloud
   orquesta-cli sync --direction=push

   # Sync cloud changes to local
   orquesta-cli sync --direction=pull
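Under the hood, any bi-directional sync has to reconcile the same setting changed in two places. As a concept sketch (not Orquesta's actual algorithm), a simple last-write-wins merge keyed on per-field modification times looks like this; the `merge` function and the tuple layout are assumptions for illustration:

```python
# Concept sketch: last-write-wins reconciliation for a bi-directional
# config sync. Each value carries the Unix time it was last modified.

def merge(local: dict, remote: dict) -> dict:
    """Merge two {key: (value, modified_at)} maps, keeping the newer value."""
    merged = dict(local)
    for key, (value, modified_at) in remote.items():
        if key not in merged or modified_at > merged[key][1]:
            merged[key] = (value, modified_at)
    return merged

local = {"temperature": (0.7, 100), "max_tokens": (150, 200)}
remote = {"temperature": (0.9, 300)}  # edited later in the dashboard
resolved = merge(local, remote)
# temperature is taken from remote (newer); max_tokens stays local
```

Last-write-wins is the simplest policy that prevents drift; systems needing stronger guarantees typically layer vector clocks or explicit conflict prompts on top.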

Real-World Use Cases

Consider a scenario where your team is developing a feature that requires iterative prompt tuning and model configuration. With Orquesta CLI, team members can fine-tune prompts locally, experiment with different model configurations, and instantly share these trials with the entire organization. This fosters a collaborative environment where learning from each other's experiments becomes second nature.

Another case could be compliance and audit requirements. With prompt history tracked and stored securely, any organization can easily demonstrate how models were used and the outputs they generated, a critical feature for industries with strict regulatory standards.

Conclusion

Orquesta CLI is not just about local LLM execution; it's a thoughtful integration of local and cloud-based operations, designed to provide the best of both worlds. By using Orquesta, teams can manage LLMs securely, efficiently, and collaboratively, ensuring that everyone is on the same page, whether they are operating locally or working from the cloud dashboard. This synergy not only simplifies LLM management but also significantly enhances collaboration and transparency within teams.
