Originally published at orquesta.live/blog/orquesta-cli-local-llm-management-dashboard-sync
In building Orquesta, we recognized the need to provide robust local management of large language models (LLMs) while maintaining seamless synchronization with cloud-based dashboards. Enter Orquesta CLI: a tool designed for developers who need the power of local AI execution without sacrificing the convenience of cloud features.
The Need for Local LLM Management
Developers often face the challenge of managing AI models across various use cases and infrastructure setups. Running LLMs locally offers several advantages:
- Data Privacy: Keeping data within your infrastructure minimizes exposure.
- Performance: Local execution can tap into existing hardware capabilities, reducing latency.
- Customization: Tailored environments meet specific project needs.
With Orquesta CLI, you can manage these models locally and sync configurations and history to the cloud dashboard.
Supported LLMs and Runtimes
Orquesta CLI currently supports several leading model providers and local runtimes:
- Claude: Anthropic's model family, known for nuanced understanding and response generation.
- OpenAI: Provider of the GPT model family, widely used across natural language processing tasks.
- vLLM: A high-throughput inference engine for serving models efficiently on your own hardware.
- Ollama: A lightweight runtime for running open-source models locally.
Bi-directional Configuration Sync
One of the standout features of Orquesta CLI is its ability to sync configurations bi-directionally. This means any changes made locally are reflected in the cloud dashboard and vice versa. This feature ensures consistency and transparency across all environments.
Example Workflow
- Local Setup: You configure your LLMs on your local machine using Orquesta CLI.
orquesta-cli setup --model claude --config /path/to/config.yaml
- Cloud Sync: Once configured, sync these settings to the cloud dashboard.
orquesta-cli sync --to-cloud
- Dashboard Changes: Adjust settings in the cloud dashboard as needed.
- Local Update: Pull down any configuration changes.
orquesta-cli sync --to-local
This seamless synchronization ensures that all team members work with the most up-to-date configurations, enhancing collaboration and reducing errors.
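Any bi-directional sync needs a conflict-resolution policy for when the same setting changes in both places. The Python sketch below illustrates one common approach, last-write-wins by timestamp; the `merge_configs` function and the per-key `updated_at` field are assumptions made for the example, not Orquesta CLI internals.

```python
def merge_configs(local: dict, remote: dict) -> dict:
    """Merge two config snapshots; for each key, the newer 'updated_at' wins.

    Illustrative last-write-wins policy -- the real CLI's strategy may differ.
    """
    merged = {}
    for key in local.keys() | remote.keys():
        l, r = local.get(key), remote.get(key)
        if l is None:
            merged[key] = r          # key only exists remotely
        elif r is None:
            merged[key] = l          # key only exists locally
        else:
            # ISO-8601 timestamps compare correctly as strings.
            merged[key] = l if l["updated_at"] >= r["updated_at"] else r
    return merged

local = {"temperature": {"value": 0.7, "updated_at": "2024-05-02T10:00:00Z"}}
remote = {"temperature": {"value": 0.2, "updated_at": "2024-05-03T09:00:00Z"},
          "max_tokens": {"value": 512, "updated_at": "2024-05-01T08:00:00Z"}}

merged = merge_configs(local, remote)
```

Here the remote `temperature` wins because it was updated later, while `max_tokens` carries over unchanged.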
Managing Prompt History
Prompt history tracking is another crucial feature of Orquesta CLI. Whether for troubleshooting, auditing, or refining AI interactions, maintaining a detailed history is invaluable.
- Persistent Logging: All prompts and their responses are logged locally and can be synced to the dashboard for centralized access.
- Version Control: Utilize git-like features to track changes to your prompt configurations over time.
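Conceptually, persistent prompt logging can be as simple as an append-only JSON Lines file, with each interaction stored as one record. The sketch below is illustrative Python, not the CLI's actual storage format; `log_prompt` and `read_history` are hypothetical helpers.

```python
import json
import tempfile
from pathlib import Path

def log_prompt(log_path: Path, prompt: str, response: str) -> None:
    # Append one JSON record per interaction (JSON Lines format).
    record = {"prompt": prompt, "response": response}
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def read_history(log_path: Path) -> list[dict]:
    # Read the full interaction history back, one record per line.
    with log_path.open() as f:
        return [json.loads(line) for line in f]

log = Path(tempfile.mkdtemp()) / "history.jsonl"
log_prompt(log, "Summarize the report", "Here is a summary...")
log_prompt(log, "Translate to French", "Voici la traduction...")
history = read_history(log)
```

An append-only format like this is easy to sync incrementally and to diff, which is what makes git-like versioning of prompt history practical.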
Example of Prompt History Management
To access the prompt history, use:
orquesta-cli history --view
This command displays a detailed chronicle of interactions, enabling you to analyze and optimize your AI prompts effectively.
Organization-Scoped Tokens
Orquesta CLI simplifies managing tokens across your organization. Tokens are essential for authenticating users and managing permissions within your infrastructure.
- Scoped Access: Limit token access to specific models or functionalities.
- Centralized Management: Update or revoke tokens centrally, syncing changes instantly across environments.
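At request time, scoped access reduces to a simple check: is the token active, and does it carry the required scope? The Python sketch below illustrates the idea; the `Token` structure and the `org-wide` catch-all scope are assumptions for the example, not Orquesta's actual token model.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    token_id: str
    scopes: set[str] = field(default_factory=set)
    revoked: bool = False

def is_allowed(token: Token, required_scope: str) -> bool:
    # A token grants access only if it is active and carries either the
    # required scope or the catch-all "org-wide" scope.
    if token.revoked:
        return False
    return "org-wide" in token.scopes or required_scope in token.scopes

t = Token("tok_123", {"model:claude"})
ok = is_allowed(t, "model:claude")  # granted: scope matches
denied = is_allowed(t, "model:openai")  # denied: outside the token's scope
```

Centralized revocation then amounts to flipping `revoked` in one place and syncing that state to every environment.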
Token Management Commands
Generate a new token for your organization:
orquesta-cli tokens --generate --scope org-wide
Revoke an existing token:
orquesta-cli tokens --revoke --id [TOKEN_ID]
Conclusion
Orquesta CLI bridges the gap between local execution and cloud management, providing developers with a powerful tool for managing LLMs. By syncing configurations, tracking prompt history, and managing organization-scoped tokens, it offers a comprehensive solution that caters to modern AI development needs.
The ability to run any of the supported LLMs locally while syncing seamlessly to a centralized dashboard not only enhances productivity but also ensures that your team's AI endeavors remain secure, efficient, and organized.