🎯 **Goal:** Set up a LiteLLM proxy on your Mac in 15-20 minutes using the modern UV package manager, without creating a formal project structure.
## 📋 What You'll Need
- Mac with macOS 10.15 or newer
- Internet connection
- 20 minutes of your time
- One API key (OpenAI, Anthropic, Groq, or other)
## 🛠️ Step 1: Install Required Tools
### 1.1 Install Homebrew (if not already installed)
Open Terminal and run:
```bash
# Check if Homebrew is installed
brew --version

# If you get an error, install Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
### 1.2 Install UV (modern Python package manager)
```bash
# Install UV via Homebrew
brew install uv

# Verify installation
uv --version
```
### 1.3 Install PostgreSQL
```bash
# Install PostgreSQL
brew install postgresql@14

# Start the PostgreSQL service (use the versioned formula name)
brew services start postgresql@14

# Verify it's running
brew services list | grep postgresql
```
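Note: Homebrew treats versioned formulas like `postgresql@14` as keg-only, so `psql` and `createdb` may not land on your PATH. If the commands in the next step aren't found, add the keg's bin directory (the path below is the Apple Silicon default; on Intel Macs Homebrew lives under `/usr/local`):

```bash
# Apple Silicon default Homebrew prefix; use /usr/local/opt/postgresql@14/bin on Intel Macs
echo 'export PATH="/opt/homebrew/opt/postgresql@14/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```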
## 💾 Step 2: Create Database
```bash
# Create a database for LiteLLM
createdb litellm_db

# Verify the database was created
psql -l | grep litellm_db

# Get your username for the database connection
whoami
```
Write down the output of `whoami`: this is your database username!
## 🐍 Step 3: Set Up Virtual Environment
Instead of initializing a full project, we'll create a simple directory with a virtual environment.
```bash
# Go to your preferred directory, e.g., Documents
cd ~/Documents

# Create a folder for our setup and navigate into it
mkdir litellm-setup
cd litellm-setup

# Create a virtual environment using UV
uv venv

# Activate the virtual environment
source .venv/bin/activate

# Your terminal prompt should now start with (.venv)
```
⚠️ **Important:** All subsequent commands should be run inside this activated environment.
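A quick way to confirm the right interpreter is active:

```bash
# Should print a path ending in litellm-setup/.venv/bin/python
which python
```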
## 📦 Step 4: Install LiteLLM and Dependencies
```bash
# Install LiteLLM with proxy support and database drivers
uv pip install "litellm[proxy]" prisma psycopg2-binary

# Verify installation (since the environment is active, you don't need "uv run")
litellm --version
```
## 🔧 Step 5: Generate Prisma Client
LiteLLM uses Prisma for database interactions. We need to generate the Python client:
```bash
# Find the exact path to schema.prisma within the installed litellm package
find .venv -name "schema.prisma" -path "*/litellm/*"

# Generate the Prisma client (replace the path with the output from the command above)
python -m prisma generate --schema .venv/lib/python3.12/site-packages/litellm/proxy/schema.prisma
```
**Important:** Your Python version (e.g., `python3.12`) may differ. Use the `find` command to get the correct path.
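If you'd rather not copy the path by hand, a small shell substitution can feed the `find` result straight into the generate command. This is just a convenience sketch; it assumes `find` returns exactly one match:

```bash
# Pipe the located schema path directly into prisma generate
python -m prisma generate --schema "$(find .venv -name "schema.prisma" -path "*/litellm/*" | head -n 1)"
```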
## ⚙️ Step 6: Create Configuration
### 6.1 Create `config.yaml`
```bash
# Create the configuration file
nano config.yaml
```
Insert the following content:
```yaml
litellm_settings:
  drop_params: true

general_settings:
  # Main settings will be taken from environment variables

# List of available models
model_list:
  - model_name: groq-llama3-fast # Your custom alias (any name)
    litellm_params:
      model: groq/llama3-8b-8192 # Real model name from LiteLLM
      api_key: os.environ/GROQ_API_KEY # Key via environment variables

  - model_name: gpt-3.5-turbo # Additional example with OpenAI
    litellm_params:
      model: gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY
```
Save the file: `Ctrl+O`, then `Enter`, then `Ctrl+X`.
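Since the `.env` file in the next step also defines `ANTHROPIC_API_KEY`, you can add a Claude entry to `model_list` in the same pattern. This is an illustrative sketch; verify the exact model identifier against LiteLLM's supported-models list:

```yaml
  - model_name: claude-sonnet # Your custom alias
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620 # Example ID; check LiteLLM's docs for current names
      api_key: os.environ/ANTHROPIC_API_KEY
```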
### 6.2 Create environment variables file
```bash
# Create the .env file
nano .env
```
Insert the following (replace YOUR_USERNAME with the output of the `whoami` command):
```
# LiteLLM settings
LITELLM_MASTER_KEY="sk-1234"
LITELLM_SALT_KEY="sk-1234"
LITELLM_DROP_PARAMS=True

# Server settings
PORT=4000
STORE_MODEL_IN_DB="True" # Allow storing models in the DB

# Database connection (replace 'admin' with YOUR_USERNAME if it's different)
DATABASE_URL=postgresql://admin@localhost:5432/litellm_db

# API keys (add your real keys)
GROQ_API_KEY="your-groq-api-key-here"
OPENAI_API_KEY="your-openai-key-here"
ANTHROPIC_API_KEY="your-anthropic-key-here"
```
Save the file: `Ctrl+O`, then `Enter`, then `Ctrl+X`.
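Before starting the proxy, you can sanity-check that the connection string in `DATABASE_URL` actually works (replace `admin` with your username):

```bash
# A successful connection prints a single row containing "1"
psql "postgresql://admin@localhost:5432/litellm_db" -c "SELECT 1;"
```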
## 🚀 Step 7: Start LiteLLM
```bash
# Start the LiteLLM proxy (it will automatically load the .env file)
litellm --config config.yaml --port 4000

# On the first run, LiteLLM will automatically create tables in the database.
# You'll see output like:
#   LiteLLM: Proxy initialized with Config, Set models:...
#   INFO: Uvicorn running on http://0.0.0.0:4000
```
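From a second terminal you can probe the proxy's health endpoints. The paths below match current LiteLLM docs, but they may change between versions:

```bash
# Liveness probe (no auth required)
curl http://localhost:4000/health/liveliness

# Full health check of configured model endpoints (requires the master key)
curl -H 'Authorization: Bearer sk-1234' 'http://localhost:4000/health'
```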
## 🔍 Step 8: Verify Everything Works
### 8.1 Open the UI
In your browser, go to: http://localhost:4000
You'll see the LiteLLM web interface with model management, keys, and statistics!
### 8.2 Test a Model via the UI
1. In the web interface, find the "Test Key" section.
2. Select the `groq-llama3-fast` model.
3. Write a test message, e.g.: "Hello! How are you?"
4. Click send.
5. You'll get a response from the model!
### 8.3 Test the API via Terminal
Open a new terminal and run:
```bash
curl -X POST 'http://localhost:4000/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "groq-llama3-fast",
    "messages": [{"role": "user", "content": "Hello! How are you?"}]
  }'
```
If everything works correctly, you'll get a response from the Groq Llama3 model!
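Because the proxy speaks the OpenAI API, any OpenAI-compatible client can talk to it. Here's a minimal sketch using the official `openai` Python package (install it with `uv pip install openai`); the base URL and master key come straight from this guide's setup:

```python
# Minimal sketch: call the LiteLLM proxy through the openai client library
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy address
    api_key="sk-1234",                 # LITELLM_MASTER_KEY from .env
)

response = client.chat.completions.create(
    model="groq-llama3-fast",  # the alias defined in config.yaml
    messages=[{"role": "user", "content": "Hello! How are you?"}],
)
print(response.choices[0].message.content)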
## 🗄️ LiteLLM Database Management
Once the LiteLLM Proxy is running, it stores its data in your PostgreSQL database. Here's how to manage that data.
### Checking Database Content
To view the tables and data in your `litellm_db` database:
1. Connect to the `litellm_db` database using `psql`:

   ```bash
   psql -U admin -h localhost -p 5432 litellm_db
   ```

   Enter your password when prompted.

2. View the list of tables:

   ```
   \dt
   ```

   You will see a list of tables created by LiteLLM, such as: `LiteLLM_AuditLog`, `LiteLLM_BudgetTable`, `LiteLLM_Config`, `LiteLLM_CredentialsTable`, `LiteLLM_CronJob`, `LiteLLM_DailyTagSpend`, `LiteLLM_DailyTeamSpend`, `LiteLLM_DailyUserSpend`, `LiteLLM_EndUserTable`, `LiteLLM_ErrorLogs`, `LiteLLM_GuardrailsTable`, `LiteLLM_HealthCheckTable`, `LiteLLM_InvitationLink`, `LiteLLM_MCPServerTable`, `LiteLLM_ManagedFileTable`, `LiteLLM_ManagedObjectTable`, `LiteLLM_ManagedVectorStoresTable`, `LiteLLM_ModelTable`, `LiteLLM_ObjectPermissionTable`, `LiteLLM_OrganizationMembership`, `LiteLLM_OrganizationTable`, `LiteLLM_PromptTable`, `LiteLLM_ProxyModelTable`, `LiteLLM_SpendLogs`, `LiteLLM_TeamMembership`, `LiteLLM_TeamTable`, `LiteLLM_UserNotifications`, `LiteLLM_UserTable`, `LiteLLM_VerificationToken`, and `_prisma_migrations`.

3. View the schema of a specific table (optional). For example, for the `LiteLLM_SpendLogs` table:

   ```
   \d "LiteLLM_SpendLogs"
   ```

   (Use double quotes if the table name contains uppercase letters or special characters.)

4. View a few records from a table (optional). For example, for the `LiteLLM_SpendLogs` table:

   ```sql
   SELECT * FROM "LiteLLM_SpendLogs" LIMIT 5;
   ```

5. Exit `psql`:

   ```
   \q
   ```
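Once you have some traffic, a quick aggregate over `LiteLLM_SpendLogs` shows per-model cost and token usage. The column names used here (`model`, `spend`, `total_tokens`) match recent LiteLLM schemas; check the `\d` output above if yours differ:

```sql
-- Total spend and tokens per model (column names assume a recent LiteLLM schema)
SELECT model,
       SUM(spend)        AS total_spend,
       SUM(total_tokens) AS total_tokens
FROM "LiteLLM_SpendLogs"
GROUP BY model
ORDER BY total_spend DESC;
```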
### Deleting the Database
If you want to completely delete the `litellm_db` database (e.g., to start with a clean slate), follow these steps. **This operation is irreversible and will result in the loss of all data in this database.**
1. Connect to any other database (e.g., `postgres`), since you cannot delete a database you are currently connected to:

   ```bash
   psql -U admin -h localhost -p 5432 postgres
   ```

   Enter your password.

2. Delete the `litellm_db` database:

   ```sql
   DROP DATABASE litellm_db;
   ```

3. Exit `psql`:

   ```
   \q
   ```

The next time LiteLLM starts, it will automatically recreate the `litellm_db` database and apply all migrations.
### Clearing Data (Logs, Spend, Tokens)
If you want to clear only specific data (e.g., logs or spend records) without deleting the entire database, you can use the `TRUNCATE TABLE` command. This removes all records from a table but leaves the table itself and its structure intact.

**Important:** Make sure the LiteLLM Proxy is not running when you execute these commands, to avoid conflicts.
1. Connect to the `litellm_db` database using `psql`:

   ```bash
   psql -U admin -h localhost -p 5432 litellm_db
   ```

   Enter your password.

2. Clear the desired tables:
* **Request and error logs:**
```sql
TRUNCATE TABLE "LiteLLM_AuditLog" RESTART IDENTITY;
TRUNCATE TABLE "LiteLLM_ErrorLogs" RESTART IDENTITY;
```
(`RESTART IDENTITY` resets auto-incrementing ID counters, if any).
* **Detailed spend records (price, tokens):**
```sql
TRUNCATE TABLE "LiteLLM_SpendLogs" RESTART IDENTITY;
```
* **Aggregated daily spend (users, teams, tags):**
```sql
TRUNCATE TABLE "LiteLLM_DailyUserSpend" RESTART IDENTITY;
TRUNCATE TABLE "LiteLLM_DailyTeamSpend" RESTART IDENTITY;
TRUNCATE TABLE "LiteLLM_DailyTagSpend" RESTART IDENTITY;
```
* **Other tables that may contain data you want to clear (depends on your needs):**
* `LiteLLM_HealthCheckTable`
* `LiteLLM_CronJob`
* `LiteLLM_UserNotifications`
3. Exit `psql`:

   ```
   \q
   ```
## 🔗 Step 9: Create an Alias for Convenience (Optional)
To run LiteLLM from any folder, create an alias:
### 9.1 Add the alias to your shell
```bash
# Open your shell configuration file (for zsh)
nano ~/.zshrc

# Or for bash:
# nano ~/.bash_profile
```
Add this at the end of the file (replace `/Users/admin/Documents/litellm-setup` with your actual path):
```bash
# LiteLLM alias for quick start
alias litellm-start="cd /Users/admin/Documents/litellm-setup && source .venv/bin/activate && litellm --config config.yaml --port 4000"
```
### 9.2 Reload the configuration
```bash
# For zsh:
source ~/.zshrc

# For bash:
# source ~/.bash_profile
```
### 9.3 Now you can start from anywhere
```bash
# From any folder:
litellm-start
# LiteLLM will start automatically!
```
## 🚨 Troubleshooting
**Problem:** "command not found: litellm"

**Solution:** You forgot to activate the virtual environment. Run this command from your project folder:

```bash
source .venv/bin/activate
```

After this, the `litellm` command will be available.
**Problem:** "ModuleNotFoundError: No module named 'prisma'"

**Solution:** Your installation was likely incomplete. Make sure `prisma` is installed in your active environment:

```bash
# Make sure .venv is active
uv pip install prisma
```
**Problem:** "The Client hasn't been generated yet..."

**Solution:** Run the Prisma generate command from Step 5 again, ensuring the path to `schema.prisma` is correct.
**Problem:** "Database connection failed" or "Database not connected"

**Solution:**

```bash
# Check if PostgreSQL is running
brew services list | grep postgresql

# If not running:
brew services start postgresql@14

# Check if the database exists:
psql -l | grep litellm_db
```

Also ensure the `DATABASE_URL` in your `.env` file is correct (username, database name).
**Problem:** "Config file not found"

**Solution:** Make sure the `config.yaml` file is in the directory from which you are running the `litellm` command.
**Problem:** "Invalid API key"

**Solution:** Double-check the API keys in your `.env` file.
**Problem:** "Port 4000 already in use"

**Solution:** Use a different port:

```bash
litellm --config config.yaml --port 4001
```
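To see which process is holding the port (macOS ships with `lsof`):

```bash
# List the process listening on TCP port 4000
lsof -nP -iTCP:4000 -sTCP:LISTEN
```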
## 🎯 What You've Achieved

- ✅ LiteLLM proxy server on localhost:4000
- ✅ Web UI for model management
- ✅ Unified API for all LLM providers
- ✅ Ready-to-share project setup
- ✅ Modern UV Python ecosystem
- ✅ Database integration for advanced features
- ✅ Convenient alias for daily use
Congratulations! You can now use any LLM through a single interface! 🎉
## 💡 Why This Setup is Powerful
- **Unified Interface:** Access OpenAI, Anthropic, Groq, and 100+ other models through one API
- **Cost Tracking:** Built-in spend monitoring and budget controls
- **Rate Limiting:** Prevent API abuse and manage usage
- **Modern Tooling:** UV provides faster, more reliable package management
- **Production Ready:** PostgreSQL backend for scaling
- **Developer Friendly:** Easy to share and reproduce across teams
## 📚 Next Steps
- Add more models to your `config.yaml` (see the Claude example in Step 6.1)
- Create API keys with different permissions and limits
- Set up monitoring and alerts
- Integrate with your applications using the OpenAI-compatible API
- Scale by deploying to cloud platforms
Happy coding! 🚀