Kenan Bharatbhai Gain

The Future of Local LLM Execution: Running Language Models Locally with Ollama, ONNX, and More

In today’s rapidly evolving AI landscape, large language models (LLMs) have revolutionized how we interact with technology. From text generation to advanced chatbots, LLMs can assist in diverse applications like customer service, content creation, and more. Traditionally, most developers rely on cloud-based solutions such as OpenAI’s GPT or Google’s Bard. However, these come with drawbacks like high costs, latency, and concerns over data privacy.

Thankfully, new tools are making it easier to run LLMs locally, providing cost-effective, privacy-conscious alternatives without sacrificing performance. This post explores various local LLM execution platforms such as Ollama, ONNX, Wasm, GPT-J, and TensorFlow.js. We’ll dive into the pros and cons of each, explain how to set them up, and explore why local execution may be the future of LLM deployment.

Why Run LLMs Locally?

Running LLMs locally has several advantages:

  1. Cost-Effectiveness: Avoid cloud infrastructure costs and the need for expensive APIs.
  2. Data Privacy: All data processing happens on your machine, ensuring privacy.
  3. Customization: Local models allow you to tweak and fine-tune based on your specific needs.
  4. Latency Reduction: Running models locally reduces network delays.

Now, let’s dive into the top platforms for local execution and how to use them effectively.


1. Wasm (WebAssembly)

Wasm (WebAssembly) is a portable binary format that browsers execute at near-native speed, which makes it possible to run lightweight models entirely in the browser. This makes it perfect for applications that require client-side execution, such as basic NLP tasks, without needing a backend.

Pros:

  • Cross-Platform: Works on any browser, so no installations are needed.
  • Client-Side Execution: Keeps everything local, ideal for privacy-sensitive applications.

Cons:

  • Limited Model Size: Can’t efficiently handle larger models like GPT-3 due to browser memory constraints.
  • Lower Performance: More suitable for smaller models like GPT-2 or distilled models.

How to Run:

To run a model with Wasm, you load a Wasm-compiled inference runtime and the model weights from your page and run inference entirely client-side. Libraries such as Transformers.js handle this plumbing for you, as we’ll see later.

Best for: Lightweight models and basic tasks where no server infrastructure is required.



2. ONNX (Open Neural Network Exchange)

ONNX is an open interchange format for deep learning models, and the accompanying ONNX Runtime executes those models locally on either CPU or GPU. It supports exports from a wide range of frameworks, including PyTorch and TensorFlow, and allows for hardware acceleration, making it one of the best tools for running larger, more complex models locally.

Pros:

  • Local Execution: Models are optimized for local hardware, offering performance benefits.
  • GPU Support: If you have the hardware, ONNX allows for GPU acceleration, which can drastically improve inference times.

Cons:

  • Complex Setup: Requires model conversion and optimization.
  • High Resource Usage: Larger models will need strong hardware.

How to Run:

  1. Install ONNX runtime:
   pip install onnxruntime
  2. Convert your model to ONNX format:
   import torch

   # your_model is a trained torch.nn.Module; dummy_input is an example tensor
   # with the input shape the model expects (it is only used to trace the graph).
   torch.onnx.export(your_model, dummy_input, "model.onnx")
  3. Run the model locally (a fuller end-to-end sketch follows below):
   import onnxruntime as rt

   session = rt.InferenceSession("model.onnx")
   outputs = session.run(None, {session.get_inputs()[0].name: input_array})
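
For a concrete picture of the whole round trip, here is a minimal, self-contained sketch that exports a tiny PyTorch model and runs it with ONNX Runtime. The network, tensor shapes, and names are made up purely for illustration; substitute your own trained model and inputs.

   import numpy as np
   import torch
   import torch.nn as nn
   import onnxruntime as rt

   # A tiny stand-in network; replace with your own trained model.
   model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
   model.eval()

   # Export: the dummy input only defines the expected shape and dtype.
   dummy_input = torch.randn(1, 16)
   torch.onnx.export(model, dummy_input, "model.onnx",
                     input_names=["input"], output_names=["output"])

   # Inference with ONNX Runtime (CPU by default; install onnxruntime-gpu for CUDA).
   session = rt.InferenceSession("model.onnx")
   result = session.run(["output"], {"input": np.random.randn(1, 16).astype(np.float32)})
   print(result[0].shape)  # (1, 4)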

Best for: Users with custom-trained models who need optimized performance on their local hardware.


3. GPT-J / GPT-Neo

OpenAI’s GPT-3 models are powerful but come with restrictions in terms of usage and control. GPT-J and GPT-Neo are open-source alternatives that can be run locally, giving you more flexibility without sacrificing performance.

Pros:

  • Open Source: Full control over the model and its setup.
  • High Quality: Competitive with GPT-3, providing powerful text generation capabilities.

Cons:

  • Hardware Requirements: Needs a high-performance GPU (8GB+ VRAM) to run efficiently.
  • Complex Setup: Requires a fair bit of technical expertise to run locally.

How to Run:

  1. Install GPT-J:
   git clone https://github.com/kingoflolz/mesh-transformer-jax
   cd mesh-transformer-jax
   pip install -r requirements.txt
  2. Fine-tune or run the model locally by setting up the environment and passing input data. For plain inference, a simpler route using the Hugging Face transformers library is sketched below.
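
If you only need local inference rather than training, the full mesh-transformer-jax setup is more than you need; a common alternative is loading the released checkpoints through the Hugging Face transformers library. A rough sketch, assuming transformers and torch are installed and you have the memory for it (GPT-J-6B is a large download that needs substantial RAM or VRAM; the GPT-Neo 1.3B/2.7B checkpoints are much lighter):

   from transformers import AutoModelForCausalLM, AutoTokenizer

   # Swap in "EleutherAI/gpt-neo-1.3B" for a much smaller download.
   model_id = "EleutherAI/gpt-j-6B"
   tokenizer = AutoTokenizer.from_pretrained(model_id)
   model = AutoModelForCausalLM.from_pretrained(model_id)

   inputs = tokenizer("Running LLMs locally means", return_tensors="pt")
   output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True)
   print(tokenizer.decode(output_ids[0], skip_special_tokens=True))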

Best for: Running powerful, customizable models for local AI applications.


4. Transformers.js

Transformers.js allows you to run NLP models directly in the browser using JavaScript, leveraging the power of WebAssembly (Wasm). It’s ideal for browser-based applications that need LLM functionality without depending on a server.

Pros:

  • Runs in Browser: Everything stays client-side, which is great for privacy.
  • Popular Models Available: You can run pre-trained models like GPT-2, BERT, etc.

Cons:

  • Limited Model Size: Larger models like GPT-3 cannot run effectively in the browser.
  • Performance: Browser execution may be slower compared to server-side.

How to Run:

  1. Load Transformers.js (it is distributed as an ES module, so use a module script):
   <script type="module">
     import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';
   </script>
  2. Create a pipeline and run a model (inside that same module script):
   const generator = await pipeline('text-generation');
   const output = await generator('Hello, I am a language model');

Best for: Small NLP tasks that need to run in-browser with no backend dependencies.


5. TensorFlow.js

TensorFlow.js is another powerful library for running machine learning models in the browser or in a Node.js environment. It supports both CPU and GPU execution, making it a flexible tool for web-based ML applications.

Pros:

  • Runs in Browser or Node.js: Offers flexible deployment options.

  • Pre-Trained Models: Models like BERT are available for local execution.

Cons:

  • Limited by Model Size: Large models are constrained by browser memory.
  • Complexity: Requires models to be converted to TensorFlow.js format for browser use.

How to Run:

  1. Convert your model to TensorFlow.js format:
   tensorflowjs_converter --input_format=tf_saved_model /path/to/saved_model /path/to/tfjs_model
  2. Run in the browser:
   const model = await tf.loadGraphModel('path/to/model.json');
   const prediction = model.predict(tf.tensor(inputData));

Best for: Web-based applications requiring flexible deployment in both client and server environments.



6. Ollama (Local LLM Execution)

Ollama is a newcomer to the local LLM scene, offering a streamlined experience for running models like LLaMA and Mistral directly on your machine. It’s incredibly user-friendly and removes much of the complexity traditionally associated with running LLMs locally.

Pros:

  • Easy to Use: Simple setup with commands like ollama run and ollama list.
  • Local Execution: Runs entirely on your machine, which means zero cloud costs and complete control over data privacy.
  • Versatile Models: Supports various LLMs, including LLaMA and Mistral.

Cons:

  • Hardware Dependency: Running larger models will require a machine with sufficient memory and processing power.
  • Limited Scalability: While Ollama works great for small to medium projects, it lacks the scalability of cloud-based solutions for larger deployments.

How to Run:

  1. Install Docker and pull Ollama’s Docker image (Ollama also ships native installers if you prefer to skip Docker):
   docker pull ollama/ollama
  2. Start the Ollama server container, mapping the API port and a volume for downloaded models:
   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  3. Run a specific model inside the container (with a native install, plain ollama run <model_name> does the same):
   docker exec -it ollama ollama run <model_name>
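
Once the server is running, Ollama also exposes a local REST API on port 11434, so you can call models from your own code instead of the interactive prompt. A minimal sketch in Python, assuming the requests package is installed and that you have already pulled a model such as mistral:

   import requests

   # Ollama's local API listens on http://localhost:11434 by default.
   response = requests.post(
       "http://localhost:11434/api/generate",
       json={
           "model": "mistral",          # any model you have pulled locally
           "prompt": "Why run LLMs locally?",
           "stream": False,             # return one JSON object instead of a token stream
       },
   )
   print(response.json()["response"])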

Best for: Developers who want to run LLMs locally without complex setups, making it ideal for small to medium projects.


Conclusion: Which Local Execution Option Is Right for You?

If you’re just starting out or need a simple, user-friendly setup, Ollama is the perfect tool. For users needing more performance and hardware-accelerated execution, ONNX or GPT-J may be better choices. Meanwhile, browser-based options like Wasm and Transformers.js provide excellent lightweight alternatives for client-side applications.

Running LLMs locally is quickly becoming a practical, efficient alternative to cloud solutions, empowering developers with more control, lower costs, and greater privacy. Ready to dive in?
