πŸš€ Using Local AI Models and APIs as a JavaScript Developer

πŸš€ Key Points on Using Local AI Models and APIs as a JavaScript Developer

  • Research suggests Ollama is straightforward for running open-source LLMs locally on macOS, supporting CPU-only mode without a dedicated GPU by default, while automatically utilizing Apple's Metal framework on M-series chips for acceleration where available.
  • Hugging Face models can run locally in JavaScript via Transformers.js, enabling browser or Node.js execution of open-source models without servers, though performance may vary on macOS without GPU optimization.
  • As a JS developer, LangChain.js and LangGraph.js provide tools for chaining prompts, building agents, and integrating LLMs, with seamless support for local models like those from Ollama or Hugging Face.
  • OpenAI and Gemini APIs offer cloud alternatives with rate limits; evidence leans toward OpenAI's tiered pricing starting around $0.25/1M input tokens for smaller models, while Gemini provides a free tier but charges roughly $0.075–$2.00/1M for input on paid plans, so high usage can get costly.
  • It seems likely that starting with CPU-only setups on macOS is accessible for beginners, but scaling to advanced agents may involve API calls from JS to local servers for better performance.

πŸ“– Overview for JavaScript Developers

As a JavaScript developer new to this, focus on Node.js for server-side setups or browser for lightweight experiments. Local tools like Ollama run as a background service you can call via HTTP from JS, while Transformers.js runs directly in JS environments. LangChain.js helps chain operations, and LangGraph.js builds complex agents. Open-source models (e.g., Llama, Mistral) are free and privacy-focused. For macOS, no extra GPU is neededβ€”CPU works, but M-series chips boost speed via Metal.

🎬 Beginner Steps: Setting Up Local Models

Start with Ollama for simplicity: Download from ollama.com/download, install, and run ollama run llama3.1 in terminal. From JS, use fetch to call its API at http://localhost:11434. For Hugging Face, install @huggingface/transformers via npm and load models like BERT.

βš’οΈ Integrating with LangChain.js and APIs

Use LangChain.js to wrap models: Install @langchain/core, create prompts, and chain to LLMs. Add OpenAI/Gemini via their SDKs, but monitor limitsβ€”OpenAI caps free tiers at low RPM, Gemini at 1,500 requests/day free.


πŸ‘‰ Comprehensive Guide to Local AI Models, LangChain/LangGraph, and APIs for JavaScript Developers

This detailed survey covers everything from beginner basics to advanced integrations, tailored for a JavaScript developer. We'll start with foundational setups for running open-source AI models locally using Ollama and Hugging Face, addressing scenarios with and without a GPU (on macOS, that means the CPU, or Metal acceleration on Apple Silicon). Then we'll explore LangChain.js and LangGraph.js for building applications, and finally incorporate the OpenAI and Gemini APIs with their limits. Everything is laid out step by step, assuming you're starting from scratch with Node.js installed. The open-source models emphasized here are free, community-driven, and available on platforms like the Hugging Face Hub.

Section 1: Understanding Local AI Setups

Local AI means running models on your machine for privacy, cost savings, and offline use. Open-source models (e.g., from Meta, Mistral AI) are pre-trained LLMs you download once. On macOS, without a dedicated NVIDIA GPU, you use CPU inference, which is slower for large models but feasible for smaller ones (e.g., 7B parameters). Apple M-series chips (M1+) enable GPU-like acceleration via Metal, improving speeds 2-5x without extra hardware. If no M-chip, pure CPU works but limit to quantized models (reduced precision for efficiency).

Popular Open-Source Models for Local Use:
From Ollama library and Hugging Face:

  • Llama 3.1 (Meta): 8B-405B params, general-purpose, multilingual.
  • Qwen 3 (Alibaba): 0.6B-235B, strong in reasoning/tools.
  • Mistral (Mistral AI): 7B, efficient for code/text.
  • Gemma 3 (Google): 270M-27B, lightweight for single-GPU/CPU.
  • Phi 3 (Microsoft): 3.8B-14B, high performance on small hardware.
  • Others: DeepSeek-V3.2 (685B, advanced but heavy), NVIDIA Nemotron (8B-32B, optimized).

These are distributed in GGUF/ONNX formats for local efficiency.

Section 2: Step-by-Step for Ollama (Local LLM Runner)

Ollama is a lightweight tool for running open-source LLMs locally, with a REST API perfect for JS integration. It supports CPU-only and auto-detects Metal on M-series macOS.

Beginner Steps: Installation and Basic Use

  1. Download the installer from ollama.com/download (macOS .pkg file).
  2. Run the installer; it sets up a background service.
  3. Open Terminal (via Spotlight) and verify: ollama --version.
  4. Pull a model: ollama pull llama3.1 (downloads ~4GB for the 8B version; use a smaller one like gemma3:1b for testing).
  5. Run interactively: ollama run llama3.1 – type prompts in terminal.
  6. Without GPU: Ollama defaults to CPU; on M-series, it uses Metal automatically (check Activity Monitor for GPU usage). To force pure CPU, set the num_gpu option to 0 (PARAMETER num_gpu 0 in a Modelfile, or options: { num_gpu: 0 } in an API request).
  7. Test a prompt: In terminal, ask "Explain JavaScript promises simply."

Intermediate Steps: Custom Models and Optimization

  1. List models: ollama list.
  2. Create a custom model: Make a Modelfile (a plain text file) with FROM llama3.1 and a custom system prompt (see the example after this list), then run ollama create mymodel -f Modelfile.
  3. Quantization for no-GPU: Most default Ollama tags are already 4-bit quantized; explicit tags like llama3.1:8b-instruct-q4_0 select a specific quantization level (faster on CPU, slightly lower quality).
  4. Run in background: ollama serve to start API server.
  5. Manage resources: Set OLLAMA_KEEP_ALIVE=5m env var to unload models after idle.
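
Step 2's Modelfile could look like the following minimal sketch (FROM, SYSTEM, and PARAMETER are standard Modelfile directives; the system prompt and temperature value are just illustrative):

```
FROM llama3.1
SYSTEM "You are a concise JavaScript tutor. Answer with short code examples."
PARAMETER temperature 0.7
```

Build and run it with ollama create mymodel -f Modelfile followed by ollama run mymodel.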

Advanced Steps: Integrating with JavaScript

  1. Start Ollama server: ollama serve.
  2. In a Node.js project: npm init -y (Node 18+ ships a built-in fetch, so no extra package is needed).
  3. Create index.js:

```js
// Calls Ollama's local REST API (requires `ollama serve` running on port 11434)
async function generateResponse(prompt) {
  const response = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', prompt, stream: false })
  });
  const data = await response.json();
  return data.response;
}

generateResponse('Write a JS function for Fibonacci').then(console.log);
```
  4. Run: node index.js.
  5. Streaming: Set stream: true and read the response as newline-delimited JSON (see the sketch after this list).
  6. Chat mode: Use /api/chat with a messages array for conversational context.
  7. Tools: Define functions in JSON for agent-like behavior (e.g., a weather API call).
  8. Multimodal: For models like llava, add base64-encoded images in requests.
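
Here's a minimal streaming sketch for step 5. Ollama emits one JSON object per line when stream: true; the buffering below is one way to handle chunks that split mid-line (Node 18+ built-in fetch assumed):

```js
// Streams tokens from Ollama as they are generated
async function streamResponse(prompt) {
  const response = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', prompt, stream: true })
  });

  const decoder = new TextDecoder();
  let buffer = '';
  for await (const chunk of response.body) {
    buffer += decoder.decode(chunk, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial JSON line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      const data = JSON.parse(line);
      if (data.response) process.stdout.write(data.response);
    }
  }
}

streamResponse('Explain JavaScript closures in one paragraph.');
```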

Section 3: Step-by-Step for Hugging Face Models Locally with Transformers.js

Transformers.js runs Hugging Face's open-source models directly in JS (browser/Node), using ONNX for local execution. Ideal for JS devs; no Python needed. On macOS without GPU, use CPU; WebGPU for M-series acceleration in browsers.

Beginner Steps: Installation and Basic Inference

  1. Create Node project: npm init -y.
  2. Install: npm install @huggingface/transformers.
  3. Import pipeline in index.js:

```js
import { pipeline } from '@huggingface/transformers';

async function main() {
  // The default model for the task is downloaded on first run, then cached
  const pipe = await pipeline('sentiment-analysis');
  const result = await pipe('I love AI!');
  console.log(result);
}

main();
```

  4. Run: node index.js (add "type": "module" to package.json so Node accepts the import syntax).
  5. Use a specific open-source model: pipeline('text-generation', 'Xenova/gpt2') — see the sketch after this list.
  6. Without GPU: Defaults to CPU (WASM/ONNX backend); test small models like 'Xenova/distilbert-base-uncased'.
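
A minimal text-generation sketch for step 5, using the small Xenova/gpt2 community ONNX checkpoint (max_new_tokens is a standard generation option; the prompt is arbitrary):

```js
import { pipeline } from '@huggingface/transformers';

// Small model that runs comfortably on CPU; downloaded and cached on first use
const generator = await pipeline('text-generation', 'Xenova/gpt2');
const out = await generator('JavaScript is', { max_new_tokens: 30 });
console.log(out[0].generated_text);
```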

Intermediate Steps: Optimization and Tasks

  1. Quantization: Add { dtype: 'q4' } to pipeline options for smaller/faster models (see the sketch after this list).
  2. GPU on macOS: In browsers with WebGPU enabled (Chrome, or Safari behind a flag), pass { device: 'webgpu' }.
  3. Vision/Audio: pipeline('image-classification', 'Xenova/vit-base-patch16-224').
  4. Custom models: Download from Hugging Face Hub, load locally via path.
  5. Browser setup: Use script tag <script src="https://cdn.jsdelivr.net/npm/@huggingface/transformers"></script>.
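
Combining steps 1 and 2, a hedged sketch of the pipeline options (whether a q4 build exists depends on the specific model repo; device: 'webgpu' only applies in browsers with WebGPU enabled):

```js
import { pipeline } from '@huggingface/transformers';

const pipe = await pipeline('text-generation', 'Xenova/gpt2', {
  dtype: 'q4',      // request a 4-bit quantized build if the repo provides one
  device: 'webgpu', // browser-only; omit for the default CPU/WASM backend
});
```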

Advanced Steps: Building Applications

  1. Private models: Add { token: 'hf_...' } (get from huggingface.co/settings/tokens).
  2. Convert models: Use Python's Optimum to export to ONNX, then load in JS.
  3. Integrate with frameworks: In React, use useEffect for async pipeline loading.
  4. Multimodal: pipeline('zero-shot-image-classification', 'Xenova/clip-vit-base-patch16').
  5. Server-side: Build an Express API wrapping pipelines for production (see the sketch below).
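
A minimal Express sketch for step 5 (assumes npm install express; the /classify route name and port are arbitrary choices):

```js
import express from 'express';
import { pipeline } from '@huggingface/transformers';

const app = express();
app.use(express.json());

// Load the model once at startup, not per request
const classifier = await pipeline('sentiment-analysis');

app.post('/classify', async (req, res) => {
  const result = await classifier(req.body.text);
  res.json(result);
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```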

Section 4: LangChain.js for Chaining AI Operations in JavaScript

LangChain.js is the JS port of LangChain, for composing prompts, models, and tools. Great for JS devs building apps.

Beginner Steps: Setup and Prompts

  1. Install: npm install @langchain/core @langchain/groq (or OpenAI/HF integrations).
  2. Basic chain (ESM, so add "type": "module" to package.json):

```js
import { ChatGroq } from '@langchain/groq';
import { PromptTemplate } from '@langchain/core/prompts';

const model = new ChatGroq({ apiKey: 'your-key', model: 'llama3-8b-8192' });
const prompt = PromptTemplate.fromTemplate('Tell a joke about {topic}.');

// pipe() composes the prompt and model into one runnable chain
const chain = prompt.pipe(model);
const response = await chain.invoke({ topic: 'JavaScript' });
console.log(response.content);
```
  3. Add an output parser: import StringOutputParser from @langchain/core/output_parsers (it ships with @langchain/core, no separate install) and pipe it onto the chain (see the sketch below).
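
Extending the chain above with StringOutputParser, which converts the model's message into a plain string (prompt and model are reused from the previous snippet):

```js
import { StringOutputParser } from '@langchain/core/output_parsers';

const chain = prompt.pipe(model).pipe(new StringOutputParser());
const text = await chain.invoke({ topic: 'JavaScript' });
console.log(text); // a plain string instead of a message object
```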

Intermediate Steps: Memory and Tools

  1. Conversation memory: Use BufferMemory to store chat history.
  2. Tools: Define JS functions, bind to model for agentic calls.
  3. Local integration: Use the Ollama/Hugging Face wrappers (e.g., ChatOllama from @langchain/ollama, or the older @langchain/community integrations) — see the sketch after this list.
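
A minimal sketch for step 3, pointing LangChain.js at a local Ollama server (ChatOllama defaults to http://localhost:11434, so ollama serve must be running):

```js
import { ChatOllama } from '@langchain/ollama';

const model = new ChatOllama({ model: 'llama3.1' });
const res = await model.invoke('Summarize the JS event loop in one sentence.');
console.log(res.content);
```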

Advanced Steps: Full Apps

  1. Agents: Create with createAgent for decision-making.
  2. RAG: Add vector stores (e.g., in-memory) for document search (see the sketch after this list).
  3. Frameworks: Integrate with Next.js for web apps.
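
A small in-memory RAG sketch for step 2, assuming the langchain package is installed and an Ollama embedding model such as nomic-embed-text has been pulled locally (the sample texts are placeholders):

```js
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { OllamaEmbeddings } from '@langchain/ollama';

// Embed a few documents into an in-memory vector store
const store = await MemoryVectorStore.fromTexts(
  ['LangGraph builds graph-based agents', 'Ollama serves local models over HTTP'],
  [{}, {}],
  new OllamaEmbeddings({ model: 'nomic-embed-text' })
);

// Retrieve the most similar document for a query
const docs = await store.similaritySearch('local model server', 1);
console.log(docs[0].pageContent);
```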

Section 5: LangGraph.js for Building AI Agents in JavaScript

LangGraph.js builds graph-based workflows for agents, extending LangChain.js.

Beginner Steps: Basic Graph

  1. Install: npm install @langchain/langgraph.
  2. Simple graph skeleton:

```js
import { StateGraph } from '@langchain/langgraph';
// Define a state schema, add nodes (LLM calls, tools) and edges, then compile
```

  3. Define nodes: Functions for LLM calls and tools — a fuller runnable sketch follows below.
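
Here's a fuller, runnable version of that skeleton, assuming a recent @langchain/langgraph (Annotation, START, and END are part of its public API; the node logic is a placeholder echo rather than a real LLM call):

```js
import { StateGraph, Annotation, START, END } from '@langchain/langgraph';

// The graph's shared state: an input and an output field
const State = Annotation.Root({
  input: Annotation(),
  output: Annotation(),
});

const app = new StateGraph(State)
  .addNode('respond', async (state) => ({ output: `You said: ${state.input}` }))
  .addEdge(START, 'respond')
  .addEdge('respond', END)
  .compile();

console.log(await app.invoke({ input: 'hello' }));
```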

Intermediate Steps: Agents with Tools

  1. Add LLM node: Use LangChain models.
  2. Human-in-loop: Pause for user input.
  3. Persistence: Save state with a checkpointer so runs can be resumed (see the sketch after this list).
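
A hedged persistence sketch for step 3, using the in-memory MemorySaver checkpointer that ships with @langchain/langgraph (thread_id identifies which conversation to resume):

```js
import { StateGraph, Annotation, START, END, MemorySaver } from '@langchain/langgraph';

const State = Annotation.Root({ input: Annotation(), output: Annotation() });

const app = new StateGraph(State)
  .addNode('respond', async (s) => ({ output: `Echo: ${s.input}` }))
  .addEdge(START, 'respond')
  .addEdge('respond', END)
  .compile({ checkpointer: new MemorySaver() }); // state is saved per thread

// thread_id ties separate invocations to the same persisted state
await app.invoke({ input: 'hi' }, { configurable: { thread_id: '1' } });
```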

Advanced Steps: Complex Workflows

  1. Custom graphs: Mix deterministic/agentic paths.
  2. Streaming: Real-time outputs as the graph runs (see the sketch after this list).
  3. Scale: Integrate with databases for memory.
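
Streaming builds directly on any compiled graph; assuming app is the compiled graph from the beginner sketch above, a minimal loop looks like this:

```js
// stream() yields one update per executed node instead of waiting for the end
for await (const update of await app.stream({ input: 'hi' })) {
  console.log(update);
}
```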

Section 6: Integrating OpenAI and Gemini APIs with Limits

As a cloud fallback (or complement) to local models, use the official SDKs from JS.

OpenAI API:

  • Install: npm install openai.
  • Usage: const openai = new OpenAI({ apiKey: 'sk-...' }); await openai.chat.completions.create({ model: 'gpt-5-mini', messages: [...] }); (expanded in the sketch below).
  • Limits/Pricing: Free tier is limited to low RPM; paid pricing runs from $0.25/1M input tokens (gpt-5-mini) to $21/1M (gpt-5.2 pro). Hard/soft billing limits can be set in the dashboard; the Batch API is 50% off.
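
A fuller version of that usage snippet (Node 18+ with the official openai SDK; the key is read from an environment variable rather than hard-coded):

```js
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const completion = await openai.chat.completions.create({
  model: 'gpt-5-mini',
  messages: [{ role: 'user', content: 'Explain JS closures in one sentence.' }],
});
console.log(completion.choices[0].message.content);
```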

Gemini API:

  • Install: npm install @google/generative-ai.
  • Usage: const genAI = new GoogleGenerativeAI('API_KEY'); const model = genAI.getGenerativeModel({ model: 'gemini-2.5-flash' }); await model.generateContent('Prompt'); (expanded in the sketch below).
  • Limits/Pricing: Free tier allows up to 1,500 RPD; paid pricing runs from $0.075/1M input tokens (Flash-Lite) to $2/1M (Pro). Context caching/storage costs extra; grounding tools cost $25–35 per 1K requests beyond the free allowance.
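
And the Gemini equivalent, expanded (official @google/generative-ai SDK; key read from an environment variable):

```js
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-2.5-flash' });

const result = await model.generateContent('Explain JS closures in one sentence.');
console.log(result.response.text());
```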

Tables for Quick Reference

Ollama vs. Hugging Face Comparison

| Aspect | Ollama | Hugging Face (Transformers.js) |
| --- | --- | --- |
| Installation | Download .pkg, terminal commands | npm install |
| GPU support (macOS) | Automatic Metal on M-series | WebGPU in browser |
| JS integration | HTTP API (fetch) | Direct in code |
| Models | GGUF, easy pull | ONNX, Hub download |
| Beginner ease | High (CLI-first) | Medium (code-based) |

API Pricing Summary (per 1M Tokens)

| Model/API | Input (Base) | Output (Base) | Free Tier Limits |
| --- | --- | --- | --- |
| OpenAI GPT-5 Mini | $0.25 | $2.00 | Low RPM, credit-based |
| Gemini 2.5 Flash | $0.30 | $2.50 | 1,500 RPD |
| OpenAI GPT-5.2 | $1.75 | $14.00 | N/A (paid only) |
| Gemini 2.5 Pro | $1.25 | $10.00 | 1,500 RPD |

This covers a complete path from setup to production-grade agents, ensuring you can experiment locally before scaling.

Thanks for reading! πŸ™Œ
Until next time, 🫑
Usman Awan (your friendly dev πŸš€)
