DEV Community

# ollama

## Posts

- Local AI WebAPI with Semantic Kernel and Ollama (2 min read)
- How to Run DeepSeek R1 Locally on Your Android Device (126 reactions · 14 comments · 2 min read)
- An underwhelming story about trying to run DeepSeek R1 on AWS Free Tier (8 reactions · 2 min read)
- How to Create a Node.js Proxy Server for Hosting the DeepSeek-R1 7B Model (7 reactions · 6 min read)
- Running Out of Space? Move Your Ollama Models to a Different Drive 🚀 (1 min read)
- Working with LLMs in .NET using Microsoft.Extensions.AI (1 reaction · 6 min read)
- Extending Semantic Kernel: Creating Plugins for Dynamic Queries (2 reactions · 8 min read)
- Semantic Kernel: Build an API for Text Generation with Ollama and Aspire (2 reactions · 8 min read)
- Building an Ollama-Powered GitHub Copilot Extension (23 reactions · 5 min read)
- Local AI apps with C#, Semantic Kernel and Ollama (2 min read)
- Step-by-Step Guide: Write Your First AI Storyteller with Ollama (llama3.2) and Semantic Kernel in C# (7 reactions · 2 comments · 5 min read)
- Run LLMs Locally with Ollama & Semantic Kernel in .NET: A Quick Start (6 reactions · 6 min read)
- How to Set Up a Local Ubuntu Server to Host Ollama Models with a WebUI (6 reactions · 1 comment · 4 min read)
- Ollama 0.5 Is Here: Generate Structured Outputs (1 reaction · 3 min read)
- Building AI-Powered Apps with SvelteKit: Managing HTTP Streams from Ollama Server (5 reactions · 6 min read)
- Run Llama 3 Locally (2 min read)
- Building 5 AI Agents with phidata and Ollama (27 reactions · 1 comment · 6 min read)
- Run Ollama on Intel Arc GPU (IPEX) (21 reactions · 2 comments · 5 min read)
- Quick tip: Running OpenAI's Swarm locally using Ollama (2 min read)
- Langchain4J musings (12 reactions · 8 min read)
- How to deploy SmolLM2 1.7B on a Virtual Machine in the Cloud with Ollama? (11 reactions · 6 min read)
- Ollama - Custom Model - llama3.2 (21 reactions · 3 comments · 4 min read)
- Coding Assistants and Artificial Intelligence for the Rest of Us (1 min read)
- Using a Locally-Installed LLM to Fill in Client Requirement Gaps (1 reaction · 6 min read)
- Consuming HTTP Streams in PHP with Symfony HTTP Client and Ollama API (11 reactions · 2 comments · 3 min read)