Kailash Sankar
Quick guide to setting up DeepSeek (containers in Portainer)

I'm running this setup on a mini PC with an Intel N100 (4 cores, 6W TDP) and 16GB of RAM, using Portainer to set up the relevant containers. You can do the same directly with Docker as well.

This guide will set up two applications: Open WebUI (the chat front end) and Ollama (the local model server).

On Portainer, add a new container for Open WebUI. The image is available here. Map container port 8080 to your desired host port.
Open Web UI setup
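If you'd rather skip Portainer, a roughly equivalent `docker run` looks like this. It's a minimal sketch: the host port 3000, the container name, and the named volume are my choices, so adjust them to your setup.

```bash
# Open WebUI listens on 8080 inside the container; map it to any free host port
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```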

The startup takes a few minutes; you can follow the progress in the container logs. Once the server has started, open your localhost or server address with the mapped port in a browser and set up your account.

Next, deploy the Ollama container. The official image is available on Docker Hub.
Ollama setup
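Again, a plain Docker sketch of the same step, assuming you keep the default port and use a named volume to persist downloaded models:

```bash
# Ollama serves its API on 11434; the volume keeps pulled models across restarts
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  --restart always \
  ollama/ollama
```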

Once the container is running, visit the URL to see a confirmation message (the default port is 11434).
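A quick way to verify from a terminal, assuming the default port mapping:

```bash
# The root endpoint replies with a plain "Ollama is running" message
curl http://localhost:11434

# Lists installed models; empty until you pull one
curl http://localhost:11434/api/tags
```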

Now log back in to Open WebUI and check whether Ollama is connected; configure the host and port if required and test the connection.
Configure Ollama
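If the connection test fails, it is usually because Open WebUI, running inside its own container, cannot reach Ollama on localhost. Either point the Ollama base URL at your host's LAN IP (e.g. http://192.168.1.50:11434, an illustrative address), or put both containers on a shared Docker network and use the container name. A sketch of the second option, reusing the container names from above and an arbitrary network name:

```bash
docker network create llm-net
docker network connect llm-net ollama
docker network connect llm-net open-webui
# Then set the Ollama base URL in Open WebUI to http://ollama:11434
```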

Select the download icon to manage Ollama and search for the desired DeepSeek model in the Ollama library. Keep an eye on the size of the model.
In the manage input field, enter the model name:tag, click the download button, and wait.
Download Model
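You can also pull a model from the command line instead of the UI. This assumes the Ollama container is named `ollama` and uses one of the distilled DeepSeek-R1 tags as an example; pick whichever tag fits your hardware.

```bash
# Pull a distilled DeepSeek-R1 model (check its size on the library page first)
docker exec -it ollama ollama pull deepseek-r1:7b

# Confirm the model is installed
docker exec -it ollama ollama list
```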

Once the download is complete, go back to the home screen, select a model from the top-left dropdown, and start chatting. You can download and keep multiple models.
Select Model
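The model is also reachable through Ollama's HTTP API, which is handy for scripting. A quick test with the generate endpoint, using the example tag pulled above:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```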

My idle CPU usage is around 5% with a power draw of 12W, and those idle stats did not change after the setup. During a chat, being a low-end CPU, usage spikes to 100% and power consumption goes up to about 30W. You need a system with a good GPU to get fast responses, but it works, and it thinks.
Chat

A distilled model is recommended for low-spec systems; you can search by tag on the Ollama model page.

Recommended system requirements can be found here.
