Generate a Dockerfile using an Ollama-Hosted (DeepSeek) LLM (Project 3)
Required:
Ollama → the runtime that manages and runs the model.
LLaMA / DeepSeek model → the actual AI model files.
Not required:
Python / virtual environment → only needed if you want to run Python scripts that call the model programmatically (like a Dockerfile generator script).
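If you do go the programmatic route, here is a minimal sketch of such a Dockerfile generator script. It assumes a local Ollama server on the default port (11434) with a model already pulled; the model tag, prompt wording, and function names are my own choices, not from a fixed API contract beyond Ollama's /api/generate endpoint.

```python
import json
import urllib.request

# Assumes the Ollama daemon is running locally on its default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate_dockerfile(app_description: str, model: str = "llama3.2:1b") -> str:
    """Ask the local model for a Dockerfile and return its text."""
    prompt = (f"Create a Dockerfile for: {app_description}. "
              "Reply with only the Dockerfile, no explanation.")
    body = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(generate_dockerfile("a Java application built with Maven"))
```

The same script works with the DeepSeek model later in this post; just pass `model="deepseek-r1:1.5b"`.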
Step A) Set up Ollama on your EC2 server
Step 1) Create an EC2 instance with enough storage and RAM (I used t3.medium) and Ubuntu as the OS.
Step 2) Go to the ollama.com website, search for llama3, and select the version llama3.2:1b.
Step 3) On the website you will see a Download button; click it, choose Linux, then copy the following commands and paste them into your instance:
a) Download & install Ollama for Linux:
curl -fsSL https://ollama.com/install.sh | sh
b) Start the Ollama service (the installer usually registers it as a systemd service, so it may already be running): ollama serve &
c) Run the model: ollama run llama3.2:1b
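Before jumping into the interactive prompt, it can help to sanity-check the install. This is a sketch under the assumption that Ollama is on its default port (11434); the guards just print a message instead of aborting if the daemon is not up yet.

```shell
# Quick post-install checks (assumes the default Ollama port 11434).
OLLAMA_PORT="${OLLAMA_PORT:-11434}"
BASE_URL="http://localhost:${OLLAMA_PORT}"

# 1) Is the API answering?
curl -fsS "${BASE_URL}/api/version" 2>/dev/null \
  || echo "Ollama API not reachable on ${BASE_URL}"

# 2) Which models have been pulled so far?
ollama list 2>/dev/null \
  || echo "ollama CLI not found or daemon not running"

# 3) One-shot prompt without entering the interactive REPL:
# ollama run llama3.2:1b "Create a Dockerfile for a Java application"
```

The one-shot form in step 3 is handy in scripts and CI, where the interactive prompt would hang.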
At the interactive prompt you can now type your query, e.g. 'Create a Dockerfile based on a Java application'.
....................................................................................................................................................
Step B) Go to the ollama.com website, find the 'DeepSeek' model, and choose the version 'deepseek-r1:1.5b'.
Step 4) Run this command: ollama run deepseek-r1:1.5b
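You can also query the DeepSeek model non-interactively over Ollama's HTTP API, which is how a script would use it. A minimal sketch, assuming the daemon is running locally (the fallback echo keeps it from failing hard when it is not); the prompt text is just an example:

```shell
# Pull the exact tag first so the run doesn't download on first use
# (harmless if the model is already present).
ollama pull deepseek-r1:1.5b 2>/dev/null \
  || echo "could not pull (is the Ollama daemon running?)"

# One-shot query against the DeepSeek model via the HTTP API:
PROMPT="Create a Dockerfile for a Node.js application"
curl -s http://localhost:11434/api/generate \
  -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"${PROMPT}\", \"stream\": false}" \
  || echo "API not reachable"
```

Note the tag: `ollama run deepseek-r1` would resolve to the default tag, so spell out `deepseek-r1:1.5b` to get the same small model you selected on the website.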
....................................................................................................................................................
If you want a visual interface for your AI model, you can download Chatbox and use it with the local model.
