Samarth Funde

AI_Assisted_Devops_Day-3_Project-2_Gemini_LLM

Generate_Dockerfile_using_Ollama_Hosted(Gemini)_LLM project-2

Step A) Set up Ollama on your EC2 server

Step 1) Create an EC2 instance with enough storage and RAM. I have used a t3.medium instance with Ubuntu OS.

Step 2) Go to the ollama.com website, search for llama3, and select the version llama3.2:1b.

Step 3) On the model page, click the Download button, choose Linux, then copy the following commands and paste them into your instance:

a) Download and install Ollama for Linux:
curl -fsSL https://ollama.com/install.sh | sh
b) Start the Ollama service in the background: ollama serve &
c) Run the model: ollama run llama3.2:1b
Now you can type your query at the prompt, e.g. 'Create a Dockerfile based on a Java application'. A quick sanity check of the Ollama API follows below.
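Before moving on, you can confirm the model is actually serving requests by hitting Ollama's local REST API (it listens on port 11434 by default). The snippet below is a minimal sanity check using only the Python standard library, so nothing extra needs to be installed yet; the file name quick_check.py is just an example:

# quick_check.py -- example sanity check that Ollama is serving llama3.2:1b
# Assumes Ollama's default address http://localhost:11434
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2:1b",
    "prompt": "Say hello in one short sentence.",
    "stream": False,  # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's reply text

Run it with python3 quick_check.py; if you get a short reply back, Ollama is up and the model is loaded.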

............................................................................................................................................

Step B) Set up a Python virtual environment on the EC2 instance

Step 4) Create the virtual environment by copying and pasting the following commands (a quick check that the venv is active follows below):
sudo apt update
sudo apt install python3 python3-venv python3-pip -y
python3 -m venv venv
source venv/bin/activate # On Linux/MacOS
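As a quick check that the virtual environment is really active, you can compare the interpreter's prefixes: inside a venv, sys.prefix points at the venv directory instead of the system Python. A minimal sketch (the file name venv_check.py is just an example):

# venv_check.py -- example check that the interpreter runs inside the venv
import sys

in_venv = sys.prefix != sys.base_prefix
print("Running inside a virtual environment:", in_venv)
print("Interpreter prefix:", sys.prefix)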

....................................................................................................................................................

Step C) Generate an API key from Google AI Studio

Step 5) In Google AI Studio, find the API key button and generate an API key for Gemini. A sketch of reading the key from the environment follows below.
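Rather than pasting the key directly into the script (the sample code later uses a placeholder string), you can export it once in the shell with export GOOGLE_API_KEY=your-key and read it from the environment. A minimal sketch of that pattern, assuming the google-generativeai package from the next step is installed:

# example: read the Gemini API key from the environment instead of hardcoding it
import os
import google.generativeai as genai

api_key = os.getenv("GOOGLE_API_KEY")
if not api_key:
    raise SystemExit("GOOGLE_API_KEY is not set; export it before running the script.")

genai.configure(api_key=api_key)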

....................................................................................................................................................

Step D) Create the Python files to run the local LLM

Step 6) Create the requirements file; the script in the next step needs both google-generativeai and ollama:
echo -e "google-generativeai\nollama" > requirement.txt
Install the dependencies (a quick import check follows below):
pip3 install -r requirement.txt
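Before writing the main script, it can help to verify that both packages import cleanly inside the venv; a minimal check (the file name check_deps.py is just an example):

# check_deps.py -- example check that the installed dependencies import correctly
import google.generativeai as genai
import ollama

print("google.generativeai and ollama imported successfully")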

Step 7) Create the script with nano generate_dockerfile.py and paste in the following code:

import os
import google.generativeai as genai
import ollama

# Set your API key here
os.environ["GOOGLE_API_KEY"] = "XXXXXXXXXXXXXXXXXXX"

# Configure the Gemini model
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel('gemini-1.5-pro')

PROMPT = """
Generate an ideal Dockerfile for {language} with best practices.

Include:
- Base image
- Installing dependencies
- Setting WORKDIR
- Adding application source code
- Multi-stage build (builder + production)
- Command to run the application
"""

def generate_dockerfile(language):
    # Ask the local Ollama model to generate the Dockerfile
    response = ollama.chat(
        model='llama3.2:1b',
        messages=[{'role': 'user', 'content': PROMPT.format(language=language)}]
    )
    # The generated text is under the 'message' -> 'content' key
    return response['message']['content']

if __name__ == '__main__':
    language = input("Enter the programming language: ")
    dockerfile = generate_dockerfile(language)
    print("\nGenerated Dockerfile:\n")
    print(dockerfile)
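Note that the script configures the Gemini model but generates the Dockerfile with the local Ollama model only. If you also want to call Gemini, for example to review the Dockerfile that Ollama produced, a minimal sketch using the same google.generativeai setup could look like this (the review prompt and function name are just illustrations):

# example: optionally ask the configured Gemini model to review the generated Dockerfile
# Assumes GOOGLE_API_KEY is set and google-generativeai is installed, as above.
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel('gemini-1.5-pro')

def review_dockerfile(dockerfile_text):
    # Send the generated Dockerfile to Gemini and return its feedback as text
    response = model.generate_content(
        "Review this Dockerfile for best practices and suggest improvements:\n\n"
        + dockerfile_text
    )
    return response.text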

Step 8) Run the file with python3 generate_dockerfile.py. At the prompt 'Enter the programming language:', type the language you need (e.g. java, groovy, python, nodejs) and it will generate the Dockerfile.
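If you want to keep the generated Dockerfile instead of only printing it, you can write it to a file at the end of the script; a small sketch (the output file name is just an example):

# at the end of the __main__ block, after printing the result:
with open("Dockerfile.generated", "w") as f:
    f.write(dockerfile)
print("Saved to Dockerfile.generated")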
