Shrijith Venkatramana

Chat With Your DB via DBChat & Gemini (Part 4)

Hello, I'm Shrijith. I'm building git-lrc, an AI code reviewer that runs on every commit. It's free, unlimited, and source-available on GitHub. Star us to help other devs discover the project, give it a try, and share your feedback so we can improve it.

In this tutorial series, I'm building DBChat - a simple tool for exploring and evolving databases through AI chat.

See previous posts to get more context:

  1. Building DBChat - Explore and Evolve Your DB with Simple Chat (Part 1)
  2. DBChat: Getting a Toy REPL Going in Golang (Part 2)
  3. DBChat Part 3 - Configure, Connect & Dump Databases

Crafting A Working LLM Prompt To Generate SQL

In the previous posts we got a shell running, connected to a database, and extracted the schema context for further use.

Now the problem can be split into three subtasks:

  1. Take in arbitrary user requests
  2. Get the schema definition
  3. Combine (1) and (2) into a coherent prompt

To handle arbitrary user requests, we let the "default" case in our command handling do the work. Essentially:

  1. By default, we first check whether the user's input is a known command (or a command variation)
  2. If not, we treat it as a genuine user query

So now the default handler looks like this:

    default:
        if h.queryHandler == nil {
            return "Query handler not available. Check Gemini API key configuration."
        }
        return h.queryHandler.HandleQuery(h.db, line)

And in dbchat/cmd/dbchat/query we build up a prompt like this:

const promptTemplate = `Consider yourself an SQL and databases expert. Your task is to carefully look at a user's request in the context of a database schema, and provide a structured answer to the request.

The output structure can have two components of a valid JSON like so:


{
    "suggested_queries": [{"sql": "query 1", "explanation": ...}, {"sql": "query 2", "explanation": ...} ...],
    "explanation": "explanation"
}


At least one of the components must be present in the answer, and both components may be present.

suggested_queries must be in the expected order of execution, and each suggested SQL statement must come with an explanation.

You must follow the above instructions thoroughly. The next sections will follow up with the user's request or query, and the database schema/context:

User Query:

{{.Query}}

Database Schema/Context:

{{.Schema}}
`
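For reference, a Go template like this can be rendered with the standard library's text/template package. The sketch below uses a shortened version of the prompt above; promptData and buildPrompt are illustrative names, not necessarily what DBChat uses:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Shortened version of the prompt template, keeping only the
// {{.Query}} and {{.Schema}} placeholders.
const promptTemplate = `User Query:

{{.Query}}

Database Schema/Context:

{{.Schema}}
`

// promptData carries the values substituted into the template.
type promptData struct {
	Query  string
	Schema string
}

// buildPrompt parses the template and fills in the user query and schema.
func buildPrompt(query, schema string) (string, error) {
	tmpl, err := template.New("prompt").Parse(promptTemplate)
	if err != nil {
		return "", fmt.Errorf("parsing prompt template: %w", err)
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, promptData{Query: query, Schema: schema}); err != nil {
		return "", fmt.Errorf("executing prompt template: %w", err)
	}
	return buf.String(), nil
}

func main() {
	p, err := buildPrompt("list all users", "CREATE TABLE users (id INT);")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```

Parsing the template once at startup (e.g. with template.Must) would avoid re-parsing on every query, but the per-call version keeps the sketch simple.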

Learning to Interact with Gemini (with TOML configuration support)

The next step is to send the prompt to Gemini. Head over to Google's AI Studio and get a free API key.

Then, starting mostly from Google's generated sample code, we create some basic LLM functions:

func (g *GeminiClient) GetResponse(prompt string) (string, error) {
    ctx := context.Background()

    // Add explicit instruction for JSON format
    prompt = prompt + "\nPlease provide your response in valid JSON format only, without any additional text or markdown formatting."

    resp, err := g.session.SendMessage(ctx, genai.Text(prompt))
    if err != nil {
        return "", fmt.Errorf("error sending message to Gemini: %v", err)
    }

    var response string
    for _, part := range resp.Candidates[0].Content.Parts {
        response += fmt.Sprintf("%v", part)
    }

    // Clean up the response
    cleanedResponse := cleanJSONResponse(response)

    return cleanedResponse, nil
}
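cleanJSONResponse is called above but not shown. A plausible minimal version, assuming the common failure mode where the model wraps its JSON in a markdown fence despite being told not to (this is a guess at the behavior, not DBChat's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// cleanJSONResponse strips a surrounding ```json ... ``` markdown fence
// (if present) and trims whitespace, leaving bare JSON for parsing.
func cleanJSONResponse(resp string) string {
	s := strings.TrimSpace(resp)
	s = strings.TrimPrefix(s, "```json")
	s = strings.TrimPrefix(s, "```")
	s = strings.TrimSuffix(s, "```")
	return strings.TrimSpace(s)
}

func main() {
	raw := "```json\n{\"explanation\": \"two tables found\"}\n```"
	fmt.Println(cleanJSONResponse(raw))
}
```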

I updated the configuration format to include an llm section with a gemini_key value:

# DBChat Sample Configuration File
# Copy this file to ~/.dbchat.toml and modify as needed

[connections]
# Format: name = "connection_string"
local = "postgresql://postgres:postgres@localhost:5432/postgres"
liveapi = "postgresql://user:pwd@ip:5432/db_name" 

[llm]
gemini_key = "the_key"

Intermediate demo

At this point, when we submit a user query, we should get some suggested SQL to try along with some explanations. Let's see how that works:

DBChat Intermediate Demo

Executing Suggested Queries with a New Command: exec <n>

As we saw in the previous section, Gemini now returns an array of queries we can execute.

But executing arbitrary SQL is dangerous when a database holds precious data.

So I've introduced a command, exec <n>: given the list of suggestions, you can run any one of them by its index.
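The prompt's JSON structure maps naturally onto Go structs, which is roughly what exec <n> needs in order to pick the n-th suggestion. A sketch under that assumption (type and function names here are mine, not necessarily DBChat's):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SuggestedQuery mirrors one entry of the "suggested_queries" array
// in the prompt's output schema.
type SuggestedQuery struct {
	SQL         string `json:"sql"`
	Explanation string `json:"explanation"`
}

// LLMResponse mirrors the full JSON object the prompt asks Gemini for.
type LLMResponse struct {
	SuggestedQueries []SuggestedQuery `json:"suggested_queries"`
	Explanation      string           `json:"explanation"`
}

// pickSuggestion unmarshals the LLM's JSON answer and returns the SQL
// of the n-th (1-based) suggestion, matching the exec <n> convention.
func pickSuggestion(raw string, n int) (string, error) {
	var resp LLMResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		return "", fmt.Errorf("invalid LLM JSON: %w", err)
	}
	if n < 1 || n > len(resp.SuggestedQueries) {
		return "", fmt.Errorf("exec %d: only %d suggestions available", n, len(resp.SuggestedQueries))
	}
	return resp.SuggestedQueries[n-1].SQL, nil
}

func main() {
	raw := `{"suggested_queries": [{"sql": "SELECT 1", "explanation": "smoke test"}]}`
	sql, err := pickSuggestion(raw, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(sql)
}
```

Bounds-checking the index before touching the slice is what keeps a typo like exec 9 from panicking the REPL.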

We use the tablewriter library to beautify the results.

The code looks something like this:

func ExecuteQuery(db *sql.DB, query string) (*QueryResult, error) {
    rows, err := db.Query(query)
    if err != nil {
        return nil, fmt.Errorf("error executing query: %v", err)
    }
    defer rows.Close()

    // Get column names
    columns, err := rows.Columns()
    if err != nil {
        return nil, fmt.Errorf("error getting columns: %v", err)
    }

    // Prepare result container
    result := &QueryResult{
        Columns: columns,
        Rows:    make([][]string, 0),
    }

    // Prepare value containers
    values := make([]interface{}, len(columns))
    valuePtrs := make([]interface{}, len(columns))
    for i := range columns {
        valuePtrs[i] = &values[i]
    }

    // Fetch rows
    for rows.Next() {
        err := rows.Scan(valuePtrs...)
        if err != nil {
            return nil, fmt.Errorf("error scanning row: %v", err)
        }

        // Convert values to strings
        rowStrings := make([]string, len(columns))
        for i, val := range values {
            if val == nil {
                rowStrings[i] = "NULL"
            } else {
                rowStrings[i] = fmt.Sprintf("%v", val)
            }
        }
        result.Rows = append(result.Rows, rowStrings)
    }

    if err = rows.Err(); err != nil {
        return nil, fmt.Errorf("error iterating rows: %v", err)
    }

    return result, nil
}

The Whole Experience - A Small Demo

With that - now we have all the components of a basic interaction with DBChat. You can:

  1. Connect to a database
  2. Make a query/request in plain English
  3. Let Gemini suggest SQL to try
  4. Pick and execute any of the suggested queries
  5. See results in a nicely structured table

Take a look at the whole interaction in the following demo:

DBChat Full Demo

Next Steps

Now that we have a prototype working for DBChat end to end, many new exciting opportunities open up.

In part 1, our goal was to build a VSCode extension, which requires good LSP support.

But it may also be a good idea to set up a build pipeline, get the team at Hexmos to try this out first, and collect some feedback.

I believe some prioritization work is in order before I can focus on execution tasks. We will explore how to take DBChat forward in the next post.

git-lrc

AI agents write code fast. They also silently remove logic, change behavior, and introduce bugs -- without telling you. You often find out in production.

git-lrc fixes this. It hooks into git commit and reviews every diff before it lands. 60-second setup. Completely free.

Any feedback or contributors are welcome! It's online, source-available, and ready for anyone to use.

⭐ Star it on GitHub: HexmosTech/git-lrc