Devoxx Genie Plugin: an Update

Stephan Janssen

When I invited Anton Arhipov from JetBrains to present their early Beta AI Assistant during the Devoxx Belgium 2023 keynote, I was eager to learn if they would support local models, as shown in the screenshot above.

After seven months without any related news, it seemed unlikely that this would happen. So, I decided to develop my own IDEA plugin to support as many local and even cloud-based LLMs as possible. "DevoxxGenie" was born ❤️

IntelliJ Marketplace
Of course, I conducted a market study and couldn't find any plugins that were fully developed in Java. Even GitHub Copilot, which doesn't allow you to select a local LLM, is primarily developed in Kotlin and native code. But more importantly, these plugins are often closed source.

I had already built up substantial LLM expertise by integrating LangChain4J into the CFP.DEV web app, as well as developing Devoxx Insights (using Python) in early 2023. More recently, I created RAG Genie, which allows you to debug your RAG steps using LangChain4J and Spring Boot.

Swing Development

I had never developed an IDEA plugin before, so I started studying some existing plugins to understand how they work. I noticed that some use a local web server, allowing them to more easily output the LLM response in HTML and stream it to the plugin.

Trying to understand how the IDEA plugins work

I wanted to start with a simple input prompt and focus on using the "good-old" JEditorPane Swing component, which supports basic HTML rendering.

JEditorPane rendering HTML
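Under the hood this doesn't need much. Here is a minimal, self-contained sketch (not the plugin's actual code) of a JEditorPane rendering an HTML fragment:

```java
import javax.swing.*;

// Minimal sketch: render an HTML fragment in a JEditorPane,
// the same Swing component used for the chat output.
public class HtmlPaneDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JEditorPane pane = new JEditorPane();
            pane.setContentType("text/html"); // switch to the built-in HTML renderer
            pane.setEditable(false);
            pane.setText("<html><body><h3>LLM response</h3>" +
                    "<pre><code>System.out.println(\"Hello\");</code></pre></body></html>");

            JFrame frame = new JFrame("JEditorPane HTML demo");
            frame.add(new JScrollPane(pane));
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```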

By asking the LLM to respond in Markdown, I could parse the Markdown so that each document node could be rendered to HTML while adding extra styling and UI components. For example, code blocks include an easy-to-use "copy-to-clipboard" button or an "insert code" button (as shown in the screenshot above).
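To illustrate the idea, here is a sketch using the commonmark-java library (not necessarily the exact parser the plugin ships with): parse the Markdown, visit each node so code blocks can be intercepted for extra UI, and render the document to HTML:

```java
import org.commonmark.node.AbstractVisitor;
import org.commonmark.node.FencedCodeBlock;
import org.commonmark.node.Node;
import org.commonmark.parser.Parser;
import org.commonmark.renderer.html.HtmlRenderer;

// Sketch: parse the LLM's Markdown reply and walk the document nodes,
// so each fenced code block can get extra UI (copy / insert buttons).
public class MarkdownToHtml {
    public static void main(String[] args) {
        String markdown = "Here is a fix:\n\n```java\nint x = 42;\n```\n";

        Parser parser = Parser.builder().build();
        Node document = parser.parse(markdown);

        // Visit every code block; a real plugin would wrap these in a panel with buttons.
        document.accept(new AbstractVisitor() {
            @Override
            public void visit(FencedCodeBlock codeBlock) {
                System.out.println("Code block found: " + codeBlock.getLiteral());
            }
        });

        // Render the document to HTML for the JEditorPane.
        String html = HtmlRenderer.builder().build().render(document);
        System.out.println(html);
    }
}
```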

Focus on Local LLMs

I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models.
Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
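As a rough illustration of how simple that REST interface makes things, here is a hedged sketch using LangChain4J's Ollama integration (pre-1.0 API). It assumes Ollama is running locally on its default port, and "llama3" stands in for whatever model you have pulled:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Sketch, assuming Ollama runs locally and a "llama3" model has been pulled.
public class LocalLlmDemo {
    public static void main(String[] args) {
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434") // Ollama's default REST endpoint
                .modelName("llama3")               // any locally pulled model works
                .build();

        System.out.println(model.generate("Explain Java records in one sentence."));
    }
}
```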

Cloud LLMs, why not?

Because I use ChatGPT on a daily basis and occasionally experiment with Anthropic Claude, I quickly decided to also support LLM cloud providers. A couple of weeks ago, Google released Gemini with API keys for Europe, so I promptly integrated those too. With support for OpenAI, Anthropic, Groq, Mistral, DeepInfra, and Gemini, I believe I have covered all the major players in the field.
Please let me know if I'm missing any!

Screenshot of the DevoxxGenie LLM settings page

Chat Memory

The size of the chat memory can now be configured in v0.1.14 on the Settings page. This makes sense when you use an LLM with a large context window, for example Gemini with 1M tokens.

Chat Memory
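A minimal sketch of what a configurable sliding-window memory looks like with LangChain4J; the configuredSize variable is my own placeholder for whatever the Settings page provides:

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;

// Sketch: a sliding-window chat memory sized by a (hypothetical)
// value read from the plugin's Settings page.
public class ChatMemoryDemo {
    public static void main(String[] args) {
        int configuredSize = 10; // placeholder for the Settings value
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(configuredSize);

        memory.add(UserMessage.from("Review this Java class..."));
        System.out.println(memory.messages().size()); // 1; oldest messages drop off past 10
    }
}
```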

The beauty of chat memory supporting different LLM providers is that with a single prompt, you can ask one model to review some code, then switch to another model to review the previous model's answer 🤩

Multi-LLM Collaborative Review

The end result is a "Multi-LLM Collaborative Review" process, leveraging multiple large language models to sequentially review and evaluate each other's responses, facilitating a more comprehensive and nuanced analysis.

Multi-LLM Collaborative Review
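Conceptually it boils down to two providers sharing one chat memory, so the second model sees the first model's answer in the history. An illustrative sketch (not the plugin's actual code), again using LangChain4J types:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;

// Illustrative sketch: two providers share one chat memory,
// so the second model can review the first model's answer.
public class CollaborativeReview {

    static String review(ChatLanguageModel first, ChatLanguageModel second, String code) {
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(20);

        // Ask the first model for a code review and remember its answer.
        memory.add(UserMessage.from("How can I improve this Java class?\n" + code));
        AiMessage firstAnswer = first.generate(memory.messages()).content();
        memory.add(firstAnswer);

        // The second model sees the full history, including the first answer.
        memory.add(UserMessage.from("Review the previous answer. Is it correct and complete?"));
        return second.generate(memory.messages()).content().text();
    }
}
```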

The results are really fascinating. For example, I asked Mistral how I could improve a certain Java class and then had OpenAI (GPT-4o) review Mistral's response!

Mixtral 8x7B using Groq

Switched to OpenAI GPT-4o and asked if it could review the Mistral response

GPT-4o using OpenAI

This all results in better code (refactoring) suggestions 🚀

Streaming Responses

The latest version of DevoxxGenie (v0.1.14) now also supports the option to stream the results directly to the plugin, enhancing real-time interaction and responsiveness.

It's still a beta feature because I need to find a way to add "Copy to Clipboard" or "Insert into Code" buttons before each code block starts. I do accept PRs, so if you know how to make this happen, some community ❤️ would be very welcome.
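For the curious, token-by-token streaming with LangChain4J (pre-1.0 API) looks roughly like the sketch below. It also shows why injecting buttons is tricky: a code fence can start in the middle of the token stream.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaStreamingChatModel;
import dev.langchain4j.model.output.Response;

// Sketch: stream tokens as they arrive. Appending raw tokens is easy;
// spotting the start of a code block mid-stream (to insert a button) is the hard part.
public class StreamingDemo {
    public static void main(String[] args) {
        StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
                .baseUrl("http://localhost:11434") // assumes a local Ollama instance
                .modelName("llama3")
                .build();

        model.generate("Write a Java hello world.", new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token); // append each token to the chat panel as it arrives
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println("\n[done]");
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```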

Program Structure Interface (PSI) Driven Context Prompt

Another new feature I developed for v0.1.14 is support for "smart(er) prompt context" using Program Structure Interface (PSI). PSI is the layer in the IntelliJ Platform responsible for parsing files and creating the syntactic and semantic code model of a project.

PSI allows me to populate the prompt with more information about a class without the user having to add the extra info. It's similar to the Abstract Syntax Tree (AST) in Java, but PSI has extra knowledge about the project structure, externally used libraries, search features, and much more.

AST Settings

As a result, the PSIAnalyzerService class (which is Java-focused) can automatically inject more code details into the chat prompt.
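To give a flavour of what that injection could look like, here is a hypothetical sketch against the IntelliJ PSI API. The describe helper is my own invention, not the actual PSIAnalyzerService implementation, and it only runs inside the IntelliJ Platform (e.g. from a plugin action):

```java
import com.intellij.psi.PsiClass;
import com.intellij.psi.PsiField;
import com.intellij.psi.PsiMethod;

// Hypothetical sketch (NOT the actual PSIAnalyzerService): summarize a
// class's fields and method names via PSI so the summary can be appended
// to the chat prompt as extra context.
public class PsiContextSketch {

    static String describe(PsiClass psiClass) {
        StringBuilder context = new StringBuilder("Class: ")
                .append(psiClass.getQualifiedName()).append('\n');
        for (PsiField field : psiClass.getFields()) {
            context.append("  field: ")
                   .append(field.getType().getPresentableText())
                   .append(' ').append(field.getName()).append('\n');
        }
        for (PsiMethod method : psiClass.getMethods()) {
            context.append("  method: ").append(method.getName()).append('\n');
        }
        return context.toString();
    }
}
```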

PSI-driven context prompts are another way to introduce some basic Retrieval Augmented Generation (RAG) into the equation 💪🏻

What's next?

Auto completion??

I'm not a big fan of TAB-based auto-completion, where the editor is constantly bombarded with code suggestions which often don't make sense. Also, because the plugin is LLM-agnostic, it would be much harder to implement due to the lack of speed and quality of local LLMs. However, it could make sense to support this with the currently smarter cloud-based LLMs.

RAG support?

Embedding your IDEA project files using a RAG service could make sense. But this would probably need to happen outside of the plugin because of the storage and background processes needed to make it work. I've noticed that existing plugins use an external Docker image which includes some kind of REST service. Suggestions are welcome.

"JIRA" support?

Wouldn't it be great if you could paste a (JIRA) issue and the plugin figured out how to fix/resolve it? A bit like what Devin promised to do...

Compile & Run Unit tests?

When you ask the plugin to write a unit test, it could also compile the suggested code and even run it (using a REPL?). That would be an interesting R&D exercise IMHO.

Introduce Agents

All of the above will most likely result in introducing smart(er) agents which do some extra LLM magic using shell scripts and/or Docker services...

Community Support

As of this writing, the plugin has already been downloaded 1,127 times. The actual number is likely higher because the Devoxx Genie GitHub project also publishes plugin builds in the releases, allowing users to manually install them in their IDEA.

IntelliJ Marketplace Downloads

I'm hoping the project will gain more traction and that the developer community will step up to help with new features or even bug fixes. This was one of the main reasons for open-sourcing the project.
"We โค๏ธ Open Source" ๐Ÿ˜œ

We Love Open Source
