Thomas Scharke

JetBrains' AI Assistant

For a few days now, I have had the pleasure of working (and playing 😉) with JetBrains' AI Assistant 💫

And I'd like to share my initial impressions and experiences with you.

Participation

Firstly, JetBrains' AI Assistant is currently in the Beta phase. To use the AI Assistant, you need a JetBrains account and must sign up for the waiting list.

I was fortunate 🍀 and got approved, so now I can participate in the Beta phase of the AI Assistant.

To understand the registration process, how the AI Assistant looks within an IntelliJ product, its functionalities, and how it's applied within the product, you can read JetBrains' detailed and excellent documentation.

Usage

I mainly use the AI Assistant in my editor and thus directly on the code. This works through the so-called Context Actions: with a keyboard shortcut at the corresponding code location, I can invoke these Context Actions. Among them is the new entry AI Actions, which offers several sub-actions.

I'd like to delve into two of these AI Actions, Write documentation and Find problems in code, and share my impressions with you, as I've used these features extensively.

Note

There is also the option to access Context Actions through a light bulb 💡 using a trackpad/mouse. However, while programming, I prefer not to switch my input method. Therefore, I exclusively use keyboard shortcuts 😉.

AI Actions

Within Context Actions, I can select AI Actions and choose the sub-action Write documentation, which I'll now discuss.

Write documentation

Let's start with a spoiler, and I'll keep it short - I'm excited! 🥳

Please note that I'm in the context of my code, and the AI Assistant unfolds its strength here. If I use this AI Action on variables, functions and Functional-Components (React), the AI Assistant generates a comment or a complete and valid JSDoc "inline" - that is, above or at the variable, function, or Functional-Component. This includes descriptions of arguments/parameters, return values, and correct typing.
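To make this concrete, here is a minimal sketch of my own (a made-up TypeScript function, not verbatim assistant output) showing the kind of JSDoc that Write documentation places above a function:

```typescript
// Hypothetical example: the function below is mine; the JSDoc block is the
// kind of comment the AI Assistant generates above it.

/**
 * Calculates the gross price for a given net price and tax rate.
 *
 * @param netPrice - The net price of the item.
 * @param taxRate - The tax rate as a decimal fraction, e.g. 0.19 for 19 %.
 * @returns The gross price, i.e. the net price plus tax.
 */
export function calculateGrossPrice(netPrice: number, taxRate: number): number {
  return netPrice * (1 + taxRate);
}
```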

It's fascinating and saves me a lot of typing. 😎

I won't hide that it helps if variables, functions, etc. already have good, descriptive names. Using TypeScript also helps with more concrete typing.

What do I appreciate about it?

That with this AI Action, the AI Assistant truly lives up to its name by enriching the code with comments and complete JSDoc. The initial results are already fitting. The AI Assistant remains prosaic, technical, and almost beautifully concise about what the code is and what it does. I can always adjust and change the comments and JSDoc myself. Occasionally, I've done that to loosen up the sobriety a bit 😉, but that's my personal style.

Caveat

When "generating" documentation for Functional-Components, I've experienced a few times that I triggered the AI Action, but nothing happened ๐Ÿคทโ€ I'll attribute that to the Beta phase. ๐Ÿ˜ƒ

It also happened that I had a different expectation 🤔. For example, I built a Functional-Component that only returns JSX and contains a kind of instruction in text form within <p> tags. Then I used this AI Action and hoped that the AI Assistant would provide me with fancy documentation. I had no idea how and what to write myself, but "Hey, that's what I have the AI Assistant for." 🤣 And what was the result? Of course, a valid JSDoc, no question. But the AI Assistant only took the name of the Functional-Component and summarised (once again) what I had written within the <p> tags 😮.
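Roughly, the situation looked like this (a simplified re-creation of my own, not the actual component or the assistant's literal output): the component only returns JSX with instruction text, and the generated JSDoc merely restates it:

```tsx
import React from "react";

// Hypothetical re-creation: the JSDoc below is the kind of summary the
// AI Assistant generated - it only restates the <p> content.

/**
 * SetupInstructions renders a short set of setup instructions, telling the
 * user to install the plugin and restart the IDE.
 */
export function SetupInstructions(): React.ReactElement {
  return (
    <section>
      <p>Install the plugin from the marketplace.</p>
      <p>Restart the IDE so that the changes take effect.</p>
    </section>
  );
}
```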

Mhhh… that wasn't what I expected. 😢

Conclusion

Yet I'm excited. Because I had no idea why and how I wanted to enrich the Functional-Component with documentation. And I also had no idea how I would describe it myself. I was probably waiting a bit for a miracle 💫? 🤔🤣

But the AI Assistant neutrally described and summarised what it found, no more and no less.

Honestly, I couldn't have done it better myself. 🤣

And let me be honest: If I want to enrich something "nonsensical" with documentation, then it remains nonsensical. Regardless of whether I do it as a human or use the AI Assistant 🤣

Another AI Action that I use very frequently, and which is also found within Context Actions, is Find problems in code.

Find problems in code

As the name suggests, the goal here is to have the AI Assistant find potential problems in the code, or rather to let the AI Assistant detect them. And I think the latter describes it best, because the AI Assistant actually acts like a peer, examining the code and sharing its "findings" with you.

Unlike the AI Action Write documentation, the analysis and "exchange" don't take place in the editor and thus on/in the code. Nor is any code changed or added. Instead, this AI Action switches to another IDE window, which is open in addition to the code editor.

It's important to note that this AI Action copies the complete code (from which it was triggered) into the conversation and at the same time instructs the AI Assistant to look for potential problems in it.

In other words, I could do the same thing manually by starting a new conversation with the AI Assistant, copying and pasting the code myself, and asking it to find potential problems in it.

I find it well done that syntax highlighting is retained, and that the AI Assistant also recognises the corresponding language and displays it to me.

And then it starts… 🎉

Here we go…

The code is analysed by the AI Assistant, and it immediately shares its "findings." It works through the code from top to bottom, describing the possible problems line by line (if any are found).

It describes, in text form, the identified reasons for the potential problem and also presents a solution as a code snippet.
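To illustrate, here is a small sketch of my own (not actual assistant output) of the kind of potential problem it flags and the corrected snippet it presents alongside its explanation:

```typescript
// Hypothetical example: code I might hand to the "Find problems in code" action.
export function topScores(scores: number[]): number[] {
  // Potential problem the assistant would likely point out: sort() without a
  // comparator converts numbers to strings, so [5, 40, 9] sorts to [40, 5, 9].
  // It also mutates the input array.
  return scores.sort().slice(0, 3);
}

// The kind of corrected snippet it presents alongside its explanation:
export function topScoresFixed(scores: number[]): number[] {
  return [...scores].sort((a, b) => b - a).slice(0, 3);
}
```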

Here, too, the AI Assistant remains prosaic, technical, and above all transparent. Through its description, I can understand its "thought processes" (can you call it that with an AI? 🤔). I never feel left in the dark, and I interpret its descriptions as suggestions or ideas.

At no point did it feel like the AI Assistant was imposing anything on me or even wanting to be "right" 👍.

Exchange with the AI Assistant

Once its analysis is complete, the AI Assistant waits for me with a so-called User Prompt. In this, I can now engage in the "exchange" 😎

Since the potential problems are numbered, I can easily refer to the mentioned problems within the User Prompt without adding or duplicating long texts or code.

I shaped my "exchange" with the AI Assistant as if I were sitting across from a human peer. That is, I asked questions and had the analyses explained to me in detail.

Transparency

It became clear to me that the AI Assistant really only "sees" the code I entered into the AI Action. Or rather, it only "sees" the code with which the conversation was started. That is, the AI Assistant does not use an AST and cannot analyse functions from other modules or from modules referenced in the code.

If the AI Assistant comes across, for example, a function call that is not present in the code, its analysis and its statement about it remain transparent: it mentions this and points out that it cannot analyse this function because it has no insight into it. 😎
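A small, hypothetical illustration of what I mean (the module and function names are made up): if only this file is handed to the AI Action, the assistant can reason about the reduce call, but it will note that it cannot say anything about normalizePrices:

```typescript
// Hypothetical example: only this file is passed to "Find problems in code".
import { normalizePrices } from "./pricing"; // implementation not visible to the assistant

export function totalPrice(rawPrices: number[]): number {
  // The assistant can reason about this reduce call, but it points out that it
  // has no insight into normalizePrices, since that code was not provided.
  return normalizePrices(rawPrices).reduce((sum, price) => sum + price, 0);
}
```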

I'm torn here: In some situations, it would be nice if the AI Assistant could look "deeper" and thus analyse all referenced modules as well. And yet, I'm excited that this doesn't happen! 👍

Because that's exactly what makes it feel so transparent to me; the AI Assistant analyses what I show it - nothing more. 😎

And now?

The "exchange" with the AI Assistant takes place exclusively through the User Prompt and thus in text form. I can enter free text and also write or copy-and-paste Code. Similar to User Prompts of other AIs, I can embed the code in backticks (`). The User Prompt of the AI Assistant even recognises when I write or copy-and-paste code, embeds it automatically, and formats it at the same time.

As a user, I also have the two "quick functions" available for the code snippets from the analysis…

  • Copy to Clipboard and
  • Insert Snippet at Caret

Conclusion

I've already expressed my personal conclusion, and I stick to it:

🥳🥳🥳 I AM EXCITED 🥳🥳🥳

The AI Assistant integrates wonderfully into the IDE. I have the freedom to choose at any time whether I want to use it or not. And when I do use it, JetBrains remains true to itself by embedding it in the Context Actions, making its use seamless, setting the context, and letting me act via the familiar keyboard shortcuts.

I currently enjoy engaging in the "exchange" with the AI Assistant. I'm happy to update my article from time to time to share my experiences with you.

I'm also interested in your opinion…

  • Have you already used the JetBrains AI Assistant?
  • What are your experiences with it?
  • Do you have comparisons with other AI systems?

Wishes for JetBrains

Despite all my enthusiasm, there are wishes I would like to convey to JetBrains:

  • It would be nice to be able to interrupt or "stop" the responses of the AI Assistant, because it often happens that Send is pressed too quickly in the User Prompt, even though the information or code hasn't been fully entered yet 🤷
    • It also happens that I would like to interrupt the AI Assistant when, after its first output, it becomes clear to me that I'm going in the "wrong" direction, or that what I've written has already inspired me, so I don't need any further "output" from the AI Assistant.
  • For the code snippets, I wish for an additional function that takes the snippet directly to the actual code location where the AI Assistant found the potential problem - an Integrate Snippet, so to speak.
