Jonathan Vila
➡️ AI gives you ✅ TIME not ❌ CONFIDENCE : Developer productivity toolkit

Let's be real – keeping up with the pace of software development today is intense. New frameworks pop up and the push for faster, better, and more secure code never stops.

This article is all about cutting through the buzz and looking at how AI-powered tools can actually help you, the Java developer, day-to-day. We'll dive into specific ways AI can help you through the whole SDLC:

  • Understanding Complex Tasks
  • Accelerating Code Creation
  • Streamlining Cloud Deployment
  • Creating Effective Tests
  • Increasing Code Quality and Security
  • Improving Code Review

Using AI to Understand Complex Tasks

Okay, first up: wrapping your head around the job at hand. You know those moments when you need to implement a feature based on requirements that feel a bit… fuzzy? Traditionally, this means lots of reading, maybe drawing diagrams, and asking clarifying questions.

Here's where AI can lend a hand. Think of tools like GitHub Copilot, Windsurf, and Cursor, among others, as smart summarizers and brainstorming partners.

  • Digest Docs: Feed the AI that long requirements doc and ask it to summarize the key points related to a specific feature.
  • Clarify Ambiguity: Try phrasing a requirement as a question to the AI: "Explain the user session timeout logic described here."
  • Break It Down: Feeling overwhelmed by a big task? Describe the goal to the AI and ask it to suggest potential steps or components.
  • Connect It to Your Codebase: Feed your codebase to the AI assistant, link the requirements doc, and ask the AI where you should put the new code.

Getting Specific with GitHub Issues and Your Code:

Now, what about pointing the AI at a specific issue ticket and your actual codebase to figure out where to start coding? This is getting more powerful:

  • IDE Integrations are Key: Tools like GitHub Copilot operate right within your IDE. You can copy the core description from your Jira ticket into the chat panel and ask something like: "Based on this ticket #1, what services might I need to modify?"
  • Codebase-Aware AI: Some newer tools like Cursor can actually index your entire codebase. This allows for more powerful queries. You might be able to ask: "Where in our codebase is the logic related to 'JIRA-456' located?"
  • AI Tool Integration: MCP servers (more details below) connect your AI assistant directly to external servers that add capabilities to the agent. In this case it's the GitHub API that provides read and write access to the current repository's issues.

As an example, you can use GitHub Codespaces with Copilot Agent Mode, and it will give you an explanation of the ticket and the changes to make.


Important Caveats:

  1. Security First: Be very careful about pasting internal code or sensitive information into public AI tools.
  2. It's a Guide, Not a Guru: Treat the AI's output as educated guesses. It's a starting point to accelerate your own investigation. You still need your developer brain to validate its reasoning.

Using AI this way is about accelerating the initial investigation phase. It helps you form a hypothesis about where to look and what might be involved so you can jump into the interesting design and coding parts sooner.

Accelerating Code Creation

Okay, let's talk about actually producing Java code. This is where tools like GitHub Copilot in VS Code really shine. Think of them as a pair programmer who types really fast and knows a ton of standard library calls, common patterns, and your company's codebase.

  • Killing Boilerplate: We all know Java can be a bit verbose sometimes. Need to write constructors, accessors, equals(), hashCode() for a POJO? These tools can generate them based on the fields you've declared.
  • Generating Snippets and Methods: Write a clear method signature and the AI will generate a surprisingly decent implementation. You can use different methods to generate code:
    • Inline chats: Ask the assistant to generate a "Java method to fetch data from API endpoint XYZ and parse the JSON response" and wait a bit. The AI might suggest the entire method body using HttpClient or RestTemplate.


  • Ghost text: Start typing a typical Spring Boot controller method like @GetMapping("/products/{id}") public ResponseEntity<Product> getProductById ... The AI will likely suggest the code to call a service and return the response.
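For instance, the inline-chat prompt above ("Java method to fetch data from API endpoint XYZ and parse the JSON response") might yield something like this minimal sketch using the standard java.net.http.HttpClient. The class name and endpoint URL are placeholders, and real JSON parsing would use a library such as Jackson, which is left out here to stay dependency-free:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the kind of method an AI assistant might generate from an
// inline-chat prompt. Returns the raw JSON body; real code would parse it.
class ApiFetcher {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static String fetchJson(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("HTTP " + response.statusCode());
        }
        return response.body();
    }
}
```

In a Spring project the AI would more likely reach for RestTemplate or WebClient, matching whatever it sees in your open files.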

How Context Improves Suggestions:

These tools use the context of your project to tailor suggestions. But how do they get this context?

  • Primarily Your Open Files: The AI heavily analyzes the code in the file(s) you currently have open in your editor.
  • Chat & Explicit Prompts: Mentioning specific class/method names from your project or pasting relevant snippets guides the AI. For example: "Using our CustomerService class, generate the boilerplate code for a new method findCustomerByEmail(String email) that calls the customerRepository."
  • Codebase Awareness: Specialized tools using Agents can be set up to index your entire codebase. This allows for much deeper context, potentially leading to suggestions that understand your project's specific patterns even if the relevant files aren't currently open.
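As a hypothetical illustration of the CustomerService prompt above, the AI might produce something along these lines. The Customer record and the CustomerRepository interface are assumed shapes for the example, not real project code:

```java
import java.util.Optional;

// Hypothetical domain types, standing in for whatever the AI finds
// in your open files or codebase index.
record Customer(String id, String email) {}

interface CustomerRepository {
    Optional<Customer> findByEmail(String email);
}

class CustomerService {
    private final CustomerRepository customerRepository;

    CustomerService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    // The method the prompt asked for: guard the input, then delegate
    // to the repository, following the project's existing patterns.
    Optional<Customer> findCustomerByEmail(String email) {
        if (email == null || email.isBlank()) {
            return Optional.empty();
        }
        return customerRepository.findByEmail(email.toLowerCase());
    }
}
```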


List of context elements to add in a chat

Why Context Matters: This context is crucial. It means the AI is more likely to suggest code that:

  • Calls your existing helper functions or service methods.
  • Follows the coding style and patterns already present in the file.
  • Uses the correct versions of libraries already defined in your project dependencies.

The result? Code suggestions feel less generic and much more like they actually belong in your specific project.

Using the right context also helps on two important points:

  • Learning New Libraries: Trying out a new Java library or framework feature? You can often just write a comment describing what you want to achieve ("// Use Apache Commons CSV to write records to a file") and let the AI generate a starting example, often using the context of your existing code style.
  • Quick Mockups: Need a quick data transformation or a utility function? Describe it in a comment or start typing, and let the AI fill it in, leveraging context for better results.
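As a small illustration of the "quick mockup" idea, here is the kind of utility an AI might fill in from a one-line comment. This is a dependency-free sketch with basic quoting only; the Apache Commons CSV prompt mentioned above would instead generate CSVPrinter-based code:

```java
import java.util.List;
import java.util.stream.Collectors;

// Utility an AI might generate from the comment
// "// join a record's fields into one CSV line".
class CsvUtil {
    static String toCsvLine(List<String> fields) {
        return fields.stream()
                .map(CsvUtil::escape)
                .collect(Collectors.joining(","));
    }

    // Quote fields containing separators or quotes, doubling embedded quotes.
    private static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }
}
```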

For Experienced Devs: Look, this isn't about the AI writing your core, complex business logic. You're still the architect and the problem solver. By handling some of the work through your codebase context, AI frees you up to focus on the harder, more valuable parts of the system.

It takes a little getting used to, and you absolutely must review the generated code (even context-aware AI isn't perfect!).

Integrating Specialized Agents and Reasoning in Agentic IDEs: Agentic IDEs can significantly enhance code generation by employing various specialized agents, each tailored for specific tasks like API interaction, database querying, or UI component creation. Moreover, these IDEs can provide detailed reasoning behind the generated code, outlining the steps taken, which improves developer understanding and trust in the AI's output.

In tools like VS Code with GitHub Copilot or Cursor, we can even tailor the behaviour of the agents when they generate code.


Here we can specify that the agent should use the latest Java 24 features, a specific version of Quarkus, or particular front-end frameworks, among other particularities of your code.

Advanced tip: ask the assistant to create an instructions file based on your current codebase structure, to serve as the template for all future projects.
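For example, in VS Code with GitHub Copilot such instructions typically live in a .github/copilot-instructions.md file. A hypothetical file might look like this (the rules below are illustrative, not prescriptive):

```markdown
# Project instructions for the AI assistant
- Target Java 21; prefer records and pattern matching where they simplify code.
- Follow our package layout: api, service, and repository packages per feature.
- Use constructor injection only; never field injection.
- Every new public method needs Javadoc and a unit test.
```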

MCP servers to the rescue: MCP servers greatly enhance code generation and understanding by facilitating connections to specialized tools, expanding knowledge beyond typical Large Language Model (LLM) training data. Several IDEs like Windsurf, Cursor or VS Code with Copilot support this technology, enabling developers to leverage MCP-driven AI within their coding environment.

For instance, using an MCP server connected to a database tool, the assistant can generate CRUD (Create, Read, Update, Delete) operations tailored to each table in a database, incorporating specific data types, relationships, and constraints.

There are several places where you can get MCP servers for specific tasks, with a very easy installation process: https://mcpservers.org/, https://mcpmarket.com/, https://mcp.so/, etc.

Here is an example of installing a Docker-based MCP server that implements the GitHub tools, allowing our assistant to connect to our GitHub repository and fetch issues, branches, PRs, etc.


These MCP servers expose tools to be used by the agent. In this case the GitHub MCP server shares 36 tools:


Streamlining Cloud Deployment

Okay, so your Java code is looking good. Now, how about actually shipping it? Getting applications deployed to the cloud involves writing a lot of configuration – Dockerfiles to containerize your app, Kubernetes YAML for orchestration, and CI/CD pipeline definitions (GitHub Actions, GitLab CI, etc.).

This is another area where AI assistants can save you significant time and effort, acting as configuration generators.

  • Generating Dockerfiles: Need to containerize your Spring Boot or Quarkus app? Instead of starting from scratch, ask your AI assistant: "Generate a multi-stage Dockerfile for a Java 21 Maven project that builds the JAR and runs it using an OpenJDK JRE slim image."
  • Scaffolding Kubernetes Manifests: Get a head start by asking: "Create a Kubernetes Deployment YAML for an app named 'order-service', using image 'my-repo/order-service:v1', with 3 replicas and exposing container port 8080."
  • Infrastructure as Code (IaC) Templates: Need a basic Terraform configuration? Describe what you need: "Write a simple Terraform configuration (HCL) to create an AWS S3 bucket named 'my-app-data-bucket' with versioning enabled."
  • CI/CD Pipeline Starters: Setting up a build and test pipeline? Ask: "Generate a basic GitHub Actions workflow file that checks out code, sets up Java 21, builds with Maven, and runs unit tests."
  • Debugging Config Errors: Pasting a cryptic error message from kubectl into an AI chat and asking "What does this error mean and how can I fix it?" can point you in the right direction faster than searching online.
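As a concrete illustration, the CI/CD prompt above might produce a starter workflow along these lines (action versions and the Maven goal are assumptions to adjust for your project):

```yaml
name: build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '21'
          cache: maven
      # Build the project and run the unit tests
      - run: mvn -B verify
```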

Agentic MCP AI is your friend: Getting the logs or the configuration for running apps in your cluster can be crucial to produce more aligned code. For instance, getting the CRDs in your cluster can help you create a better Kubernetes operator that reacts to changes on them.

You can rely on the many available MCP servers so that your agentic AI assistant considers that information when you are chatting with it. The Kubernetes MCP server is a clear example of this: it gives you 15 tools to interact with your K8s cluster.


VS Code with GitHub Copilot using the Kubernetes MCP server to interact with a local cluster

Java Context Matters: When generating configs for Java apps, you can get specific. Ask for Dockerfiles that set appropriate JAVA_OPTS environment variables for JVM memory limits (-Xms, -Xmx).
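Putting those two ideas together, here is a hedged sketch of the multi-stage Dockerfile the AI might generate, including a JAVA_OPTS memory setting. Base image tags and memory values are assumptions to review for your environment:

```dockerfile
# Build stage: compile the JAR with Maven and a JDK image
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -B dependency:go-offline
COPY src ./src
RUN mvn -B package -DskipTests

# Runtime stage: slim JRE image only
FROM eclipse-temurin:21-jre-jammy
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
# JVM memory limits, as suggested in the text above
ENV JAVA_OPTS="-Xms256m -Xmx512m"
EXPOSE 8080
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]
```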

⚠️ Hold Up! Review Carefully! ⚠️

Just like with generated code, AI-generated configuration files are starting points, NOT final products. You absolutely need to review them carefully:

  • Security is Paramount: This is critical. AI might generate insecure configurations – hardcoded secrets, overly permissive IAM roles, or network policies open to the world (like 0.0.0.0/0). Security configurations MUST be reviewed by someone knowledgeable. Don't assume the AI got it right.
  • Check for Best Practices: Does the generated config follow current best practices for the specific cloud provider or tool? AI knowledge might be outdated or too generic.
  • Understand, Don't Just Copy: Make sure you understand what the configuration actually does before applying it. You're still responsible for the infrastructure and deployment.
  • Test Thoroughly: Deploy to a non-production environment first and test rigorously.

Reinforce with IaC Static Analysis:

Beyond manual review, remember that specialized static analysis tools can also help validate your Infrastructure as Code (IaC) files. Tools like SonarQube (which supports Terraform, Kubernetes YAML, Dockerfiles, etc.), are designed specifically to scan these configuration files. They check for:

  • Common security misconfigurations
  • Adherence to cloud provider best practices
  • Potential syntax errors or logical issues
  • Secrets misusage

Integrating these IaC scanners into your CI/CD pipeline adds an essential automated check. It complements manual reviews and helps catch issues in both human-written and AI-generated configurations before they potentially impact your deployed environment.

Creating Effective Tests

Ah, testing. We all know it's crucial for catching regressions, ensuring correctness, and enabling confident refactoring. But it can also be time-consuming. Good news! AI can lend a hand here, helping you generate tests faster.

  • Generating Unit Test Scaffolding: You can ask your AI assistant to generate tests for a Java class or method. The AI will attempt to create a test class (e.g., using JUnit 5) with basic test methods covering the public methods of your source class.
  • Suggesting Test Cases: You can use AI chat features to brainstorm. Paste your method's code and ask: "What are some important edge cases I should test for this Java method?"
  • Creating Mock Objects: Setting up mocks can be tedious. AI assistants can often generate the necessary @Mock annotations, injection points (@InjectMocks), and when(...).thenReturn(...) statements based on how your class interacts with its dependencies. For example: "Generate a JUnit test for this OrderService method, mocking the ProductRepository."
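To make the OrderService example concrete, here is a dependency-free sketch of the pattern Mockito automates. OrderService, ProductRepository, and Product are hypothetical names taken from the prompt; with Mockito, the hand-written stub used in a test would be replaced by @Mock and when(...).thenReturn(...):

```java
import java.util.Optional;

// Hypothetical domain types for the example.
record Product(String id, long priceCents) {}

interface ProductRepository {
    Optional<Product> findById(String id);
}

class OrderService {
    private final ProductRepository productRepository;

    OrderService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // Method under test: looks up the product, then computes the total.
    long orderTotalCents(String productId, int quantity) {
        Product product = productRepository.findById(productId)
                .orElseThrow(() ->
                        new IllegalArgumentException("Unknown product: " + productId));
        return product.priceCents() * quantity;
    }
}
```

A test can then stub ProductRepository with a lambda (or, with Mockito, a @Mock) and assert on the computed total.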

Copilot chat asking it to generate the test methods


Tests generated by Copilot for this method


⚠️ HUGE WARNING: Review Generated Tests Like Crazy! ⚠️

This is possibly even more critical than reviewing generated application code: AI-generated tests MUST be thoroughly reviewed and often significantly refined.

Common pitfalls in AI-generated tests:

  • AI Doesn't Understand Intent: AI tests the code as it's written. It doesn't know the business requirements or the intended behavior. If your code has a bug, the AI might happily generate a test that confirms the buggy behavior! In this case a total tax of -1000 makes no sense, yet the AI wrote a test that verifies exactly what the code does, bug included.


  • Trivial and Meaningless Tests: AI often generates tests for simple getters/setters or very basic logic that might not provide much value. It might miss the truly complex or critical paths.
  • Incorrect Assertions: The assertions generated might be wrong, incomplete, or nonsensical. Don't assume they are correct.
  • Poor Quality: Generated tests might not follow best practices for naming, structure, or readability, making them hard to maintain.
  • Over-reliance on Mocking: AI might excessively mock things, leading to brittle tests that don't actually verify useful interactions.

How to Use AI for Testing Effectively:

  • Use it as a Starting Point: Let AI generate the boilerplate structure, basic happy-path tests, and mock setups.
  • Focus Your Effort: Use the time saved to focus on writing tests for the complex logic, critical business rules, and tricky edge cases – the areas where human understanding is essential.
  • Critically Review & Refine: Read every generated test. Does it make sense? Is it testing something important? Is the assertion correct? Is it readable?
  • Don't Chase Coverage Blindly: AI can quickly increase test coverage numbers, but coverage isn't the same as quality. A few meaningful tests are better than hundreds of trivial ones.
  • Provide the Right Context: Including functional testing information or feature requirements will help AI assistants tailor the tests to what is supposed to be tested, not just what the code currently does. In this case we ask it to create the tests considering the requirements specified in a GitHub issue.


Don't Forget Static Analysis for Test Code:

One more point on test quality: don't forget that static analysis tools can also help here! Tools like SonarQube, Checkstyle, and PMD often include specific rule sets designed to analyze your test code, not just your production code. They can check for:

  • Common testing anti-patterns.
  • Adherence to JUnit/testing framework best practices.
  • Potential bugs in tests.
  • Unused test code or helper methods.
  • Consistency in naming conventions for test classes and methods.

Running these analyzers on your test suites is another good practice, especially when incorporating AI-generated tests.

Improving Code Reviews

Alright, let's talk about pull requests (PRs) and code reviews. They're super important for team health and code quality, but they can also be time-consuming and sometimes frustrating: understanding a massive PR, catching subtle issues, and providing constructive feedback all take effort. And if you're the author, waiting for reviews and addressing comments takes time too.

How AI Helps Reviewers:

  • Quick Summaries: Tools like GitHub Copilot can automatically generate summaries of the changes in a PR. This helps reviewers quickly grasp the purpose and scope of the changes before diving into the code details.


They can even interact with the PR using extensions, providing functionality that is out of the scope of the LLM, like creating Mermaid diagrams for the classes.

  • Automated First Pass (via CI/CD): Remember those static analysis (SAST) and IaC scanning tools we discussed? Integrating them into your CI/CD pipeline to run automatically on PRs is a huge win.

How AI Helps Authors:

  • Pre-Submission Polish: Authors can use AI coding assistants (Copilot Chat, IntelliJ AI Assistant) in their IDE to refine and improve their code before even creating the PR. Asking "Can this code be simplified?" can catch issues early.
  • Implementing Feedback: If a reviewer asks for a specific change, an author could potentially ask their AI assistant for suggestions on how to implement that feedback efficiently and correctly.
  • Generating Related Artifacts: AI can help generate or update comments (like Javadoc for changed methods) or even draft basic documentation snippets related to the code changes, making the PR more complete.

⚠️ AI Assists, It Doesn't Replace Human Review! ⚠️

This is crucial: AI is a code review assistant, not a replacement for human reviewers.

  • Context is King: AI often lacks the deep understanding of the project's history, overall architecture, business requirements, and long-term goals that experienced human reviewers bring. Add the proper context for each prompt. You can even guide AI answers with personal instructions directly in the GitHub pull request page.


  • Design & Logic Still Need Humans: AI is generally poor at evaluating the appropriateness of a design choice or the correctness of complex business logic. That requires human critical thinking.
  • Knowledge Sharing: Code reviews are vital for team learning and knowledge sharing – something AI assistance doesn't replace.
  • Evaluate AI Output: Reviewers need to critically assess any summaries or issues flagged by AI.

Think of AI in code review as handling the first-pass checks, summarizing changes, and assisting with implementation details. This frees up valuable human reviewer time to focus on the deeper aspects of code quality, design, and correctness.

Increasing Code Quality and Security

Writing high-quality, secure Java code is crucial, going beyond just making things work. It's vital to understand AI limitations, especially regarding reliability. Let's explore how AI can help refine code and how traditional tools remain essential for verification.

Using AI for Code Refinement and Understanding:

Where AI can be a valuable assistant is in helping you, the developer, understand and improve code:

  • Untangling Complexity: Use AI chat assistants to explain complex Java code sections.
  • Refactoring Collaboration: Ask the AI for suggestions on refactoring specific methods for better readability, simplification, or to apply certain patterns ("Refactor this using Java Streams" or "How can I simplify this nested logic?"). Treat these suggestions as ideas to be critically evaluated and adapted by you.
  • Learning Best Practices: Use AI to ask questions about secure coding practices ("What are common pitfalls with Java serialization?").
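As an illustration of the refactoring prompt above, here is a minimal sketch showing a loop-based method and the Streams version an assistant might propose. The method names and the length-10 rule are invented for the example:

```java
import java.util.List;

class RefactorDemo {
    // Original, loop-based version
    static long countLongNamesLoop(List<String> names) {
        long count = 0;
        for (String name : names) {
            if (name != null && name.length() > 10) {
                count++;
            }
        }
        return count;
    }

    // Stream-based refactoring an assistant might suggest.
    // Behavior should be equivalent -- verify it with tests before adopting.
    static long countLongNamesStream(List<String> names) {
        return names.stream()
                .filter(name -> name != null && name.length() > 10)
                .count();
    }
}
```

Whether the stream version is actually clearer is exactly the kind of judgment call that stays with you, not the AI.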

⚠️ THE GIANT RED FLAG: AI IS NOT RELIABLE FOR ISSUE DETECTION ⚠️


https://dl.acm.org/doi/pdf/10.1145/3558489.3559072

Now, regarding finding bugs and security vulnerabilities: relying on AI for direct issue detection is highly risky due to its fundamental lack of correctness and predictability guarantees.

  • Why the Risk? AI Makes Mistakes: AI models can hallucinate findings, generate numerous false positives, miss critical vulnerabilities, or provide incomplete/incorrect security advice. They operate on patterns, not deterministic analysis. Therefore, AI is NOT a substitute for proper analysis tools.

The Solution: Deterministic Static Analysis (SAST)

Because AI cannot be trusted for reliable issue detection, you must use dedicated Static Application Security Testing (SAST) tools. These are the right tools for the job:

  • Tools: SonarQube, Checkstyle, PMD, etc.
  • Why They Work: SAST tools operate by applying defined, verifiable rulesets and analysis techniques in a deterministic way.
  • Best practices:

    • Use tooling as soon as possible in your SDLC. Incorporate these SAST tools in the IDE to analyze the quality of your code at the same time you are introducing new changes.


SonarQube IDE view with an issue and its explanation

    • Connect your CI/CD pipeline with a Quality Gate tool to ensure no bad code gets merged into your main branch.


    • Use Quality Gate messages in your Pull Requests through PR decoration.


Essential Supporting Pillars: Testing and Reviews Using Human Skills

Alongside reliable SAST, robust engineering practices remain critical:

  • Rigorous Testing (TDD!): Test-Driven Development provides concrete proof that your code meets requirements and handles various scenarios correctly.
  • Human Code Reviews: Critical examination by experienced peers is essential to catch logical flaws, architectural issues, and subtle security concerns that automated tools (whether AI-based or traditional) might miss.
  • Pair Programming: This practice inherently includes collaborative review and discussion, promoting higher quality code, especially when integrating any new tool or technique like AI assistance.

In Summary: Leverage AI assistants carefully for tasks where they excel – helping you understand, refactor, and learn. But for the critical task of identifying bugs and security vulnerabilities, trust deterministic SAST tools. Combine this with rigorous testing and thorough code reviews to build truly high-quality, secure Java applications.

Conclusion

So, what's the bottom line here? Is AI going to take over Java development? Not anytime soon. But is it becoming a genuinely useful, practical tool that can make our lives as developers easier and more productive? Absolutely.

The real takeaway is to think of these AI tools not as replacements, but as powerful assistants or co-pilots. By offloading some of that work to AI, you get more time and energy to focus on the truly challenging and rewarding parts of software development.

But (and it's a big but!), remember those warnings we sprinkled throughout. AI isn't magic, and it's certainly not infallible. Critical thinking, thorough review, rigorous testing, deterministic static analysis, and collaborative code reviews are more important than ever.

And remember: ➡️ AI gives you ✅ TIME not ❌ CONFIDENCE

Happy coding!
