Shai Almog

Scaffolding Spring Boot, Freemarker and JDI - Building DDTJ, Day 2

Yesterday I discussed the first steps in building a new open source project from scratch (which also applies rather nicely to commercial undertakings). Today we continue with the first phase: scaffolding the actual project and picking the tools. Spoiler: I picked the Spring Boot application framework...

Why did I pick Spring Boot?

This was a straightforward choice:

  • Spring Boot is very mature
  • I know Spring Boot well
  • Spring framework's approach to configuration will let the project grow to support additional use cases
  • Spring Boot's native support will let me package the final app as a single executable

The obvious question would be, why did I consider something else?

I was conflicted because Spring Native is still missing some things I need, such as Freemarker support (or any templating engine). I think having native compilation for this tool is pretty important in the long run, but for the first MVP it would be a "premature optimization". Also, I feel the alternatives to Spring that I'm familiar with aren't as mature.

This is important: try to avoid new technologies for MVPs. By "new" I mean a tool you aren't familiar with.

The other thing I'm concerned about is the size of Spring Boot. Other application development frameworks often boast smaller memory footprints and faster startup times. These things matter in the long run. But I think it will be easier to port a Spring Boot application later than to adopt something new based on a vendor's niche benchmark.

I created a base project using the Spring Initializr tool, which generates some boilerplate configuration and source code. I avoided many of the standard Spring integration options you would normally add to a Spring application, such as:

  • Spring Data - we don't need database access or a database connection. For performance, everything is stored directly in memory. Not even an in-memory database; there's no need
  • Spring Security - the application is running locally to the VM. Yes, security is crucial, but Spring Security is overly focused on web security. We won't even use HTTPS for the MVP
  • Spring Cloud - we don't need any cloud service for this tool. It needs to work fast locally

I added the Spring Boot web support for the RESTful web services. I considered WebFlux, but I'm not sure it would deliver better performance for this type of application. Also, with the newer architecture (more on that below), I don't think it matters as much.
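
To make that concrete, here's a minimal sketch of the kind of REST endpoint the backend could expose to the CLI. The controller name and route are hypothetical placeholders, not the project's actual API:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical endpoint; the CLI could poll this to check the backend is up
@RestController
@RequestMapping("/api")
public class StatusController {
    @GetMapping("/status")
    public String status() {
        return "OK";
    }
}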

At the moment I don't need most of the features of the Spring Framework, such as automatic configuration, dependency management, etc. But they will come in handy as we move forward. E.g., the ability to customize the generated templates for a specific corporation using an external configuration file could be very useful in enterprise settings.
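
As a hedged illustration of that idea (the ddtj.templates prefix and the fields are made up for this example), external customization could eventually look like a standard @ConfigurationProperties bean:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Binds entries such as ddtj.templates.copyright-header from the properties file
@Component
@ConfigurationProperties(prefix = "ddtj.templates")
public class TemplateProperties {
    // e.g. a corporate copyright header injected into every generated file
    private String copyrightHeader = "";
    // base package for the generated test classes
    private String basePackage = "generated.tests";

    public String getCopyrightHeader() { return copyrightHeader; }
    public void setCopyrightHeader(String value) { copyrightHeader = value; }
    public String getBasePackage() { return basePackage; }
    public void setBasePackage(String value) { basePackage = value; }
}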

Right now the only configuration change I made was to set:

server.port=2012

This goes in the configuration properties file (I used the 20th of December for the number; 2012 isn't a well-known port). The nice thing about Spring is that even if something else is already on that port, it's trivial to launch with a different one using -Dserver.port on the command line.

Template Engine

Spring Boot has extensive support for Freemarker, Velocity and Thymeleaf. I've worked with all three, but mostly with Freemarker; Thymeleaf is mostly geared toward HTML, as far as I can tell. The project needs Java code generation, and Freemarker handles that (Velocity does as well). Generating Java isn't something I've done myself with Freemarker, but there's plenty of sample code. I'm still not sure if it will handle my requirements well enough, but we'll find that out when we generate source code.

At the moment, I just added the dependency and haven't yet written a single line of Freemarker template... I'm not sure if it's the right fit or if it will be enough for the finished product. We'll have to see about that.

Either way, we can probably use it with Spring MVC for a simple web application UI later on.
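
For reference, plain Freemarker code generation boils down to very little code. This is a generic sketch (the template name and model keys are invented for the example), not the project's actual generator:

import freemarker.template.Configuration;
import freemarker.template.Template;
import freemarker.template.TemplateException;

import java.io.IOException;
import java.io.StringWriter;
import java.util.Map;

public class CodeGenerator {
    public static String generate() throws IOException, TemplateException {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        // Load *.ftl templates from src/main/resources/templates
        cfg.setClassForTemplateLoading(CodeGenerator.class, "/templates");
        Template template = cfg.getTemplate("test-class.ftl");
        StringWriter out = new StringWriter();
        // The keys must match the placeholders used inside the template
        template.process(Map.of("className", "UserServiceTest",
                "methodName", "testFindUser"), out);
        return out.toString();
    }
}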

Java 11 All Around

I wanted to go with Java 17. I really wanted to when I started, and I even generated the first project with Java 17 as the JDK. My thought process revolved around using Java 17 and GraalVM to compile it, but the Spring Native tooling doesn't support 17 yet, and Spring Boot can't compile to native with Freemarker anyway.

So for now I standardized on Java 11; I will re-evaluate as these projects mature and free me from the JDK version constraint.

Lombok & PicoCLI

Because I'm going with JDK 11 and can't use records, I went with Lombok. I know it's controversial, but it has worked for me so far and is supported for Spring Boot native compilation. I think many people hate Lombok because they misused its equals/hashCode support with their Spring Data/JPA code, which can cause problems, as I've mentioned before.
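
Here's a minimal sketch of the kind of Lombok value class I have in mind; the class and field names are illustrative, not the project's final data model:

import lombok.AllArgsConstructor;
import lombok.Data;

@Data               // generates getters, setters, equals/hashCode and toString
@AllArgsConstructor // generates a constructor covering all fields
public class MethodInvocation {
    private String className;
    private String methodName;
    private Object returnValue;
}

Since there's no JPA in this project, the generated equals/hashCode doesn't run into the entity identity pitfalls I just mentioned.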

For the CLI, I used PicoCLI. I meant to write about it ages ago but was bogged down with other things. I researched dozens of CLI tools for Java when we started Lightrun. They were all just awful. I like an opinionated approach as much as the next person, but they simply didn't let me define the syntax of the CLI.

I tried PicoCLI after I had already given up. I had very low expectations but was totally blown away. It's easy to use and powerful at the same time. I never want to see an argv again!
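
To show why, this is roughly all it takes to get a typed, self-documenting command line with PicoCLI. The command name and option here are placeholders for illustration:

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

import java.util.concurrent.Callable;

@Command(name = "ddtj", mixinStandardHelpOptions = true,
        description = "Generates unit tests from a recorded run.")
public class DdtCli implements Callable<Integer> {
    // --help and --version come for free from mixinStandardHelpOptions
    @Option(names = {"-p", "--port"}, description = "Backend port", defaultValue = "2012")
    private int port;

    @Override
    public Integer call() {
        System.out.println("Connecting to the backend on port " + port);
        return 0;
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new DdtCli()).execute(args));
    }
}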

A Revelation and Strategy Shift

I spent the past couple of months thinking here and there about DDT. Many architectures cycled through my mind and eventually I settled on one choice. I could see the big set pieces rather well...

Then, as I started scaffolding the Spring Boot code, it occurred to me that this approach was completely wrong and wasteful. That's often the problem with design: once we commit something to a document and share it with the team, we feel married to the direction we picked. We can't "feel" a design. When I started writing the code in Spring Boot, the approach became clearer.

My initial approach was of three distinct pieces:

  1. Agent running in the target VM, communicating with the Spring Boot Backend
  2. Spring Boot Backend to store agent state
  3. CLI tool communicating with the Spring Boot Backend

Then I started thinking: Why the hell do we need an agent?

It was my initial approach because that's how most of these tools are built, but this specific tool can just leverage the JDI API instead of the agent APIs. If we're doing that, we don't even need to leave the comfort of Spring Boot. It also means we could upgrade to Java 17 while the target VM still runs Java 8 if we so choose.
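
For context, attaching to a running VM over JDI takes surprisingly little code. This is a generic sketch using the standard com.sun.jdi APIs, not the project's actual connection logic:

import com.sun.jdi.Bootstrap;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.connect.AttachingConnector;
import com.sun.jdi.connect.Connector;

import java.util.Map;

public class JdiAttach {
    // The target VM must be launched with something like:
    // -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
    public static VirtualMachine attach(String host, String port) throws Exception {
        AttachingConnector connector = Bootstrap.virtualMachineManager()
                .attachingConnectors().stream()
                .filter(c -> "dt_socket".equals(c.transport().name()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No socket connector found"));
        Map<String, Connector.Argument> args = connector.defaultArguments();
        args.get("hostname").setValue(host);
        args.get("port").setValue(port);
        return connector.attach(args);
    }
}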

With the original architecture, I had considered class file manipulation to adapt the bytecode.

This architecture is so much easier and faster. It will reduce some communications and should work well. I hope it will scale properly.

The Data Model

My primary focus today has been the data model: getting the right fields into place and defining the structures in which the debug process will store the invocations. I shared the data model between the backend Spring code and the CLI code.

It was an axiom when I was a young programmer that getting the data structures right is 50% of the work in a project. I don't know if it's really 50%, but when you think the data model through, the puzzle pieces of the project fall into place.
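
As a hedged sketch of what I mean (reusing the hypothetical MethodInvocation class from the Lombok example above), the in-memory store could be as simple as:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class InvocationStore {
    // Keyed by "fully.qualified.ClassName.methodName"; everything stays in
    // memory, matching the "no database" decision above
    private final Map<String, List<MethodInvocation>> invocations = new ConcurrentHashMap<>();

    public void record(String method, MethodInvocation invocation) {
        invocations.computeIfAbsent(method, k -> new CopyOnWriteArrayList<>())
                .add(invocation);
    }

    public List<MethodInvocation> invocationsFor(String method) {
        return invocations.getOrDefault(method, List.of());
    }
}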

Mono-Repo

I went with a mono-repo approach rather than multiple projects. There are several reasons behind this:

  • It's easier: we have one version and we update all the pieces in one place
  • One location to star/follow/fork
  • CI and integration tests are much easier
  • One place for all the docs, without sending people on a cherry-picking path

When I started using git, people convinced me to break up my repos because "that's how git is used". I foolishly listened to that... Huge mistake.

CI, Sonar & Snyk

I don't like code reviews, and I'm not too crazy about SonarCloud. Getting an error message on a PR is never fun... But it catches bugs and does it rather well. Unlike a human reviewer, it's prompt, consistent and thorough. It's a bit "extreme" and drives me crazy with some of its nitpicks, but I feel it makes me a better programmer. It literally found bugs in my initial code, which is amazing since there's so little code. I like that the error messages are proactive and come with excellent suggestions. Again, better than most humans.

The one thing that drives me crazy is that I get some "code smell" warnings for code that's perfectly fine, and I can't dismiss them. E.g., I need to use com.sun APIs since there are no standard Java API alternatives. It's a documented API, but still... Or there's a warning recommending I use Map's computeIfAbsent instead of get() in a particular block. Normally, I would accept that. But I use synchronization for this process and I reduce the scope of the lock, so I want to return from that block right after I do the "compute". That makes the lock more efficient (arguably, since stepping out of one lock just to step into another is nuanced...).
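
To illustrate the kind of pattern that triggers the warning, here's a contrived example (not my actual code) where get() with an early return keeps the critical section short:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WarningExample {
    private final Map<String, List<String>> cache = new HashMap<>();

    public List<String> entriesFor(String key) {
        synchronized (cache) {
            List<String> existing = cache.get(key); // Sonar suggests computeIfAbsent here
            if (existing != null) {
                return existing; // early return keeps the lock scope tight
            }
            List<String> created = new ArrayList<>();
            cache.put(key, created);
            return created;
        }
    }
}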

Regardless, looking at warnings all the time makes me feel like I'm doing something wrong.

I integrated this into GitHub Actions, which is pretty trivial to do, and added artifacts for the CLI and the Spring Boot backend code. So when something starts working, we'll have historical builds etc. to work from. Pretty neat.

Finally, I added Snyk, which seems essential given the current state of vulnerabilities. Since it's free for open source projects, we should probably try to get it on all our repos. Integration was trivial, which is great. Unfortunately, the badge currently seems to be suffering from a known issue.

The code is there; it isn't much to look at (yet) but the basic skeleton for building the project is forming.

Tomorrow

It's been a busy day, and if you follow the project, you can see the code fleshing out. My primary focus is currently the debugging API and carrying the core data we need into the Spring Boot backend.

So tomorrow I plan to talk more about that: working with JDI and the web interface. I will also discuss code coverage, testing, etc. I might even start going into the code I wrote and discuss why I did various things. Right now it's still a bit too abstract; there is code, but I'm not sure if it's any good.

If you find this interesting/useful, you can follow me on Twitter where I publish everything I do. And dad jokes...
