
re: Java vs Go: Impressive solutions for invented problems


I presume that the basis for your debate is this:

  • we should only use the language's constructs (if, for, ...) in the business/functional code; for anything else, e.g. tests, configuration, frameworks, etc.,
  • we instead have to invent a half-assed declarative language, preferably annotation-based

Fortunately, both are factually wrong (at least for my 20 year career in Java). There is certainly nothing wrong with if or for within a unit test - I do that all the time, e.g.

String someVal = testSubject.someFunc();
switch (someVal) {
  case VALUE_ONE: // fall-through
  case VALUE_TWO:
    fail();
    break; // unreachable (fail() throws), but makes the intent explicit
  case VALUE_THREE: // fall-through
  default:
    assertTrue(testSubject.otherFunc(someVal));
}

Of course, the above code violates SRP, but other than that, for illustrative purposes, the point stands.

Re your point about annotations, I refer you to Annotation Hell. E.g., almaer.com/blog/hibernate3-example... (no affiliation, just the first result in Google).

I actively prefer not to use Parameterized Tests, regardless of the language I'm writing in, but horses for courses there. If pushed to, I would prefer a for loop within a test method, using a field variable within the Test class. It's not a strong preference though, and I wouldn't argue the point in a PR etc. My approach in Java is pretty close to the Go code you posted, funnily enough...
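The for-loop-within-a-test approach can be sketched like this (plain Java rather than JUnit, with an AssertionError standing in for a JUnit assertion; the `square` method and the table values are invented for illustration):

```java
import java.util.Map;

public class TableDrivenSketch {
    // hypothetical method under test
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        // table of input -> expected output, iterated with a plain for loop
        Map<Integer, Integer> cases = Map.of(0, 0, 2, 4, -3, 9);
        for (Map.Entry<Integer, Integer> c : cases.entrySet()) {
            int actual = square(c.getKey());
            if (actual != c.getValue()) {
                // equivalent to a JUnit fail(...) with a descriptive message
                throw new AssertionError("square(" + c.getKey()
                        + "): expected " + c.getValue() + ", got " + actual);
            }
        }
        System.out.println("all cases passed");
    }
}
```

In a real test class the table would typically live in a field and the loop body would call JUnit assertions, but the shape is the same.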

I've never understood the point of a conditional test; I typically want all my tests to run, and the tests can set up pre-conditions as necessary (such as pretending that an environment variable has been set). Conditional tests (to me) imply that the build server might not run them, even though the condition would be true when running in the Production environment.

The fact that JUnit doesn't guarantee order of test execution is positively a good thing. I don't want my tests to bleed through to each other. Sure, my tests might take longer to run (e.g., Spring context bootstrap...) but that's simply an optimisation problem, and we're talking about CPU cycles on the build server (as a developer, I should only be running the tests that I care about during this iteration, the build server can run the rest).

Now, typically I would add another framework into the mix to help with test setup, Mockito, and that kind of leads us into Dependency Hell too, but I'm sure there's many other ways to achieve the same too.

 

Heya Dave,

There is certainly nothing wrong with if or for within a unit test - I do that all the time, e.g.

I mustn't have expressed my meaning clearly, apologies!

I wasn't referring to the code inside the test methods.

Rather, I was referring to the use of annotations, e.g. @Test, @ParameterizedTest, @Get, @Post, ... to declare tests, REST endpoints, change the behaviour of methods, etc.,

compared to the Go libraries' philosophy of doing it explicitly and imperatively, e.g. t.Run("dynamic test case", func(t *testing.T) { /* test code here */ }) to declare a subtest, or router.GET("/path", func() { ... }) to declare an HTTP handler, etc.

W.r.t. the annotation hell, I have a nice one: annotatiomania.com/ :D

I actively prefer not to use Parameterized Tests, regardless of the language I'm writing in, but horses for courses there. If pushed to, I would prefer a for loop within a test method, using a field variable within the Test class. It's not a strong preference though, and I wouldn't argue the point in a PR etc. My approach in Java is pretty close to the Go code you posted, funnily enough...

I agree there: I'd rather whip out a for loop than have to google how to use a JUnit parameterized test.

The problem is: if a single case inside the for loop fails, the whole test is marked as red, and you have to go through the logs to identify which particular case failed (assuming you had the necessary logging set up).
The tooling (the in-IDE test runner, Jenkins, Surefire test reports, ...) does not accommodate this style very well.

 

The choice to use annotations, at any level, isn't mandated by Java. That's mandated by choices made during design of the application (or maybe mandated by corporate code-style agreements).

As of Java 8 onwards, I've actually been writing far more "functional programming" style Java, so my code probably looks a lot more like Go - because I love lambdas. Some developers, typically more junior ones, do tend to struggle with lambdas though, in my experience.

The problem is: if a single case inside the for loop fails, the whole test is marked as red, and you have to go through the logs to identify which particular case failed (assuming you had the necessary logging set up).

I'd argue that's a good thing. Fail fast. I'd argue that tests shouldn't write any logs. The point of failure should make it explicitly clear at which point the test failed (at least down to the line). If this is within a loop/stream/lambda then yes, things can get complicated, but then you're just asking a developer to run the test case in debug, with the IDE set to halt execution on exception. Further, most JUnit assertions will allow you to put an explicit statement in the exception's message.
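Putting an explicit statement in the assertion's message, as suggested above, can look like this (plain Java, with an AssertionError standing in for JUnit's assertTrue(message, condition); the `isEven` predicate and inputs are invented for illustration):

```java
public class AssertMessageSketch {
    // hypothetical predicate under test
    static boolean isEven(int x) { return x % 2 == 0; }

    public static void main(String[] args) {
        int[] inputs = {2, 4, 7, 8};
        try {
            for (int input : inputs) {
                if (!isEven(input)) {
                    // the message pinpoints exactly which case failed,
                    // no log-grepping required
                    throw new AssertionError("isEven failed for input " + input);
                }
            }
            System.out.println("all cases passed");
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Here the loop fails fast on the first odd input (7), and the failure message alone identifies the offending case.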

Logs are useful for support purposes, but there's nothing wrong with a developer hooking up to a remote JVM using JPDA to find out the internal state of an application.

The tooling (the in-IDE test runner, Jenkins, Surefire test reports, ...) does not accommodate this style very well.

There, we certainly agree. All of the tooling (except perhaps javac itself) is strongly opinionated about how it expects the application design to be laid out.

PS., no need to apologise for anything - we're both entitled to hold different opinions, that's the point of a debate! It might be that one (or both) reconsiders, but equally, we can still respect each other's opinions without any change.
