Nithin Bharadwaj

**5 Java Annotation Processing Techniques That Generate Code and Eliminate Boilerplate Completely**


I've been writing Java code for years, and if there's one thing that's remained constant, it's the sheer amount of repetitive code we have to write. You know the type—the getters and setters, the builders, the mappers that copy fields from one object to another, the validation logic you write twice. It's tedious, it's error-prone, and it buries the important business logic under mountains of ceremony.

But there's a better way. Java has a powerful, if somewhat overlooked, feature called annotation processing. It lets you write programs that run during compilation to read your code and generate new source files. Think of it as having a very diligent assistant who writes all the boring, repetitive code for you, perfectly, every single time. The best part? It happens at compile time. Your generated code is just regular Java, sitting right there in your project, ready to be inspected, debugged, and used. It's not magic; it's just smart automation.

Let me show you five practical ways I use annotation processing to cut down on boilerplate. I'll walk you through each one with code, explaining it as if we were pair programming. We'll start simple and build up to more sophisticated uses.

The first technique is generating repetitive code patterns automatically. Builders are a classic example. You have a class with a bunch of fields, and you want a nice, fluent API to create instances. Writing that builder by hand for every data class is a chore.

Instead, I define a simple annotation to mark which classes need a builder.

// This annotation lives in your project. It's just a marker.
// RetentionPolicy.SOURCE means it disappears after compilation; it's only for the processor.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
public @interface MakeBuilder {
    String name() default "";
    boolean fluent() default true;
}

Now, the real work happens in the annotation processor. Don't be intimidated by the name. It's just a Java class that the compiler calls when it sees your annotation.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.*;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

// Tell the compiler what annotation we handle and which Java version we support.
@SupportedAnnotationTypes("com.yourcompany.MakeBuilder")
@SupportedSourceVersion(SourceVersion.RELEASE_17)
public class BuilderMaker extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        // The compiler gives us all elements annotated with @MakeBuilder
        for (TypeElement annotation : annotations) {
            Set<? extends Element> elements =
                roundEnv.getElementsAnnotatedWith(annotation);

            for (Element element : elements) {
                // Ensure we're only processing classes
                if (element.getKind() == ElementKind.CLASS) {
                    TypeElement classElement = (TypeElement) element;
                    try {
                        writeBuilderFile(classElement);
                    } catch (IOException e) {
                        // Surface failures through the compiler instead of crashing the build
                        processingEnv.getMessager().printMessage(
                            Diagnostic.Kind.ERROR, e.getMessage(), classElement);
                    }
                }
            }
        }
        return true; // We've claimed these annotations; no other processor needs them.
    }

    private void writeBuilderFile(TypeElement classElement) throws IOException {
        String className = classElement.getSimpleName().toString();
        String packageName = processingEnv.getElementUtils()
            .getPackageOf(classElement).getQualifiedName().toString();
        MakeBuilder annotation = classElement.getAnnotation(MakeBuilder.class);

        // Decide on the builder class name. Use the custom name or default to "ClassNameBuilder".
        String builderName = annotation.name().isEmpty()
            ? className + "Builder" : annotation.name();

        // This is how we create a new .java source file.
        JavaFileObject builderFile = processingEnv.getFiler()
            .createSourceFile(packageName + "." + builderName);

        try (PrintWriter out = new PrintWriter(builderFile.openWriter())) {
            out.println("package " + packageName + ";");
            out.println();
            out.println("public final class " + builderName + " {");

            // First, we need fields in the builder to mirror the instance fields.
            List<VariableElement> fields = new ArrayList<>();
            for (Element enclosed : classElement.getEnclosedElements()) {
                // Skip static fields; they don't belong in a builder.
                if (enclosed.getKind() == ElementKind.FIELD
                        && !enclosed.getModifiers().contains(Modifier.STATIC)) {
                    VariableElement field = (VariableElement) enclosed;
                    fields.add(field);
                    String type = field.asType().toString();
                    String name = field.getSimpleName().toString();
                    out.println("    private " + type + " " + name + ";");
                }
            }

            out.println();
            // Now, generate setter methods. If fluent is true, they return the builder.
            for (VariableElement field : fields) {
                String type = field.asType().toString();
                String name = field.getSimpleName().toString();
                String methodName = annotation.fluent() ? name : "set" + capitalize(name);

                out.println("    public " + builderName + " " + methodName + "(" + type + " " + name + ") {");
                out.println("        this." + name + " = " + name + ";");
                out.println("        return this;");
                out.println("    }");
                out.println();
            }

            // Finally, the build() method that constructs the actual object.
            out.println("    public " + className + " build() {");
            out.print("        return new " + className + "(");
            // Create the argument list: field1, field2, field3...
            String args = fields.stream()
                .map(f -> f.getSimpleName().toString())
                .collect(Collectors.joining(", "));
            out.print(args);
            out.println(");");
            out.println("    }");
            out.println("}");
        }
    }

    private String capitalize(String str) {
        if (str == null || str.isEmpty()) return str;
        return str.substring(0, 1).toUpperCase() + str.substring(1);
    }
}
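One practical detail these snippets skip: javac only runs processors it can discover. The standard mechanism is a ServiceLoader registration file inside the processor's jar. Here's a minimal sketch, assuming the processor's fully qualified name is com.yourcompany.BuilderMaker and a Maven-style module layout (both are my assumptions, not from any particular build):

```shell
# Register the processor for ServiceLoader discovery. The file name is fixed
# by the JDK; its content is the processor's fully qualified class name.
mkdir -p processor/src/main/resources/META-INF/services
printf 'com.yourcompany.BuilderMaker\n' \
  > processor/src/main/resources/META-INF/services/javax.annotation.processing.Processor
cat processor/src/main/resources/META-INF/services/javax.annotation.processing.Processor
```

Alternatively, pass the processor jar on javac's -processorpath, or let Google's AutoService generate this file for you. Either way, keep the processor in a separate module from the annotated code so it compiles first.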

To use it, I just annotate my simple data class.

@MakeBuilder
public class User {
    private final String id;
    private final String email;
    private final int age;

    public User(String id, String email, int age) {
        this.id = id;
        this.email = email;
        this.age = age;
    }
    // ... getters
}

During compilation, the processor runs and creates a file called UserBuilder.java in the same package. I never have to look at it, but if I do, it's exactly what I'd write by hand.

package com.example;

public final class UserBuilder {
    private String id;
    private String email;
    private int age;

    public UserBuilder id(String id) {
        this.id = id;
        return this;
    }

    public UserBuilder email(String email) {
        this.email = email;
        return this;
    }

    public UserBuilder age(int age) {
        this.age = age;
        return this;
    }

    public User build() {
        return new User(id, email, age);
    }
}

Now I can create users cleanly: new UserBuilder().id("123").email("test@example.com").age(30).build();. The beauty is, if I add a new field to User, the next compile automatically adds it to the builder. They can never get out of sync.

The second technique tackles another common pain point: mapping objects. Converting a UserEntity from your database layer to a UserDto for your API layer involves a lot of get and set calls. It's trivial but voluminous.

We can annotate a configuration class to describe what we want to map.

// This annotation tells the processor: "Generate a mapper between these two types."
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
public @interface CreateMapper {
    Class<?> from();
    Class<?> to();
    String mapperClass() default "";
}

Here's how I'd set it up. I create an empty class whose sole purpose is to hold the mapping configuration.

// This class has no logic. It's just instructions for the processor.
@CreateMapper(from = CustomerEntity.class, to = CustomerResponse.class)
public class CustomerMappings {}

The processor for this is more complex because it needs to introspect both the source and target classes, match fields by name and type, and generate the mapping logic. Here's a simplified version that shows the core idea.

// Inside the annotation processor's process method.
// A Class<?> value can't be read directly from an annotation at compile time:
// accessing it throws MirroredTypeException, which carries the TypeMirror we want.
// (Imports needed: java.util.function.Supplier, javax.lang.model.type.MirroredTypeException)
for (Element element : elementsWithCreateMapper) {
    CreateMapper ann = element.getAnnotation(CreateMapper.class);
    TypeElement sourceType = asTypeElement(ann::from);
    TypeElement targetType = asTypeElement(ann::to);

    String mapperName = ann.mapperClass().isEmpty() ?
        sourceType.getSimpleName() + "To" + targetType.getSimpleName() + "Mapper" : ann.mapperClass();

    generateMapperClass(sourceType, targetType, mapperName);
}

// Helper: provoke the MirroredTypeException and recover the TypeElement.
private TypeElement asTypeElement(Supplier<Class<?>> access) {
    try {
        access.get();
        throw new IllegalStateException("expected MirroredTypeException");
    } catch (MirroredTypeException mte) {
        return (TypeElement) processingEnv.getTypeUtils().asElement(mte.getTypeMirror());
    }
}

// The generation method would create code like this:
public class CustomerEntityToCustomerResponseMapper {
    public CustomerResponse convert(CustomerEntity source) {
        if (source == null) {
            return null;
        }
        CustomerResponse target = new CustomerResponse();
        // Fields with matching names are mapped automatically; composite or
        // renamed mappings like fullName would need explicit configuration.
        target.setCustomerId(source.getId());
        target.setFullName(source.getFirstName() + " " + source.getLastName());
        target.setEmailAddress(source.getEmail());
        // It can handle nested objects by generating calls to other generated mappers
        if (source.getAddress() != null) {
            target.setShippingAddress(new AddressMapper().convert(source.getAddress()));
        }
        // It can handle collections
        if (source.getOrders() != null) {
            target.setOrders(source.getOrders().stream()
                .map(order -> new OrderMapper().convert(order))
                .collect(Collectors.toList()));
        }
        return target;
    }
}

The generated mapper handles null checks, basic type conversions, and can even chain to other generated mappers for nested objects. All the copying code I would have written—and inevitably forgotten to update when a field changed—is now generated and always correct.
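To make the field-matching step less abstract, here's a tiny standalone sketch of the pairing logic. The method names are plain strings purely for illustration, and FieldMatcher is a name I've made up; a real processor would read these names from the source and target TypeElements.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Pair each source getter with a target setter that shares the property name.
// Unmatched properties are skipped, which is what lets the generated mapper
// stay correct when the two classes only partially overlap.
public class FieldMatcher {
    public static Map<String, String> match(List<String> getters, List<String> setters) {
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String getter : getters) {
            String property = getter.substring(3);   // "getEmail" -> "Email"
            String setter = "set" + property;
            if (setters.contains(setter)) {
                pairs.put(getter, setter);
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        System.out.println(match(
            List.of("getEmail", "getId"),
            List.of("setEmail", "setName")));   // prints {getEmail=setEmail}
    }
}
```

The real thing would also compare the getter's return type with the setter's parameter type before pairing them, but the name-matching core is this simple.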

The third technique is about validation. We often use annotations like @NotNull or @Size. Sometimes we need custom validation, like @ValidEmail. Usually, you write the annotation and then a separate validator class that implements the logic. Annotation processing can fuse these two steps.

I can define my custom validation annotation.

@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME) // Need RUNTIME for frameworks like Spring to read it
@Constraint(validatedBy = {}) // Empty: the generated validator is registered separately (see below).
public @interface StrongPassword {
    String message() default "Password must be at least 8 characters with a number and a special symbol";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

Notice the validatedBy = {} is empty. My annotation processor will generate the implementation class that goes there.

// Inside the processor, when I see a @StrongPassword annotation
String validatorClassName = "StrongPasswordValidator";
JavaFileObject file = processingEnv.getFiler()
    .createSourceFile("com.yourcompany." + validatorClassName);

try (PrintWriter out = new PrintWriter(file.openWriter())) {
    out.println("package com.yourcompany;");
    out.println();
    out.println("import jakarta.validation.ConstraintValidator;");
    out.println("import jakarta.validation.ConstraintValidatorContext;");
    out.println();
    out.println("public class " + validatorClassName +
                " implements ConstraintValidator<StrongPassword, String> {");
    out.println();
    out.println("    @Override");
    out.println("    public void initialize(StrongPassword constraint) {");
    out.println("        // Initialization logic if needed");
    out.println("    }");
    out.println();
    out.println("    @Override");
    out.println("    public boolean isValid(String value, ConstraintValidatorContext ctx) {");
    out.println("        if (value == null) {");
    out.println("            return true; // Let @NotNull handle null checks");
    out.println("        }");
    out.println("        // Generated validation logic");
    out.println("        boolean hasMinLength = value.length() >= 8;");
    out.println("        boolean hasDigit = value.matches(\".*\\\\d.*\");");
    out.println("        boolean hasSpecial = value.matches(\".*[!@#$%^&*].*\");");
    out.println("        return hasMinLength && hasDigit && hasSpecial;");
    out.println("    }");
    out.println("}");
}

There's a catch: an annotation processor can generate new files, but it cannot modify the existing @StrongPassword source to point validatedBy at the generated class. The common workaround is to register the validator externally: the processor emits an XML constraint mapping, referenced from META-INF/validation.xml, that binds @StrongPassword to the generated StrongPasswordValidator. The key outcome is the same either way: the validation logic is defined once, and the boilerplate validator class is created for me.
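The generated isValid body boils down to three checks. Pulled out as a plain static method with no Bean Validation dependency (PasswordCheck is a throwaway name for this sketch), the logic is easy to sanity-check in isolation:

```java
// The same rules the processor embeds in the generated validator:
// minimum length, at least one digit, at least one special symbol.
public class PasswordCheck {
    static boolean isValid(String value) {
        if (value == null) {
            return true; // let @NotNull handle nulls, as in the generated code
        }
        return value.length() >= 8
            && value.matches(".*\\d.*")
            && value.matches(".*[!@#$%^&*].*");
    }

    public static void main(String[] args) {
        System.out.println(isValid("short1!"));    // false: only 7 characters
        System.out.println(isValid("passw0rd!"));  // true
    }
}
```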

The fourth technique is a game-changer for application configuration. How many times have you misspelled a property key like app.datasource.url and only found out at runtime? We can use annotation processing to create type-safe configuration classes from simple property interfaces.

I define an annotation for a configuration spec.

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
public @interface ConfigProperties {
    String prefix();
}

Then, I write an interface that describes my configuration structure. It's just method signatures that mirror my properties.

@ConfigProperties(prefix = "app.redis")
public interface RedisConfig {
    // Method name 'host' binds to property 'app.redis.host'
    String host();
    // 'connectionTimeout' binds to 'app.redis.connection-timeout' (relaxed binding)
    Duration connectionTimeout();
    int poolSize();
    boolean sslEnabled();
}
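That "relaxed binding" comment hides a small but real transformation: camelCase method names map to kebab-case property keys under the prefix. A standalone sketch of the derivation (the class and method names here are mine, not a framework API):

```java
// Derive the property key a config interface method binds to:
// "connectionTimeout" under prefix "app.redis" -> "app.redis.connection-timeout".
public class KeyDeriver {
    static String propertyKey(String prefix, String methodName) {
        String kebab = methodName
            .replaceAll("([a-z])([A-Z])", "$1-$2") // insert a dash at each case boundary
            .toLowerCase();
        return prefix + "." + kebab;
    }

    public static void main(String[] args) {
        System.out.println(propertyKey("app.redis", "connectionTimeout"));
        // prints app.redis.connection-timeout
        System.out.println(propertyKey("app.redis", "poolSize"));
        // prints app.redis.pool-size
    }
}
```

This simple regex covers the common case; runs of capitals (URL, TTL) would need extra handling in a real processor.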

The annotation processor reads this interface, extracts the method names and return types, and generates a full-fledged Spring @ConfigurationProperties class or a similar configuration bean.

// Generated by the processor
@ConfigurationProperties(prefix = "app.redis")
public class RedisConfigProperties implements RedisConfig {
    private String host = "localhost";
    private Duration connectionTimeout = Duration.ofSeconds(5);
    private int poolSize = 10;
    private boolean sslEnabled = false;

    // The RedisConfig interface methods, so callers can depend on the interface.
    @Override public String host() { return host; }
    @Override public Duration connectionTimeout() { return connectionTimeout; }
    @Override public int poolSize() { return poolSize; }
    @Override public boolean sslEnabled() { return sslEnabled; }

    // Standard getters and setters for the binder, with Javadoc...
    public String getHost() { return host; }
    public void setHost(String host) { this.host = host; }

    public Duration getConnectionTimeout() { return connectionTimeout; }
    public void setConnectionTimeout(Duration timeout) {
        if (timeout.isNegative()) {
            throw new IllegalArgumentException("Timeout cannot be negative");
        }
        this.connectionTimeout = timeout;
    }

    // A convenient builder might also be generated
    public static Builder builder() { return new Builder(); }
    public static class Builder {
        private final RedisConfigProperties props = new RedisConfigProperties();
        public Builder host(String h) { props.setHost(h); return this; }
        public RedisConfigProperties build() { return props; }
    }
}

Now, in my application code, I inject RedisConfig (the interface), and I get compile-time checking. My IDE can autocomplete host() and poolSize(). If I rename a property in my application.yml, the interface and the generated class get out of sync, and the compiler tells me immediately. It turns configuration—a famously stringly-typed and runtime-error-prone part of the system—into a type-safe, refactoring-friendly component.

The fifth technique is about tests. Writing unit tests involves a lot of setup: creating mocks, building fixture data, writing assertions. While the logic of each test is unique, the scaffolding is often very similar. We can generate a base test class tailored to the class under test.

Imagine an annotation that analyzes a service class.

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
public @interface GenerateTestScaffold {
    String testClassName() default "";
    TestFramework framework() default TestFramework.JUNIT_JUPITER;
}

public enum TestFramework { JUNIT_JUPITER, TESTNG }

I'd annotate my service.

@GenerateTestScaffold
@Service
public class PaymentService {
    private final PaymentGateway gateway;
    private final AuditLogger logger;
    public PaymentService(PaymentGateway g, AuditLogger l) { /* ... */ }
    public Receipt process(PaymentRequest req) { /* ... */ }
    public List<Transaction> getHistory(String userId) { /* ... */ }
}

The processor would examine PaymentService. It would see it has two dependencies through its constructor. It would see its public methods. It would then generate a test class with that knowledge baked in.

// Generated test scaffold
@ExtendWith(MockitoExtension.class)
public class PaymentServiceTest {
    @Mock
    private PaymentGateway mockGateway;
    @Mock
    private AuditLogger mockLogger;
    @InjectMocks
    private PaymentService paymentService;

    @BeforeEach
    void setUp() {
        // Could include common mock resets here
    }

    // It generates a basic positive path test for each public method
    @Test
    void process_ShouldReturnReceipt_WhenPaymentIsSuccessful() {
        // Arrange
        PaymentRequest sampleRequest = PaymentRequest.builder()
            .amount(new BigDecimal("49.99"))
            .token("valid_token")
            .build();
        when(mockGateway.charge(any())).thenReturn(new GatewayResponse(Status.SUCCESS, "txn_123"));

        // Act
        Receipt result = paymentService.process(sampleRequest);

        // Assert
        assertNotNull(result);
        assertEquals("txn_123", result.getTransactionId());
        verify(mockLogger).logSuccess(any());
    }

    @Test
    void process_ShouldThrow_WhenGatewayFails() {
        PaymentRequest request = PaymentRequest.builder().build(); // TODO: populate failing data
        when(mockGateway.charge(any())).thenReturn(new GatewayResponse(Status.FAILED, null));

        assertThrows(PaymentFailedException.class, () -> paymentService.process(request));
    }

    // A skeleton test for the other method
    @Test
    void getHistory_ShouldReturnList() {
        // TODO: Developer fills in specifics
        when(mockGateway.fetchTransactions(anyString())).thenReturn(List.of());
        List<Transaction> history = paymentService.getHistory("user1");
        assertTrue(history.isEmpty());
    }
}

This generated scaffold doesn't write my tests for me. Instead, it writes all the ceremony: the field declarations, the Mockito annotations, the @BeforeEach method, and skeleton tests with basic mocking and assertions. I, as the developer, then fill in the specific scenario data and edge cases. It saves me from the repetitive typing and ensures my test class structure is consistent across the entire codebase.
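One small piece of that ceremony is naming the mock fields. Here's a sketch of one plausible derivation, prefix "mock" plus the last camel-case word of the dependency's type name; both the convention and the MockNamer class are my assumptions, since a generator could just as well reuse the constructor parameter names:

```java
// Derive an @Mock field name from a dependency's simple type name:
// "PaymentGateway" -> "mockGateway", "AuditLogger" -> "mockLogger".
public class MockNamer {
    static String fieldName(String typeSimpleName) {
        int i = typeSimpleName.length() - 1;
        while (i > 0 && !Character.isUpperCase(typeSimpleName.charAt(i))) {
            i--; // walk back to the start of the last capitalized word
        }
        return "mock" + typeSimpleName.substring(i);
    }

    public static void main(String[] args) {
        System.out.println(fieldName("PaymentGateway")); // prints mockGateway
        System.out.println(fieldName("AuditLogger"));    // prints mockLogger
    }
}
```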

These five techniques—generating builders, mappers, validators, type-safe configs, and test scaffolds—have fundamentally changed how I write Java. The initial investment to learn annotation processing pays off enormously. You move from being a scribe, copying code patterns endlessly, to being an architect, defining the patterns and letting the compiler do the copying. Your codebase becomes more consistent, less prone to trivial errors, and much easier to maintain. The logic that matters—your business rules, your algorithms—stands out clearly, no longer hidden in piles of boilerplate.

Start small. Try writing a processor for a simple builder. Once you see that generated UserBuilder.java file appear in your target/generated-sources folder, you'll be hooked. It's one of the most satisfying forms of automation in a developer's toolbox.
