Odumosu Matthew


Mastering the Senior Java/Spring Boot Engineer Interview

I've been on both sides of the interview table for over a decade. I've bombed interviews at companies I really wanted to join. I've also hired engineers who looked mediocre on paper but turned out to be the best technical decisions I ever made.

Here's what I've learned: Senior interviews aren't knowledge tests. They're thinking tests.

Anyone can memorize that HashMap has O(1) lookup. What interviewers actually want to know is whether you'll make good decisions when the documentation runs out and Stack Overflow doesn't have your exact problem.

This guide is different. I'm not going to list 50 questions with textbook answers. Instead, I'll show you how senior engineers think through problems, because that's what actually gets you hired.


The Uncomfortable Truth About Senior Interviews

Let me be blunt: if you have 10+ years of experience and you're still getting rejected, it's probably not your technical knowledge. It's one of these:

  1. You're answering questions like a junior - giving definitions instead of demonstrating judgment
  2. You're not showing trade-off thinking - every solution has costs, and you're not acknowledging them
  3. You're not telling stories - your experience is your differentiator, but you're not using it
  4. You're solving the wrong problem - you're answering what was asked, not what was meant

Let's fix all of these.


Part 1: Core Java - But Not The Way You Think

Every Java interview covers the basics. The difference is depth. Here's what separates a senior answer from a mid-level one.

The equals() and hashCode() Question

What they ask: "What happens if you override equals() but not hashCode()?"

Mid-level answer: "It breaks hash-based collections because equal objects might have different hash codes."

Senior answer:

"It violates the hashCode contract, which creates subtle bugs that are incredibly hard to track down in production.

Here's why this is dangerous: the bug won't show up in your unit tests. It'll show up three months later when someone adds your object to a HashSet in a completely different part of the codebase.

public class TradingAccount {
    private String accountId;
    private String owner;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        TradingAccount that = (TradingAccount) o;
        return Objects.equals(accountId, that.accountId);
    }

    // Intentionally missing hashCode — this is a bug
}

Now imagine this in a trading system:

Set<TradingAccount> processedAccounts = new HashSet<>();
TradingAccount account1 = loadFromDatabase(12345);
processedAccounts.add(account1);

// Later, different code path loads the same account
TradingAccount account2 = loadFromDatabase(12345);
if (!processedAccounts.contains(account2)) {
    // This executes! We process the same account twice.
    // In a trading system, this could mean duplicate orders.
    processAccount(account2);
}

The contains() check fails because HashSet first locates the bucket using hashCode(). Since we didn't override it, account1 and account2 fall back to Object's default identity-based hash codes, so they land in different buckets and equals() is never even called.

I've seen this exact bug cause duplicate transactions in a payment system. Took two weeks to find because it only happened under specific load conditions where the same entity was loaded through different code paths.

The fix isn't just adding hashCode() - it's establishing team conventions. We now require that any class overriding equals() must also have a test that verifies the hashCode contract."
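As an illustration of that convention, here's the kind of check we require — the fixed TradingAccount (now with a matching hashCode()) and its two-argument constructor are my own sketch of the class above, not code from any particular codebase:

```java
import java.util.Objects;

// Minimal sketch: equals() and hashCode() must be computed from the same fields
class TradingAccount {
    private final String accountId;
    private final String owner;

    TradingAccount(String accountId, String owner) {
        this.accountId = accountId;
        this.owner = owner;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        return Objects.equals(accountId, ((TradingAccount) o).accountId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(accountId);  // Same fields as equals()
    }
}
```

With the hashCode() override in place, two accounts loaded through different code paths hash to the same bucket, and the HashSet scenario above finds the duplicate on the second lookup.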


See the difference? The senior answer:

  • Explains the mechanism (hash buckets)
  • Describes real consequences (duplicate transactions)
  • Shares a war story (the two-week debugging session)
  • Proposes prevention (team conventions and tests)

This is what interviewers want. They're not testing if you know Java. They're testing if you've been burned by Java and learned from it.


Memory Model and Concurrency

What they ask: "Explain the Java Memory Model and when you'd use volatile."

This question is a trap. Most candidates launch into a textbook explanation of happens-before relationships and CPU caches. That's fine, but it's not impressive.

Senior approach:

"The Java Memory Model defines how threads interact through memory, but honestly, the practical implications matter more than the theory.

Here's when I've actually needed volatile in production:

1. Status flags for graceful shutdown

public class OrderProcessor implements Runnable {
    private final BlockingQueue<Order> queue;
    private volatile boolean running = true;

    public OrderProcessor(BlockingQueue<Order> queue) {
        this.queue = queue;
    }

    public void shutdown() {
        running = false;  // Visible to the processing thread immediately
    }

    @Override
    public void run() {
        while (running) {
            try {
                Order order = queue.poll(100, TimeUnit.MILLISECONDS);
                if (order != null) {
                    process(order);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // Restore the interrupt flag
                break;
            }
        }
        // Cleanup resources
    }
}

Without volatile, the processing thread might cache running = true and never see the shutdown signal. I've seen services that couldn't shut down cleanly because of this - they'd hang during deployments.

2. Double-checked locking (though I avoid it now)

public class ExpensiveResourceHolder {
    private volatile ExpensiveResource instance;

    public ExpensiveResource getInstance() {
        if (instance == null) {
            synchronized (this) {
                if (instance == null) {
                    instance = new ExpensiveResource();
                }
            }
        }
        return instance;
    }
}

This pattern is technically correct with volatile, but I've stopped using it. It's too easy to get wrong, and the performance benefit over simple synchronization is negligible in most applications. I now use either:

  • Lazy holders for singletons
  • ConcurrentHashMap.computeIfAbsent() for cached values
  • Or just accept the synchronization cost
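The lazy-holder idiom from that first bullet is worth sketching, because it gets lazy, thread-safe initialization with no synchronization code at all — the JVM's class-initialization guarantees do the locking (ExpensiveResource here is a stand-in name):

```java
public class ExpensiveResourceHolder {

    // Stand-in for whatever is costly to construct
    static class ExpensiveResource {
    }

    // The nested class isn't loaded until getInstance() first runs;
    // the JVM guarantees its static initializer executes exactly once, thread-safely
    private static class Holder {
        static final ExpensiveResource INSTANCE = new ExpensiveResource();
    }

    public static ExpensiveResource getInstance() {
        return Holder.INSTANCE;
    }
}
```

Every caller gets the same instance, and there's no volatile or double-check to get wrong.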

3. Publishing immutable objects

public class ConfigurationHolder {
    private volatile ImmutableConfiguration config;

    public void updateConfiguration(ImmutableConfiguration newConfig) {
        // Single atomic write, immediately visible to all readers
        this.config = newConfig;
    }

    public ImmutableConfiguration getConfiguration() {
        return config;  // No synchronization needed for reads
    }
}

This is probably my most common use case — when you have a reference that's written rarely but read constantly.

What I'd avoid:

Using volatile for compound operations:

private volatile int counter = 0;

public void increment() {
    counter++;  // NOT THREAD-SAFE! Read-modify-write is not atomic
}

This is a classic mistake. volatile guarantees visibility, not atomicity. You need AtomicInteger or synchronization here.
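A minimal sketch of the fix with AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private final AtomicInteger counter = new AtomicInteger();

    public void increment() {
        counter.incrementAndGet();  // Atomic read-modify-write (a CAS loop under the hood)
    }

    public int get() {
        return counter.get();
    }
}
```

Unlike the volatile version, concurrent increments are never lost, and you still avoid a synchronized block.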

The broader point is: most concurrency problems are better solved by avoiding shared mutable state entirely. Immutable objects, message passing, and higher-level constructs like CompletableFuture are almost always better choices than low-level primitives like volatile."


Part 2: Spring Boot — Beyond the Annotations

Every Java developer knows @Autowired. Senior developers know when not to use it.

Dependency Injection Philosophy

What they ask: "Explain dependency injection in Spring."

What they mean: "Do you understand why we use DI, or do you just follow patterns blindly?"

Senior answer:

"Dependency injection is fundamentally about inverting control of object creation. Instead of a class creating its dependencies, they're provided from outside. This seems simple, but the implications are profound.

Here's what DI actually buys us:

1. Testability without magic

// Without DI — hard to test
public class PaymentProcessor {
    private final PaymentGateway gateway = new StripeGateway();

    public PaymentResult process(Payment payment) {
        return gateway.charge(payment);  // How do you test this without hitting Stripe?
    }
}

// With DI — trivially testable
public class PaymentProcessor {
    private final PaymentGateway gateway;

    public PaymentProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public PaymentResult process(Payment payment) {
        return gateway.charge(payment);
    }
}

// Test
@Test
void shouldHandleDeclinedCard() {
    PaymentGateway mockGateway = mock(PaymentGateway.class);
    when(mockGateway.charge(any())).thenReturn(PaymentResult.declined("Insufficient funds"));

    PaymentProcessor processor = new PaymentProcessor(mockGateway);
    PaymentResult result = processor.process(new Payment(100));

    assertThat(result.isDeclined()).isTrue();
}

2. Configuration flexibility

@Configuration
public class PaymentConfig {

    @Bean
    @Profile("production")
    public PaymentGateway productionGateway(@Value("${stripe.api-key}") String stripeApiKey) {
        return new StripeGateway(stripeApiKey);
    }

    @Bean
    @Profile("sandbox")
    public PaymentGateway sandboxGateway() {
        return new StripeSandboxGateway();
    }

    @Bean
    @Profile("test")
    public PaymentGateway testGateway() {
        return new InMemoryPaymentGateway();  // No external calls
    }
}

Same code, different behavior based on environment. No if-statements scattered through your business logic.

3. Lifecycle management

Spring manages object lifecycles, which matters more than people realize:

@Slf4j
@Component
public class DatabaseConnectionPool {
    private final HikariDataSource dataSource;

    public DatabaseConnectionPool(HikariDataSource dataSource) {
        this.dataSource = dataSource;  // Injected by Spring
    }

    @PostConstruct
    public void initialize() throws SQLException {
        // Warm up connections
        dataSource.getConnection().close();
        log.info("Connection pool warmed up");
    }

    @PreDestroy
    public void shutdown() {
        // Graceful shutdown — finish in-flight queries
        dataSource.close();
        log.info("Connection pool shut down gracefully");
    }
}

What I've learned to avoid:

Field injection:

// Don't do this
@Service
public class OrderService {
    @Autowired
    private PaymentService paymentService;  // Hidden dependency
    @Autowired
    private InventoryService inventoryService;  // Another hidden dependency
}

Problems:

  • Dependencies are invisible in the constructor
  • Can't create instances without Spring (harder to test)
  • No compile-time safety if dependency is missing
  • Encourages too many dependencies (code smell for SRP violation)

Constructor injection is always better:

@Service
public class OrderService {
    private final PaymentService paymentService;
    private final InventoryService inventoryService;

    // If this constructor has 8 parameters, you know something's wrong
    public OrderService(PaymentService paymentService, InventoryService inventoryService) {
        this.paymentService = paymentService;
        this.inventoryService = inventoryService;
    }
}

With constructor injection, a class with too many dependencies becomes obvious — the constructor is unreadable. That's a feature, not a bug. It's a signal that the class has too many responsibilities."


The N+1 Problem — A Production Story

What they ask: "How do you solve N+1 queries in JPA?"

What they mean: "Have you actually debugged performance issues in production?"

Senior answer:

"The N+1 problem has cost me more production incidents than any other JPA issue. Let me tell you about one.

We had an order history page that worked fine in development — maybe 50ms response time. In production, with real data, it took 12 seconds. Users were furious.

The code looked innocent:

@GetMapping("/orders")
public List<OrderDto> getOrders(@RequestParam Long customerId) {
    List<Order> orders = orderRepository.findByCustomerId(customerId);
    return orders.stream()
        .map(this::toDto)
        .collect(toList());
}

private OrderDto toDto(Order order) {
    return new OrderDto(
        order.getId(),
        order.getTotal(),
        order.getItems().size(),  // BOOM — lazy load
        order.getCustomer().getName()  // BOOM — another lazy load
    );
}

One customer had 500 orders. That's:

  • 1 query to get orders
  • 500 queries to get items (one per order)
  • 1 more query to initialize the lazy customer proxy (Hibernate's session cache stops it repeating for the same customer, but it's still an extra round trip)

502 queries for one page load.

Solution 1: JOIN FETCH

// DISTINCT stops older Hibernate versions returning one Order reference per item row
@Query("SELECT DISTINCT o FROM Order o " +
       "JOIN FETCH o.items " +
       "JOIN FETCH o.customer " +
       "WHERE o.customer.id = :customerId")
List<Order> findByCustomerIdWithDetails(@Param("customerId") Long customerId);

This generates a single query with JOINs. But there's a catch — if you have multiple collections, you get a cartesian product:

// DON'T DO THIS
@Query("SELECT o FROM Order o " +
       "JOIN FETCH o.items " +
       "JOIN FETCH o.payments " +  // Two collections = cartesian product
       "WHERE o.customer.id = :customerId")

If an order has 10 items and 3 payments, you get 30 rows per order. Hibernate deduplicates the entities, but you've transferred far more data than needed — and if both collections are mapped as List, Hibernate refuses to run it at all, failing with MultipleBagFetchException.

Solution 2: @EntityGraph

@EntityGraph(attributePaths = {"items", "customer"})
List<Order> findByCustomerId(Long customerId);

Cleaner syntax, same result. I prefer this for simple cases.

Solution 3: Batch fetching

@Entity
public class Order {
    @OneToMany(mappedBy = "order")
    @BatchSize(size = 25)  // Load 25 orders' items in one query
    private List<OrderItem> items;
}

This changes N+1 into N/25+1. For 500 orders, that's 21 queries instead of 501. Not perfect, but much better, and it doesn't require changing your repository methods.

Solution 4: DTO projections (my preferred approach for read-heavy endpoints)

public interface OrderSummary {
    Long getId();
    BigDecimal getTotal();
    Integer getItemCount();
    String getCustomerName();
}

@Query("SELECT o.id as id, o.total as total, SIZE(o.items) as itemCount, c.name as customerName " +
       "FROM Order o JOIN o.customer c " +
       "WHERE c.id = :customerId")
List<OrderSummary> findOrderSummaries(@Param("customerId") Long customerId);

This is the most aggressive option. One query, exactly the data you need, no entity mapping overhead. For read-heavy endpoints where you're just displaying data, this is usually the best choice.

How I prevent N+1 now:

  1. Enable SQL logging in development:
spring:
  jpa:
    show-sql: true
    properties:
      hibernate:
        format_sql: true
  2. Use a query counter in tests:
@Test
void shouldLoadOrdersInSingleQuery() {
    // Hibernate's Statistics API serves as the query counter
    // (requires hibernate.generate_statistics=true in the test profile)
    Statistics stats = entityManagerFactory.unwrap(SessionFactory.class).getStatistics();
    stats.clear();

    orderRepository.findByCustomerIdWithDetails(customerId);

    assertThat(stats.getPrepareStatementCount()).isLessThanOrEqualTo(1);
}
  3. Monitor query count in production — we alert if any endpoint exceeds 10 queries."

Part 3: System Design Thinking

Senior interviews always include system design. Here's how to approach them.

The Payment Service Question

What they ask: "Design a payment service for an e-commerce platform."

How seniors approach it:

"Before I start designing, I need to understand the constraints. Let me ask a few questions:

  • What's the expected transaction volume? Tens per second or thousands?
  • Do we need to support multiple payment providers or just one?
  • What's the consistency requirement? Can we ever lose a transaction?
  • What's the latency budget for a payment request?

[Interviewer provides context: 100 TPS, multiple providers, zero data loss, 500ms budget]

Okay, let me think through this systematically.

The core challenge with payments is reliability. Users will retry if something looks stuck. Payment providers might timeout but still process the charge. Network partitions happen. We need to handle all of this without charging someone twice or losing their order.

Architecture:

┌─────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Client    │────▶│  Payment API    │────▶│  Payment Worker │
└─────────────┘     └─────────────────┘     └─────────────────┘
                            │                        │
                            ▼                        ▼
                    ┌─────────────┐          ┌─────────────┐
                    │  Database   │          │   Stripe    │
                    │  (source    │          │   Adyen     │
                    │  of truth)  │          │   etc.      │
                    └─────────────┘          └─────────────┘

Key design decisions:

1. Idempotency is non-negotiable

Every payment request must include a client-generated idempotency key:

@PostMapping("/payments")
public PaymentResponse createPayment(
        @RequestHeader("Idempotency-Key") String idempotencyKey,
        @RequestBody PaymentRequest request) {

    // Check if we've seen this request before
    Optional<Payment> existing = paymentRepository.findByIdempotencyKey(idempotencyKey);
    if (existing.isPresent()) {
        return toResponse(existing.get());  // Return cached result
    }

    // Process new payment
    Payment payment = processPayment(request, idempotencyKey);
    return toResponse(payment);
}

This prevents duplicate charges when clients retry. One detail worth calling out: the lookup alone is racy under concurrent retries, so the idempotency key column also needs a unique constraint — the database then rejects the second insert, and we return the first result.

2. Two-phase state machine

Payments go through explicit states:

public enum PaymentState {
    PENDING,      // Created, not yet sent to provider
    PROCESSING,   // Sent to provider, awaiting response
    COMPLETED,    // Successfully charged
    FAILED,       // Provider declined
    REFUNDED      // Money returned
}
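One refinement I'd mention (my own addition, and the transition table below is an illustrative policy): encode the legal transitions in the enum itself, so a COMPLETED payment can never silently slide back to PENDING:

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public enum PaymentState {
    PENDING, PROCESSING, COMPLETED, FAILED, REFUNDED;

    // Legal transitions for the states above (illustrative policy)
    private static final Map<PaymentState, Set<PaymentState>> LEGAL = Map.of(
        PENDING,    EnumSet.of(PROCESSING),
        PROCESSING, EnumSet.of(COMPLETED, FAILED),
        COMPLETED,  EnumSet.of(REFUNDED),
        FAILED,     EnumSet.noneOf(PaymentState.class),
        REFUNDED,   EnumSet.noneOf(PaymentState.class)
    );

    public boolean canTransitionTo(PaymentState next) {
        return LEGAL.get(this).contains(next);
    }
}
```

A setState that first checks canTransitionTo turns an impossible state change into a loud failure instead of silent data corruption.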

The critical insight: we persist state before calling the payment provider, and update it after we get a response. This way, if we crash mid-processing, we can recover:

@Transactional
public Payment initiatePayment(PaymentRequest request, String idempotencyKey) {
    // Step 1: Create payment record in PENDING state
    Payment payment = Payment.builder()
        .idempotencyKey(idempotencyKey)
        .amount(request.getAmount())
        .state(PaymentState.PENDING)
        .build();
    paymentRepository.save(payment);

    // Step 2: Transition to PROCESSING before calling provider
    payment.setState(PaymentState.PROCESSING);
    paymentRepository.save(payment);

    return payment;
}

// Separate method, possibly async
public void executePayment(Payment payment) {
    try {
        ProviderResponse response = paymentProvider.charge(payment);
        payment.setState(PaymentState.COMPLETED);
        payment.setProviderTransactionId(response.getTransactionId());
    } catch (PaymentDeclinedException e) {
        payment.setState(PaymentState.FAILED);
        payment.setFailureReason(e.getMessage());
    }
    paymentRepository.save(payment);
}

3. Recovery process for stuck payments

A background job finds payments stuck in PROCESSING:

@Scheduled(fixedDelay = 60000)
public void recoverStuckPayments() {
    Instant cutoff = Instant.now().minus(5, ChronoUnit.MINUTES);
    List<Payment> stuck = paymentRepository.findByStateAndCreatedBefore(
        PaymentState.PROCESSING, cutoff);

    for (Payment payment : stuck) {
        // Check with provider if payment actually went through
        ProviderStatus status = paymentProvider.checkStatus(payment.getIdempotencyKey());

        switch (status) {
            case COMPLETED -> payment.setState(PaymentState.COMPLETED);
            case FAILED -> payment.setState(PaymentState.FAILED);
            case NOT_FOUND -> {
                // Provider never received it — safe to retry or fail
                payment.setState(PaymentState.FAILED);
                payment.setFailureReason("Provider timeout — please retry");
            }
        }
        paymentRepository.save(payment);
    }
}

4. Provider abstraction for multi-provider support

public interface PaymentProvider {
    ProviderResponse charge(PaymentDetails details);
    ProviderStatus checkStatus(String idempotencyKey);
    RefundResponse refund(String transactionId, BigDecimal amount);
}

@Service
@Primary
public class RoutingPaymentProvider implements PaymentProvider {
    private final Map<String, PaymentProvider> providers;

    @Override
    public ProviderResponse charge(PaymentDetails details) {
        PaymentProvider provider = selectProvider(details);
        return provider.charge(details);
    }

    private PaymentProvider selectProvider(PaymentDetails details) {
        // Route based on card type, amount, merchant category, etc.
        if (details.getAmount().compareTo(new BigDecimal("10000")) > 0) {
            return providers.get("stripe");  // Lower fees for large transactions
        }
        return providers.get("adyen");  // Better for small transactions
    }
    // checkStatus and refund route through selectProvider the same way (omitted for brevity)
}

What I'd monitor:

  • Payment success rate (alert if drops below 95%)
  • Provider latency (p99 should be under 2 seconds)
  • Stuck payments count (alert if any)
  • Idempotency key collision rate (should be near zero)
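As a small sketch of that first signal (a homegrown stand-in — in practice this would be a Micrometer counter feeding the alerting system; the 95% threshold is the policy from the bullet above):

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical tracker behind the "alert below 95%" rule
public class PaymentSuccessRate {
    private final LongAdder completed = new LongAdder();
    private final LongAdder failed = new LongAdder();

    public void record(boolean success) {
        (success ? completed : failed).increment();
    }

    public double rate() {
        long total = completed.sum() + failed.sum();
        return total == 0 ? 1.0 : (double) completed.sum() / total;
    }

    public boolean shouldAlert() {
        return rate() < 0.95;  // Alert threshold from our monitoring policy
    }
}
```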

Trade-offs I'm making:

  • Complexity for reliability — this is more complex than a simple REST call to Stripe
  • Eventual consistency — the client might get 'processing' and need to poll
  • Storage overhead — we're keeping full payment history"

Part 4: Behavioral Questions That Actually Matter

Senior roles require leadership and judgment. Here's how to demonstrate both.

"Tell me about a technical decision you regret"

Bad answer: "I can't think of any" (implies you don't reflect on your work)

Good answer:

"Two years ago, I pushed hard for us to adopt microservices. We were a team of six, and our monolith was getting messy. I'd read all the blog posts about how microservices would solve our problems.

We spent four months splitting the monolith into eight services. And honestly? It made everything worse.

Deployments went from 10 minutes to 2 hours because we had to coordinate across services. Debugging a request meant checking logs in five different places. We didn't have the observability infrastructure to support it. And the 'messy monolith' problems? They were still there, just distributed across services now.

What I learned: microservices are an organizational scaling solution, not a technical one. They make sense when you have multiple teams that need to deploy independently. For a team of six working on one product, a well-structured modular monolith is almost always better.

We eventually merged three of those services back. It was painful to admit the mistake, but the team's velocity doubled after we simplified."


"How do you handle disagreements with other engineers?"

Senior approach:

"I had a significant disagreement last year about our caching strategy. A colleague wanted to cache aggressively at the application layer using Redis. I thought we should rely more on database query optimization and HTTP caching.

Here's how I approached it:

First, I made sure I understood their position. Not just what they wanted, but why. Turned out they'd been burned by a slow database at a previous company, so caching was their instinctive solution.

Then I focused the discussion on data, not opinions. I proposed we measure our actual bottlenecks. We profiled a few endpoints and found that 70% of our latency was network calls to external services, not database queries.

We found a compromise based on evidence. We implemented Redis caching for external API responses (where it genuinely helped) but invested in query optimization for our own database. Both of us got part of what we wanted, and the solution was better than either original proposal.

The relationship actually improved. Because I'd taken their concerns seriously and proposed measuring instead of arguing, they trusted my judgment more afterward. We've collaborated well since then.

What I've learned is that technical disagreements are almost never purely technical. There's usually context — past experiences, assumptions, priorities — that shapes someone's position. Understanding that context is often more productive than debating the technical merits."


Final Advice

After conducting hundreds of senior interviews, here's what I look for:

  1. Depth over breadth — I'd rather you know one thing deeply than ten things superficially
  2. Opinions with reasons — "I prefer X because Y" is always better than "I use whatever works"
  3. Humility about mistakes — The best engineers have strong opinions loosely held
  4. Clear communication — If you can't explain it simply, you don't understand it well enough
  5. Curiosity — Do you ask good questions, or just answer what's asked?

The best interview advice I ever got: treat the interview as a conversation between potential colleagues, not an exam. Ask questions. Challenge assumptions. Share your real opinions.

Good luck.


I've also written guides for C# interviews if you work across both ecosystems.

What's the hardest interview question you've faced? Drop it in the comments — I'll add my take.
