I built Curve, an open-source Spring Boot library that turns 30+ lines of Kafka event-publishing code into a single @PublishEvent annotation. It's production-ready, with PII protection, a Dead Letter Queue (DLQ), the transactional outbox pattern, and AWS KMS integration.
- 🔗 https://github.com/closeup1202/curve
- 📦 https://central.sonatype.com/artifact/io.github.closeup1202/curve
- 📚 https://closeup1202.github.io/curve/
The Problem: Too Much Boilerplate
In microservices, publishing events to Kafka is essential but repetitive. Here's what typical event publishing looks like:
```java
@Service
public class UserService {

    @Autowired private KafkaTemplate<String, Object> kafka;
    @Autowired private ObjectMapper objectMapper;
    @Autowired private UserRepository userRepository;

    public User createUser(UserRequest request) {
        User user = userRepository.save(new User(request));
        EventEnvelope event = null;
        try {
            // Manual event creation
            event = EventEnvelope.builder()
                .eventId(UUID.randomUUID().toString())
                .eventType("USER_CREATED")
                .occurredAt(Instant.now())
                .publishedAt(Instant.now())
                .metadata(/* extract actor, trace, source... */)
                .payload(/* map to DTO... */)
                .build();

            // Manual PII masking
            String json = maskPii(objectMapper.writeValueAsString(event));

            // Manual Kafka send with retry
            kafka.send("user-events", json).get(30, TimeUnit.SECONDS);
        } catch (Exception e) {
            log.error("Failed to publish event", e);
            sendToDlq(event);
        }
        return user;
    }
}
```
30+ lines of boilerplate. And you need to repeat this for every event type.
The Solution: Just Add One Annotation
With Curve, the same logic becomes:
```java
@Service
public class UserService {

    @PublishEvent(eventType = "USER_CREATED")
    public User createUser(UserRequest request) {
        return userRepository.save(new User(request));
    }
}
```
That's it. Everything else is handled automatically:
- ✅ Event ID generation (Snowflake algorithm)
- ✅ Metadata extraction (actor, trace, source)
- ✅ PII masking/encryption
- ✅ Kafka publishing with retry
- ✅ DLQ on failure
- ✅ Metrics collection
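The first item in that list deserves a sketch. Snowflake-style IDs pack a timestamp, a node ID, and a per-millisecond sequence into one sortable long. The bit widths, epoch, and class below are illustrative assumptions, not Curve's actual implementation:

```java
// Hypothetical sketch of a Snowflake-style ID generator:
// 41-bit timestamp | 10-bit node | 12-bit sequence.
// Curve's real implementation may use different widths or a custom epoch.
public class SnowflakeSketch {
    private static final long EPOCH = 1700000000000L; // assumed custom epoch
    private final long nodeId;        // 10 bits: 0..1023
    private long lastTimestamp = -1L;
    private long sequence = 0L;       // 12 bits: 0..4095

    public SnowflakeSketch(long nodeId) {
        this.nodeId = nodeId & 0x3FF;
    }

    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF;
            if (sequence == 0) {               // sequence exhausted this millisecond
                while (now <= lastTimestamp) { // spin until the clock moves on
                    now = System.currentTimeMillis();
                }
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = now;
        return ((now - EPOCH) << 22) | (nodeId << 12) | sequence;
    }
}
```

The payoff over `UUID.randomUUID()` is that IDs are time-ordered, which keeps them friendly to log correlation and database indexes.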
Key Features That Make It Production-Ready
1. Automatic PII Protection
Sensitive data is automatically protected with @PiiField:
```java
public class UserEventPayload implements DomainEventPayload {

    @PiiField(type = PiiType.EMAIL, strategy = PiiStrategy.MASK)
    private String email;   // "user@example.com" → "user@***.com"

    @PiiField(type = PiiType.PHONE, strategy = PiiStrategy.ENCRYPT)
    private String phone;   // AES-256-GCM encrypted

    @PiiField(type = PiiType.ID_NO, strategy = PiiStrategy.HASH)
    private String id;      // HMAC-SHA256 hashed
}
```
Supports AWS KMS and HashiCorp Vault for key management with envelope encryption.
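For readers unfamiliar with envelope encryption: each field is encrypted with a fresh data key, and only that data key is sent to KMS/Vault to be wrapped. Here is a minimal sketch of the ENCRYPT strategy's core step using the JDK's own crypto; the class name is mine, and the KMS wrap call is only marked with a comment since it needs a live client:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative sketch, not Curve's actual code.
public class PiiEncryptSketch {
    public static String encryptField(String plaintext) throws Exception {
        // 1. Generate a fresh AES-256 data key. In envelope encryption this
        //    key is then wrapped by KMS and stored next to the ciphertext.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey dataKey = kg.generateKey();

        // 2. Encrypt the field with AES-256-GCM (96-bit IV, 128-bit tag).
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // 3. kmsClient.encrypt(dataKey) would go here; omitted in this sketch.
        return Base64.getEncoder().encodeToString(iv)
                + ":" + Base64.getEncoder().encodeToString(ct);
    }
}
```

The point of the indirection is that KMS only ever sees the small data key, never the payload, so bulk data stays local while key management stays centralized.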
2. 3-Tier Failure Recovery
Events never get lost, even when Kafka is completely down:
Main Topic → DLQ → Local File Backup → S3 Backup (optional)
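Conceptually this is a chain of sinks tried in order. The interface and class names below are hypothetical stand-ins for Curve's real abstractions, but they show the shape of the idea:

```java
import java.util.List;

// Hypothetical sketch of tiered fallback; Curve's actual interfaces differ.
interface EventSink {
    boolean trySend(String event); // false or an exception means "this tier failed"
}

class FallbackPublisher {
    private final List<EventSink> tiers; // e.g. main topic, DLQ, local file, S3

    FallbackPublisher(List<EventSink> tiers) { this.tiers = tiers; }

    void publish(String event) {
        for (EventSink sink : tiers) {
            try {
                if (sink.trySend(event)) return; // delivered at this tier
            } catch (Exception ignored) {
                // fall through to the next tier
            }
        }
        throw new IllegalStateException("all tiers failed for: " + event);
    }
}
```

Because the last tiers are local disk and S3 rather than the broker, an event can still land somewhere durable during a full Kafka outage and be replayed later.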
3. Transactional Outbox Pattern
Guarantees atomicity between database transactions and event publishing:
```java
@PublishEvent(
    eventType = "ORDER_CREATED",
    outbox = true,
    aggregateType = "Order",
    aggregateId = "#result.orderId"
)
@Transactional
public Order createOrder(OrderRequest req) {
    return orderRepo.save(new Order(req));
}
```
Uses exponential backoff and SKIP LOCKED to prevent duplicate processing in multi-instance environments.
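Those two mechanisms are worth spelling out. A `FOR UPDATE SKIP LOCKED` query lets concurrent poller instances claim disjoint rows, and exponential backoff spaces out retries for failing events. The table name, SQL, and constants below are illustrative, not Curve's actual schema:

```java
// Sketch of the polling query and retry delay; names are assumptions.
public class OutboxPollerSketch {
    // Rows locked by one instance are silently skipped by the others,
    // so no outbox row is ever claimed twice.
    static final String CLAIM_SQL =
        "SELECT id, payload FROM outbox_event " +
        "WHERE status = 'PENDING' " +
        "ORDER BY created_at " +
        "LIMIT 100 " +
        "FOR UPDATE SKIP LOCKED";

    // Exponential backoff with a cap: 1s, 2s, 4s, ... up to 60s.
    static long backoffMillis(int attempt) {
        long base = 1000L << Math.min(attempt, 6);
        return Math.min(base, 60_000L);
    }
}
```

`SKIP LOCKED` is supported by PostgreSQL, MySQL 8+, and Oracle, which is what makes this safe in multi-instance deployments without a separate leader-election step.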
4. Built-in Observability
Health check and metrics out of the box:
```bash
# Health check
curl http://localhost:8080/actuator/health/curve
```

```json
{
  "status": "UP",
  "details": {
    "kafkaProducerInitialized": true,
    "clusterId": "lkc-abc123",
    "nodeCount": 3,
    "topic": "event.audit.v1",
    "dlqTopic": "event.audit.dlq.v1"
  }
}
```

```bash
# Custom metrics
curl http://localhost:8080/actuator/curve-metrics
```

```json
{
  "summary": {
    "totalEventsPublished": 1523,
    "successRate": "99.80%"
  }
}
```
Architecture: Hexagonal Design
Curve follows Hexagonal Architecture (Ports & Adapters) to keep the core domain framework-independent:
```
curve/
├── core/                       # Pure domain (no Spring/Kafka)
│   ├── envelope/               # EventEnvelope, Metadata
│   ├── port/                   # EventProducer interface
│   └── validation/             # Domain validators
│
├── spring/                     # Spring adapter
│   ├── aop/                    # @PublishEvent aspect
│   └── context/                # Context providers
│
├── kafka/                      # Kafka adapter
│   └── producer/               # KafkaEventProducer
│
├── kms/                        # AWS KMS / Vault adapter
└── spring-boot-autoconfigure/  # Auto-configuration
```
This makes it testable (no framework needed) and extensible (swap Kafka for RabbitMQ, etc.).
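The testability claim follows directly from the port. The snippet below is my own illustration of the pattern (the `EventProducer` method signature is an assumption; only the interface name appears in the tree above): core logic depends on the interface, and tests plug in an in-memory adapter.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the ports-and-adapters split; signatures are assumed.
interface EventProducer {
    void send(String topic, String payload);
}

// A test double that lives entirely in memory: core logic can be unit
// tested against this adapter with no Spring context and no Kafka broker.
class InMemoryEventProducer implements EventProducer {
    final List<String> sent = new ArrayList<>();

    @Override
    public void send(String topic, String payload) {
        sent.add(topic + ":" + payload);
    }
}
```

Swapping Kafka for RabbitMQ then means writing one new adapter class, not touching the core.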
Performance
Benchmarked with JMH on AWS EC2 t3.medium (Kafka 3.8, 3-node cluster):
- Sync mode: ~500 TPS
- Async mode: ~10,000+ TPS
- With MDC Context Propagation: Trace IDs preserved even in async threads
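The MDC point matters because SLF4J's MDC is thread-local: hand work to an async executor and the trace ID is gone. The usual fix is to capture the caller's context and restore it inside the task. Here is the idea in pure Java, with a plain `ThreadLocal` standing in for the MDC map (real code would copy `MDC.getCopyOfContextMap()` instead):

```java
// Sketch of context propagation across threads; a ThreadLocal stands in
// for the SLF4J MDC so the example needs no logging dependency.
public class ContextPropagationSketch {
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    static Runnable withContext(Runnable task) {
        String captured = TRACE_ID.get();      // capture on the caller thread
        return () -> {
            TRACE_ID.set(captured);            // restore on the worker thread
            try {
                task.run();
            } finally {
                TRACE_ID.remove();             // avoid leaking into pooled threads
            }
        };
    }
}
```

In Spring this wrapping is typically done once via a `TaskDecorator` on the executor, so every async event publication carries the caller's trace ID automatically.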
Quick Start
1. Add Dependency
```groovy
dependencies {
    implementation 'io.github.closeup1202:curve:0.1.2'
}
```
2. Configure
```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092

curve:
  enabled: true
  kafka:
    topic: event.audit.v1
    dlq-topic: event.audit.dlq.v1
```
3. Use
```java
@PublishEvent(eventType = "ORDER_CREATED", severity = EventSeverity.INFO)
public Order createOrder(OrderRequest request) {
    return orderRepository.save(new Order(request));
}
```
Done!
Lessons Learned
1. Hexagonal Architecture Was Worth It
Initially, I considered coupling directly to Spring. But isolating the core domain made:
- Testing 10x easier (no Spring context needed)
- Evolution safer (can change frameworks without breaking core logic)
- Reusability possible (core can be used in non-Spring projects)
2. Security Defaults Matter
I started with SpEL's StandardEvaluationContext but switched to SimpleEvaluationContext, which blocks dangerous operations such as constructor calls and type references. A small change with a large security impact.
3. Documentation Is Critical for Adoption
I spent 30% of development time on docs:
- 30+ markdown files (Getting Started, Operations, Troubleshooting)
- English + Korean versions
- MkDocs Material for beautiful GitHub Pages
Result: Users can onboard in < 5 minutes.
4. Maven Central Publishing Is Hard
Getting published required:
- GPG signing
- Nexus Sonatype account
- Proper POM metadata
- Source/Javadoc JARs
But it's essential for credibility. No one trusts a library not on Maven Central.
Comparison with Alternatives
| Feature | Spring Events | Spring Cloud Stream | Curve |
|---|---|---|---|
| Kafka Integration | ❌ | ✅ | ✅ |
| Declarative Usage | ✅ | ⏳ | ✅ |
| Standardized Schema | ❌ | ❌ | ✅ |
| PII Protection | ❌ | ❌ | ✅ |
| AWS KMS Integration | ❌ | ❌ | ✅ |
| DLQ + Local Backup | ❌ | ⏳ ¹ | ✅ |
| Transactional Outbox | ❌ | ❌ | ✅ |
| Health Check | ❌ | ❌ | ✅ |
| Boilerplate Code | Medium | High | Minimal |

¹ Spring Cloud Stream supports Dead Letter Topics (broker-side), but has no offline fallback for complete broker outages. Curve adds Local File and S3 backup, so events survive even when Kafka itself is unreachable.
What's Next?
Roadmap for v1.0.0 (Q3 2026):
- GraphQL subscription support
- AWS EventBridge adapter
- Grafana dashboard template
- gRPC event streaming
- Multi-cloud KMS (GCP, Azure)
Try It Yourself
```bash
# Clone the sample application
git clone https://github.com/closeup1202/curve.git

# Start Kafka with Docker
docker-compose up -d

# Run the app
cd curve/sample
../gradlew bootRun

# Create an event
curl -X POST http://localhost:8081/api/orders \
  -H "Content-Type: application/json" \
  -d '{
    "customerId": "cust-001",
    "customerName": "John Doe",
    "email": "john@example.com",
    "phone": "010-1234-5678",
    "productName": "MacBook Pro",
    "quantity": 1,
    "totalAmount": 3500000
  }'
```

Then check the Kafka UI: visit http://localhost:8080 in your browser.
Contributing
Curve is MIT licensed and welcomes contributions! Whether it's:
- 🐛 Bug reports
- 💡 Feature requests
- 📝 Documentation improvements
- 🧪 Test coverage
Check out the Contributing Guide.
Final Thoughts
Building Curve taught me that good abstractions save time. Instead of writing the same Kafka code over and over, I invested time creating a reusable library.
If you're building event-driven microservices with Spring Boot and Kafka, give Curve a try. It might save you hundreds of lines of boilerplate.
- ✨ Star on GitHub: https://github.com/closeup1202/curve
- 📦 Maven Central: https://central.sonatype.com/artifact/io.github.closeup1202/curve
- 📚 Docs: https://closeup1202.github.io/curve/

What do you think? Have you built similar abstraction layers in your projects? I'd love to hear your experiences in the comments! 💬