
Discussion on: Best Practices for Event-Driven Microservice Architecture

aibarra11

Great article! We've stumbled upon all of this separately, so it's cool to see it all in one place. How do you handle "replays" or getting to the current state? In a highly decoupled, event-based system, microservices don't really know who created an event. So how would a "new" microservice get up to date with a stream that might be several months or years old? It seems impractical to read back all of the messages in an event stream.

Is a REST endpoint OK to use? It would also introduce some coupling, though.

Thanks!

Jason Skowronski

Great question! I think it depends a bit on your use case. At my last company we had a limited data-retention window, so replaying from the beginning of the window was slow but workable. For months or years of data, it sounds like you need a different approach. If the current state is much smaller than the full history of state changes (e.g., an account balance rather than an account history), then I have some suggestions. You could load the current state from another canonical source (e.g., a REST endpoint, a database, or a replica). Alternatively, you could periodically checkpoint or back up the state to a persistent store, and replay only the events since the last checkpoint to catch up; a sketch of that approach is below. It's even possible to store checkpoints in another Kafka partition to avoid adding other service dependencies. Hopefully one of those ideas helps :)
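To make the checkpoint-and-replay idea concrete, here's a minimal sketch of the catch-up path on startup. It assumes kafka-python, a single-partition topic named account-events, JSON-encoded events, and a local file as the checkpoint store (a database row or a compacted Kafka topic would work the same way); all of those names are illustrative, not from the article.

```python
# Minimal sketch: restore state from the last checkpoint, then replay only
# the events newer than it. Topic name, checkpoint file, and event fields
# are illustrative assumptions.
import json
import os

from kafka import KafkaConsumer, TopicPartition

TOPIC = "account-events"          # hypothetical event stream
CHECKPOINT_FILE = "balance.ckpt"  # could be a DB row or a compacted topic instead


def load_checkpoint():
    """Return (state, next_offset) from the last checkpoint, or an empty start."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            ckpt = json.load(f)
        return ckpt["state"], ckpt["next_offset"]
    return {}, 0  # no checkpoint yet: replay from the earliest available event


def save_checkpoint(state, next_offset):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"state": state, "next_offset": next_offset}, f)


def catch_up():
    state, next_offset = load_checkpoint()
    consumer = KafkaConsumer(
        bootstrap_servers="localhost:9092",
        enable_auto_commit=False,
        auto_offset_reset="earliest",  # fall back to earliest if the offset was purged
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    tp = TopicPartition(TOPIC, 0)   # single partition for simplicity
    consumer.assign([tp])
    consumer.seek(tp, next_offset)  # replay only events since the last checkpoint

    while True:
        batch = consumer.poll(timeout_ms=1000)
        records = batch.get(tp, [])
        if not records:
            break  # caught up (no new events within the timeout)
        for record in records:
            event = record.value    # e.g. {"account": "A1", "delta": 25}
            state[event["account"]] = state.get(event["account"], 0) + event["delta"]
            next_offset = record.offset + 1
        save_checkpoint(state, next_offset)  # periodic checkpoints keep future replays short
    return state


if __name__ == "__main__":
    print(catch_up())
```

The same pattern applies if the checkpoint lives in a database or a compacted Kafka topic; the key point is that a new or restarted service only replays the events newer than its last snapshot instead of the whole stream.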

aibarra11

Thanks! It seems like a combination of snapshots and/or REST endpoints might be the way to go. I was trying to avoid having one service talk directly to another, but "catching up" might be an exception =)