or
What I accidentally learned about Node and JavaScript while teaching myself about CQRS and event sourcing
This is a book recommendation because I do recommend Ethan Garofalo’s Practical Microservices. It is useful, balanced, somewhat humorous, and (as the title suggests) very practical. However, it isn’t really a book about microservices, at least not in general. It is an in-depth description of how to implement web services, according to the Command Query Responsibility Segregation (CQRS) and event-sourcing patterns, in JavaScript and Node.js. While "microservices" is a generic term for decoupling a system's logic into many small parts, CQRS and event sourcing are one very specific way of doing it. So much has been written about these architecture patterns that there is no point in me adding to it. Instead, I’m going to describe how I used this book for my own learning and what I ‘accidentally’ learned.
What I did
I have written several articles (during my time pursuing my Ph.D.) where I went on and on about the importance of interoperability, decoupling, and microservices. However, I hadn’t fully implemented a system according to these principles myself, so now I really wanted to learn. I decided to implement a system myself in parallel with reading the book. First, I followed along with the implementation of a video tutorial; then I implemented my own application, where users can solve Rubik’s cubes. I started by implementing the message store (the database for storing events and commands) as a separate module, and I changed it to be based on MongoDB instead of the Message DB from the Eventide project (which runs on PostgreSQL). I did not make the change because I thought it would be better in any way (probably the opposite); it was because I thought I would learn more this way. Then I went on to implement the actual application. To avoid thinking about how to represent a Rubik's cube in code, I used the cubejs package. When building event-sourced systems, it’s important to think about the ‘domain problem’: what is ‘actually’ happening (the events). Users should be able to create cubes that they can manipulate by doing moves, and eventually, the cube enters a solved state (every side has one color). I went with two commands (Create and DoMoves) and three events (Created, Moved, and MovesRejected), described in the contract of the cubes component. The sum of these events (for a certain cube) should yield every state of that cube at any moment in time.
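To make the command/event split concrete, here is a minimal sketch of what the cubes component's messages and state projection could look like. The message shapes, stream names, and the `project` helper are my own illustrative assumptions, not the exact contract from my implementation.

```javascript
// Hypothetical message shapes for the cubes component (illustrative only).

// A command asks the component to do something; it may be rejected.
const doMoves = {
  type: 'DoMoves',
  streamName: 'cubes:command-abc123', // assumed naming: one command stream per cube
  data: { cubeId: 'abc123', moves: "R U R' U'" }
};

// An event records something that actually happened; it is never deleted.
const moved = {
  type: 'Moved',
  streamName: 'cubes-abc123', // assumed naming: one event stream per cube
  data: { cubeId: 'abc123', moves: "R U R' U'" }
};

// Replaying (folding over) all events for a cube rebuilds its state.
function project(events) {
  return events.reduce((state, event) => {
    switch (event.type) {
      case 'Created':
        return { ...state, created: true, moves: [] };
      case 'Moved':
        return { ...state, moves: [...state.moves, event.data.moves] };
      default:
        return state; // MovesRejected does not change the cube's state
    }
  }, {});
}
```

The key idea is that the event log is the source of truth: the cube's current state is never stored directly, only derived by folding over its events.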
What I learned
The example project in the book is written in Node.js using the Express web framework. This feels like a rather good choice, since it is probably the first-choice environment for most developers, especially for web-based systems. My experience with JavaScript has mostly been writing smaller functions as part of larger IoT-based frameworks (or embedded in HTML), so building whole systems in Node.js was rather new to me. Ethan suggests that it is enough to grok the JavaScript code, and that is certainly true if you just want to understand the basic architecture and concepts, but implementing it yourself will probably give you a deeper understanding of the practical details.
Express and Node.js
When you are presented with someone else’s implementation, the structure sometimes makes sense to you and sometimes not, but it often feels very forced, as if this were the way it should (or even must) be done. I think the reason is that code reinforces itself: the same structure gets repeated everywhere. When it comes to Node.js and the Express framework, there seems to be little to no consensus on what constitutes the best structure, most likely because it depends on many things. This is something you should accept. Create your own Express application, but avoid the generator that provides you with a ready-made structure; just build something from scratch and understand the basic concepts first.
Promises
Promises, a representation of a value that may become available in the future, have been around for a long time, but they are relatively new to JavaScript, where asynchronous calls were traditionally handled with callbacks instead. Promises (especially when chaining/pipelining them) provide superior readability over nesting callbacks inside other callbacks. Since Promises didn’t originally exist in JavaScript and Node.js, several external packages were created to provide the feature. These were often also more efficient than the native Promises when those first appeared; now that the native implementations have improved, that is not necessarily so, but it could be (I don't know). Nowadays, besides pipelining promises, it is also possible to use the async/await syntax, which allows the code to be written in a more straightforward manner, adding even more readability. In the book, Ethan uses Bluebird (ref: Bluebird) Promises with the pipeline syntax (see example), and I was curious as to why. Are there still advantages to using Bluebird, or was it old habit or personal preference? I don’t know the answer, but it's probably a bit of both. The only advantage I could see is the possibility of catching specific error types directly in Bluebird pipelines, compared to native Promises.
```javascript
// Catching MyCustomError with a Bluebird promise: Bluebird's .catch()
// accepts an error class as a filter.
const Promise = require('bluebird');

class MyCustomError extends Error {}

Promise.resolve().then(function () {
  throw new MyCustomError();
}).catch(MyCustomError, function (e) {
  // only MyCustomError lands here
});
```
```javascript
// Catching MyCustomError with a native promise: catch everything,
// then check the error type yourself.
Promise.resolve().then(function () {
  throw new MyCustomError();
}).catch(function (error) {
  if (error instanceof MyCustomError) {
    // MyCustomError handled here
  } else {
    throw error; // rethrow anything we did not expect
  }
});
```
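For comparison, here is a sketch of the same error handling using the async/await syntax mentioned above (not how the book does it). `doWork` and the returned strings are my own illustrative names; the point is that an ordinary try/catch plus `instanceof` replaces Bluebird's typed `.catch()`.

```javascript
// The same error handling with async/await: a plain try/catch,
// with instanceof standing in for Bluebird's typed .catch().
class MyCustomError extends Error {
  constructor(message) {
    super(message);
    this.name = 'MyCustomError';
  }
}

async function doWork() {
  throw new MyCustomError('something specific went wrong');
}

async function main() {
  try {
    await doWork();
    return 'no error';
  } catch (error) {
    if (error instanceof MyCustomError) {
      return 'handled MyCustomError';
    }
    throw error; // rethrow anything we did not expect
  }
}
```

To my eye this reads the most like ordinary synchronous code, which is much of the appeal of async/await.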
Parameters or Object
As of ECMAScript 6, a parameter object can be destructured directly into named variables in the function signature.
```javascript
function squareUsingGoodOldParameters(width, height) {
  return width * height;
}

function squareUsingDestructuredObject({ width, height }) {
  return width * height;
}

// Calling the functions
let square1 = squareUsingGoodOldParameters(5, 5);
let square2 = squareUsingDestructuredObject({ width: 5, height: 5 });
```
This is easier to read and has the advantage that every parameter is automatically named, removing the risk of passing arguments in the wrong order. I wondered, then, whether there is any point in using traditional parameters anymore. The answer is yes. Destructuring behaves like a shallow copy: the destructured variables still reference the original nested objects and arrays, so mutating such a property inside the function also changes the caller's object, which can cause unintended side effects. ref..
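A small sketch of that side effect, with made-up names (`addTag`, `post`) for illustration:

```javascript
// Destructuring binds references, not deep copies: mutating a nested
// array inside the function also mutates the caller's object.
function addTag({ tags }) {
  tags.push('edited'); // mutates the caller's array!
  return tags;
}

const post = { title: 'Hello', tags: ['draft'] };
addTag(post);
console.log(post.tags); // ['draft', 'edited'] — the original changed
```

If the function is meant to be side-effect free, it should copy the array (e.g. `[...tags, 'edited']`) instead of pushing onto it.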
Conclusion
So, it seems that I “accidentally” got into details about JavaScript implementations in my pursuit of learning about microservices. As it turns out, I already knew about the different architecture patterns such as CQRS and event sourcing. For me it was the first part of the title, practical, that gave the most value. Maybe it’s because I adopted a very practical and methodical learning approach.
When it comes to the actual result, is “the very asynchronous Rubik’s cube application” any good? No, it is absolutely terrible. Solving a Rubik’s cube is a single-player, time-sensitive game. There is no reason to send move commands to a server and then reload the page while waiting for an event to trigger. Many applications would benefit from this approach, but not this one.