I feel like having "validation" in the constructor, or on the value object at all is at odds with evolving business rules.
If we have an escape hatch that skips validation when deserializing "known good", or rather "previously accepted", instances, then our other logic can't rely on the "invariants" being true, and that isn't immediately obvious.
And always going through the constructor will just make our code break on old instances whenever we make the validation "stricter".
It's extremely common (unavoidable, really) to have an active InsurancePolicy that could no longer be created/issued under the current rules, and yet needs to be honored and handled until its end of life, which may be indefinite.
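A minimal sketch of that failure mode, in Python with a hypothetical `InsurancePolicy` (all names and the `MIN_COVERAGE` rule are made up for illustration): tightening the constructor's validation makes rehydration of an old, still-active policy blow up.

```python
class ValidationError(Exception):
    pass

MIN_COVERAGE = 50_000  # hypothetical new, stricter rule; used to be 10,000

class InsurancePolicy:
    def __init__(self, policy_id: str, coverage: int):
        # Validation runs on *every* construction, including when we
        # deserialize a policy that was legally issued under the old rules.
        if coverage < MIN_COVERAGE:
            raise ValidationError(f"coverage below minimum: {coverage}")
        self.policy_id = policy_id
        self.coverage = coverage

def load_policy(record: dict) -> InsurancePolicy:
    # Rehydrating a previously accepted policy goes through the same
    # constructor, so an old-but-active policy now fails to load at all.
    return InsurancePolicy(record["id"], record["coverage"])

old_record = {"id": "P-1984", "coverage": 20_000}  # fine when it was issued
try:
    load_policy(old_record)
except ValidationError as e:
    print(f"cannot load active policy: {e}")
```

The policy has to be honored either way, but the stricter constructor rejects it before any business logic even sees it.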
Sure, there's a versioning issue when applying something like this. It also exists when using event sourcing, for example. It's a trade-off you need to be aware of: basically, you need to keep your aggregates backward compatible in case the scenario you mentioned applies.
Just like most things in programming, it's about trade-offs. If your domain isn't that complex, then DDD isn't going to be helpful.
If your domain is pretty complex, then the trade-off might be worth it 👍
I'd argue that the more complex the domain, the more likely it is that the rules will change. That's why I prefer the entities to be dumb, and the validation to happen in the command handler. That way the current rules apply to the "decisions" that are yet to be made, while old decisions are respected.
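A sketch of that split, continuing the hypothetical insurance example (all names are made up): the entity just holds state and never validates itself, so old instances always load; the current rules live only in the handler that makes new decisions.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # Dumb entity: holds state, enforces nothing. Old policies load
    # fine no matter how the issuing rules have evolved since.
    policy_id: str
    coverage: int
    active: bool = True

@dataclass
class IssuePolicy:
    policy_id: str
    coverage: int

def handle_issue_policy(cmd: IssuePolicy) -> Policy:
    # Current business rules apply only to *new* decisions.
    if cmd.coverage < 50_000:
        raise ValueError("current rules require coverage >= 50,000")
    return Policy(cmd.policy_id, cmd.coverage)

# An old decision is still respected: we can represent it without issue.
legacy = Policy("P-1984", coverage=20_000)
```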
Just to make sure I understand 🙂:
All your individual use cases are represented by command handlers, and each command handler has its own unique set of validation rules?
In that case, if new rules of this kind appear while you're using, let's say, a DDD aggregate kind of thing, then depending on the rule that needs to change you could just create a new aggregate model?
That way, commands that share the same rules can just use the same aggregates, and the ones that don't can just use different aggregates. I'm fine with that.
Either way, this and what (I believe) you said are almost the same thing. The only difference is that by using aggregates you gain the ability to share a grouping of business invariants/rules across handlers.
I mean yeah, you do end up constructing a write model to understand if a command is allowed. And having the model allows you to, if needed, express some of your rules as assertions on the model. In which case, you could express validation as:
If we assume the command is valid, and apply to the in-memory model the events that handling it would persist, will the resulting state of the model still fulfill all the invariants? If it does, persist the events, and consider the command successful.
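That check can be sketched roughly like this (a toy account model; the event shapes, `check_invariants`, and `persist` hook are all hypothetical): apply the would-be events to an in-memory copy of the state, assert the invariants on the result, and only persist if nothing fails.

```python
def check_invariants(state: dict) -> None:
    # Invariants written as plain assertions on the model's state.
    assert state["balance"] >= 0, "balance must not go negative"

def apply(state: dict, event: dict) -> dict:
    # Pure event application: returns the next state, mutating nothing.
    if event["type"] == "withdrawn":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state

def handle_withdraw(state: dict, amount: int, persist) -> None:
    events = [{"type": "withdrawn", "amount": amount}]
    tentative = state
    for e in events:
        tentative = apply(tentative, e)
    check_invariants(tentative)  # raises -> command rejected, nothing stored
    persist(events)              # success -> the events are persisted

stored = []
handle_withdraw({"balance": 100}, 30, stored.extend)
```

Because the invariants run against the resulting state, they apply uniformly to every handler that goes through this path.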
There are some good properties to this, such as writing your invariants as assertions, and this automatically applying to all handlers.
But in practice it seems very brittle, since, once again, the "state" of an old entity might violate the currently checked invariants... which, while not great and probably requiring some thought, doesn't really need to fail commands that are orthogonal to the violated invariant.
Looking at a real-world example: having an incorrectly formatted phone number/billing address is "bad". But failing all transactions to/from the affected account until the issue is fixed, despite all the transaction-specific prerequisites being met, is much worse.
And, of course, there are a lot more reasons to fail a command than "succeeding the command would put/leave the state in violation of an invariant". And these rules that apply to the command and not the state still need to share logic.