Question Details

No question body available.

Tags

domain-driven-design

Answers (2)

July 9, 2025 Score: 5 Rep: 34,727

The common term for the problem you are facing is set validation.

The basic problem is this - if you have some property of the set that must "always" be true, then you must (logically) lock the entire set when making changes that impact that property.
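To make the locking requirement concrete, here is a minimal sketch (names and the in-memory representation are illustrative, not from the answer) of a set-level invariant such as "page titles within a site are unique". Checking the invariant against a stale snapshot is not enough; the whole set must be locked for the duration of the check-then-write:

```python
import threading

class SiteIndex:
    """Hypothetical set-level aggregate: it owns ALL page titles of a site,
    so the 'titles are unique' invariant can be checked atomically."""

    def __init__(self):
        self._titles = set()
        self._lock = threading.Lock()  # guards the entire set

    def add_page(self, title):
        # The whole set is locked for check + write: if two concurrent
        # adds each validated against their own snapshot, both could
        # pass the uniqueness check and both would be committed.
        with self._lock:
            if title in self._titles:
                raise ValueError(f"duplicate title: {title!r}")
            self._titles.add(title)

index = SiteIndex()
index.add_page("Home")
index.add_page("About")
```

In a database-backed system the lock would typically be an optimistic version number on the set aggregate rather than an in-process mutex, but the logical shape is the same: one guard covering the whole set.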

There are a couple of "escape hatches".

By far the most common answer is to dig more deeply into the business requirements, and discover that the property doesn't actually need to be "always true". Some properties really are "always" properties, in the sense that if the data is ever out of policy we have expensive problems; but other properties are "eventually" properties, in the sense that they need to be re-established by the time the system reaches equilibrium, but it isn't too damaging to be off policy for some (short) period of time.

Sometimes, you can redesign your data model, separating the data that needs to be kept in compliance with the policy from that which doesn't. For instance, it might be that a "site" is a hierarchy of page identifiers, which you can manipulate without loading the page contents. When you can do that, then you can have separate locks, such that Alex can be moving a page around in the site while at the same time Charlie is modifying the content.
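The separation described above can be sketched roughly like this (class and field names are illustrative assumptions, not from the answer): the site structure is one aggregate holding only page identifiers, and each page's content is its own aggregate, so each carries its own lock/version:

```python
class SiteStructure:
    """Aggregate 1: only the hierarchy of page identifiers.
    Moving pages around never loads page contents."""

    def __init__(self):
        self.children = {}   # parent page id -> list of child page ids
        self.version = 0     # optimistic lock covering structure only

    def move_page(self, page_id, new_parent_id):
        # Detach from the current parent, if any, then re-attach.
        for kids in self.children.values():
            if page_id in kids:
                kids.remove(page_id)
        self.children.setdefault(new_parent_id, []).append(page_id)
        self.version += 1

class PageContent:
    """Aggregate 2: the content of a single page, locked independently."""

    def __init__(self, page_id, body=""):
        self.page_id = page_id
        self.body = body
        self.version = 0     # optimistic lock covering this page only

    def edit(self, body):
        self.body = body
        self.version += 1

# Alex moves a page while Charlie edits another page's content;
# neither operation touches the other aggregate's version.
site = SiteStructure()
site.move_page("p2", "p1")
page = PageContent("p3")
page.edit("new text")
```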

(To some extent, this is a shell game - we aren't necessarily making problems "go away" so much as exchanging one set of problems for a different set that may be easier to solve.)

In the really unfortunate cases... you have to buy/rent hardware capable of performing the workloads that you need done.

There is no magic.

July 10, 2025 Score: 5 Rep: 82,371

Evans, the father of DDD, gave the following definition with three main criteria:

AGGREGATE: A cluster of associated objects that are treated as a unit for the purpose of data changes. External references are restricted to one member of the AGGREGATE, designated as the root. A set of consistency rules applies within the AGGREGATE’S boundaries.

The common misunderstanding is to focus only on the consistency rules. In this case Site would be the aggregate, and many other complex models would end up with a single aggregate (see here on SE another example with the team-member relationship). But a single aggregate brings many drawbacks, especially when the aggregate is very large.

In reality, the focus should be on "unit for data changes" first, and here some judgement is needed. Indeed, when you manage a site, very often you change only the content of a page; changes to the full tree structure, or the addition of new pages, occur more rarely.

Moreover, the rules on depth are not really consistency rules; they are arbitrary business rules. If a site doesn't respect them, the data is not inconsistent, it is just non-compliant with some arbitrary rules. From this perspective, there is no need to keep the pages in the same aggregate. Two strategies could be considered:

  • Breaking the business rules could raise some warnings, but leave time to restructure the site differently (and the restructuring could be incremental for very large sites). Independently of DDD, this would be my preferred choice from a usability perspective.
  • Breaking the business rules should not be accepted, and should be reverted as soon as possible. The saga pattern, popular with microservices (e.g. when microservices correspond to aggregates), could address the business rule.
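The first strategy (warn, don't reject) can be sketched as a compliance check that walks the page tree and reports violations instead of refusing the change. The depth limit, the tree representation, and the function name are all illustrative assumptions:

```python
MAX_DEPTH = 3  # illustrative business rule, not a consistency rule

def depth_warnings(children, page="root", depth=0):
    """Walk the page tree; collect warnings for pages beyond the
    allowed depth rather than rejecting the structure outright."""
    warnings = []
    if depth > MAX_DEPTH:
        warnings.append(f"page {page!r} at depth {depth} exceeds {MAX_DEPTH}")
    for child in children.get(page, []):
        warnings.extend(depth_warnings(children, child, depth + 1))
    return warnings

# A chain root -> a -> b -> c -> d -> e: the last two pages are too deep.
tree = {"root": ["a"], "a": ["b"], "b": ["c"], "c": ["d"], "d": ["e"]}
for warning in depth_warnings(tree):
    print(warning)
```

Because the check only reads the structure, it can run after each change (or periodically) while users restructure the site incrementally, which matches the usability argument above.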

If you used the single-aggregate approach, where inconsistency is never accepted, you could never offer the possibility of an incremental restructuring of a website.