Question Details

No question body available.

Tags

unit-testing domain-driven-design tdd

Answers (4)

Accepted Answer
June 20, 2025 Score: 4 Rep: 84,846

What you want to avoid testing is configuration, and here the allowed sub states could be considered configuration.

Let's rearrange your domain objects so you have:

SuperState
   string id
   string[] allowedSubStates

State
   SuperState super
   SubState sub
   IsValid() { return super.allowedSubStates.Contains(sub.id); }

Now you want at least two unit tests for IsValid, plus maybe some null checks, etc. But there's no point in testing all possible combinations. That's just the configuration for the various states, and we can imagine it changing over time or even becoming user configurable.
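For example (a minimal sketch, assuming C# with xUnit and illustrative record shapes that follow the outline above, not the asker's actual types):

using System.Linq;
using Xunit;

// Illustrative shapes following the outline above; not the asker's real types.
public sealed record SuperState(string Id, string[] AllowedSubStates);
public sealed record SubState(string Id);
public sealed record State(SuperState Super, SubState Sub)
{
    public bool IsValid() => Super.AllowedSubStates.Contains(Sub.Id);
}

public class StateValidityTests
{
    private static readonly SuperState Active = new("Active", new[] { "InProgress", "Blocked" });

    [Fact]
    public void IsValid_is_true_for_an_allowed_sub_state() =>
        Assert.True(new State(Active, new SubState("InProgress")).IsValid());

    [Fact]
    public void IsValid_is_false_for_a_sub_state_outside_the_configuration() =>
        Assert.False(new State(Active, new SubState("Done")).IsValid());
}

Two tests of this shape (plus whatever null handling you decide on) pin down the behaviour of IsValid without encoding the whole state table into the test suite.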

Consider the opposite case, though, where instead of just a list of allowed sub states there are rules about the sub states, e.g.:

SuperState_triangles //sub states must have three sides!
   string id
   IsValid(SubState sub) { return sub.sides.count == 3; }

State
   SuperState super
   SubState sub
   IsValid() { return super.IsValid(sub); }

Now I need tests for the rules, so I will have to try sub states with various numbers of sides against the triangle super state, and so on. It might well be worth having a whole bunch of tests for common cases.
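As a hedged illustration (again assuming C#/xUnit; TriangleSuperState and SidedSubState are made-up names for this sketch), the rule-based case lends itself to a parameterized test over several side counts:

using Xunit;

// Made-up names for this sketch only.
public sealed record SidedSubState(string Id, int Sides);
public sealed record TriangleSuperState(string Id)
{
    // The rule lives on the super state, as in the example above.
    public bool IsValid(SidedSubState sub) => sub.Sides == 3;
}

public class TriangleRuleTests
{
    [Theory]
    [InlineData(2, false)]
    [InlineData(3, true)]
    [InlineData(4, false)]
    public void Only_three_sided_sub_states_are_valid(int sides, bool expected)
    {
        var super = new TriangleSuperState("Triangles");
        Assert.Equal(expected, super.IsValid(new SidedSubState("shape", sides)));
    }
}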

June 20, 2025 Score: 2 Rep: 34,727

Since I'm following a TDD approach, the implementation of the state validation logic

In a TDD approach, I would expect you to be concentrating your attention on the implementation of behavior, rather than the implementation of state.

Which is to say that your example of having a test for getallowedsubstates that explores the entire range of inputs is "fine" (though I have doubts about the structure of this test: the API you have been "driven" to, and how well the test serves as documentation for how the method is expected to work).

But as a function, what is this for? If the state machine is under the control of your program, then reaching an "invalid" state really means that there has been a programmer error somewhere - that some implementation of a transition in the state machine didn't realize the intended state.

On the other hand, if you are dealing with a source of "untrusted" state data, then you probably ought to have a parser somewhere close to the boundary where the untrusted data enters the system, and you should be test-driving the parser.

(Now, in fairness, there is a subset of the TDD community that loves starting with small "units" that are driven into existence in isolation -- you might be doing that, but it isn't obvious from this example that you are doing this and keeping a specific client in mind; one of the reasons that I prefer working "outside in" is that you get alignment with the client context "for free".)


This made me realize there is significant overlap between the validation logic within the domain objects and unit tests

Yeah, this kind of thing is common and normal - it has to do with the fact that we have the same predicate being used for two different ideas.

  • In our parser, we're trying to figure out if some general purpose data structure is aligned with our narrower range of expected values (example: we're receiving information using a general purpose integer data type; but our narrower expectation is that the integer shall always be positive - thus, roughly half of the space of values that fit in the general purpose data type are "invalid", and need not-on-the-happy-path handling).

  • In our domain types, we're using a constructor to ensure that some data constraint only needs to be checked once, and the validation within the constructor is a defensive programming technique intended to catch errors in our parsing code.

(Warning: untested pseudo-code written in an imaginary programming language)

class TrustedData:
  def init(untrustedData):
    FailFast.programmerError(...) if not valid(untrustedData)
    ...

def parsing(untrustedData):
  if valid(untrustedData):
    happyPath( TrustedData(untrustedData) )
  else:
    # handle the usual case that we've gotten bad information
    # from our untrusted source

What we're really doing in the TrustedData constructor is checking that the programmer who wrote the parsing function did the right thing.

If you look through the older literature, the code in the constructor might look instead like

class TrustedData:
  def init(untrustedData):
    assert valid(untrustedData)

Where the assert line would be completely removed from the production code (ex: in C code, those lines would normally be elided by the preprocessor before the source was passed to the compiler).
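A present-day equivalent, if the codebase happened to be C# (an assumption on my part, not something from the question), is Debug.Assert, which is marked [Conditional("DEBUG")] and therefore compiled out of builds that don't define the DEBUG symbol:

using System.Diagnostics;

public sealed class TrustedData
{
    public string Value { get; }

    public TrustedData(string untrustedValue)
    {
        // Debug.Assert is annotated [Conditional("DEBUG")], so this call disappears
        // from builds compiled without the DEBUG symbol, much like C's assert()
        // being elided when NDEBUG is defined.
        Debug.Assert(LooksValid(untrustedValue), "programmer error: the parser let bad data through");
        Value = untrustedValue;
    }

    // Illustrative predicate; the real valid() check depends on the domain.
    private static bool LooksValid(string candidate) => !string.IsNullOrWhiteSpace(candidate);
}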


is writing tests for the domain model redundant, and thus be avoided?

Typically, in DDD, the domain model is the thing that's real; it's where the important logic lives, and that's the code that changes regularly as the needs of the business evolve -- so that's an important area to be able to make changes without introducing faults. TDD's twin goals of (a) improving the design so that faults are introduced less frequently and (b) producing tests so that faults are detected both quickly and cost effectively play well here.

But specifically for the case of something like a programmerError detector, the cost benefit ratio of having "tests" is not quite the same as it would be in code that has a lot of branches.

Horses for Courses

is it a sensible choice to test all possible combinations of superstate and substate

From a TDD perspective? Mostly no, in my experience -- you don't typically learn anything interesting about the design by spamming an infinite number of input combinations.

From a testing perspective? Maybe. See "Property Based Testing" in the literature. My experience is that typing-the-same-thing-in-two-places tests aren't especially valuable, and that the cost benefit ratio gets really bad if the answer isn't stable.

At one end of the spectrum you have "the answers are really stable, but we are constantly changing how we calculate them", and at the other end you have "the answers are always changing, but the calculation is always so simple that there are obviously no deficiencies" -- for the latter case, you may have more cost effective alternatives than maintaining a suite of "automated" tests.
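If you do want something in the property-based spirit without pulling in a framework, a rough sketch (reusing the illustrative SuperState/State/SubState records from the sketch under the accepted answer) is to generate arbitrary ids and check the one invariant that matters, rather than re-typing the whole table:

using System;
using System.Linq;
using Xunit;

public class StateValidityPropertyTests
{
    private static readonly SuperState Active = new("Active", new[] { "InProgress", "Blocked" });

    [Fact]
    public void Ids_outside_the_configured_list_are_never_valid()
    {
        var random = new Random(42); // fixed seed keeps the run deterministic
        for (var i = 0; i < 100; i++)
        {
            var id = "Sub" + random.Next(1000, 9999);
            if (Active.AllowedSubStates.Contains(id))
                continue; // skip any accidental collision with the configured list
            Assert.False(new State(Active, new SubState(id)).IsValid());
        }
    }
}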


WorkitemState is a simple valueobject intended for display. It is not an implementation of a state pattern and doesn't contain complex behaviour (this is just an analytics tool, state changes are managed by an external, pre-existing application)

In that case, you probably don't have a "domain model" -- your code is primarily about labeling information so that you can be sure that the right things are being displayed in the right places.

For instance, you probably don't have an "aggregate root" here, because an aggregate is "a cluster of objects that we treat as a unit for the purpose of data changes" and your program isn't responsible for data changes; you are taking the data as is and preparing a report from it.

(In some parts of the DDD community, you'll find people talking about read models vs write models; aggregates are a pattern that makes a certain amount of sense when writing down new information from the outside world, but don't necessarily offer very good ROI when you are simply rendering information written down previously).

What you've shown here is "just" an id/superstate/substate tuple; whether that's a single value object, or a graph of value objects... well, there are trade offs; but from your description, you'll really want it to look more like an immutable value copied from some outside source than a local entity that can be modified.
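If it does end up as a single immutable value, a minimal sketch (assuming C#; the names are illustrative) is just a record, which gives you value equality and immutability in one line:

// A read-only snapshot of state copied from the external application;
// not an entity that the analytics tool is allowed to mutate.
public sealed record WorkItemState(string Id, string SuperState, string SubState);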

But before diving too deeply into how to test this, you might want to explore how things are expected to change over time. For example, if the rules about which combinations of states are allowed change, are you expected to release a new version of your code to make things work? When the answer is "no, the old release should work with the new rules" (which is, in my experience, the common case), then you want to be cautious about overfitting your solution to today's rule set.

June 23, 2025 Score: 1 Rep: 119,848

Follow the zero-one-infinity rule. It states that the only numbers allowed in a design are zero, one, or as many as you like.

Thus, both your super and sub state lists should be designed to be arbitrarily large. Your design, and its tests, should not statically reflect one particular set of either.

However, this means you must express the particular states in some other way than hard coding them into the design.
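One hedged way to do that, assuming C# and a JSON file (both assumptions for illustration, not part of the question), is to load the super-state/sub-state map from configuration at runtime instead of baking it into the types:

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// states.json might look like:
// { "Active": ["InProgress", "Blocked"], "Closed": ["Done", "Cancelled"] }
public static class StateConfiguration
{
    public static Dictionary<string, string[]> Load(string path) =>
        JsonSerializer.Deserialize<Dictionary<string, string[]>>(File.ReadAllText(path))
            ?? new Dictionary<string, string[]>();
}

The design and its tests then only care that the map exists and is applied correctly, not which particular states it contains today.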

June 20, 2025 Score: 0 Rep: 9

Is writing tests for the domain model redundant?

Not at all, but you've done it the wrong way. If you reuse the same function getallowedsubstates in both your domain logic and your tests, you're not really testing anything.

Use real, manually written values to test your business logic. In fact, your tests should verify logic inside the getallowedsubstates function.

Is it a sensible choice to test all possible combinations?

It depends on the business context. If it is critical, then yes — test all combinations. You can use data-driven testing to keep things clean and maintainable.

If it's not critical, testing just a few cases per behavior (e.g. a valid and an invalid scenario for SuperState.A) is enough. At that point it's more a question of test coverage than of exhaustiveness.
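For instance, a data-driven sketch with xUnit (assuming C#, reusing the illustrative SuperState/State/SubState records from the accepted answer's sketch; the combinations are made up):

using System;
using Xunit;

public class StateCombinationTests
{
    // Hand-written configuration for the test, spelled out here rather than read back
    // from getallowedsubstates, so the test has an independent source of truth.
    private static SuperState Lookup(string superId) => superId switch
    {
        "Active" => new SuperState("Active", new[] { "InProgress", "Blocked" }),
        "Closed" => new SuperState("Closed", new[] { "Done", "Cancelled" }),
        _ => new SuperState(superId, Array.Empty<string>()),
    };

    [Theory]
    [InlineData("Active", "InProgress", true)]
    [InlineData("Active", "Done", false)]
    [InlineData("Closed", "Done", true)]
    [InlineData("Closed", "InProgress", false)]
    public void Combination_validity_matches_hand_written_expectations(string superId, string subId, bool expected)
    {
        Assert.Equal(expected, new State(Lookup(superId), new SubState(subId)).IsValid());
    }
}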