Question Details

No question body available.

Tags

testing dependency-injection implementations

Answers (10)

July 15, 2025 Score: 34 Rep: 59,581 Quality: Expert Completeness: 50%

In my experience, implementations of, say, MailService or SqliteRepository never change.

Failing to prepare is preparing to fail

This is going to be a long answer because of the topic at hand. There are a ton of considerations that go into balancing clean coding efforts against quick delivery, and you are skipping a lot of them. The problem space you're trying to draw conclusions from is too narrow to give a fully informed view of what you are trying to do, why you are trying to do it, and how correct your proposed conclusion is.

For context, I have worked over 10 years as a .NET consultant, specifically being sent in to clients whose development process had become cumbersome, with a never-ending stream of bugs and missed deadlines. I have pretty much built a career out of handling software development gone bad. I mention this because, while I cannot reasonably go into specifics on every anecdote in this answer (for the sake of brevity), the issues I'm pointing out are pervasive across the majority of clients I have worked for.

Almost every project I have seen grind to a halt got to this stage because the codebase buckled under a pile of shortsighted design decisions made by developers who either did not know how to keep code change-friendly or who actively thought that they would never need to change their code if they got it right the first time.

Even if you write things perfectly, which you won't (not a personal insult - just dispelling an unrealistic expectation for any person), you still cannot state for a fact that you will never need to change your implementation. New technologies might become available that warrant a change, business requirements could have been miscommunicated, or they may just simply change in the future.

Don't focus on the release. Focus on the ongoing maintenance past release. Most projects fail post-release, when changes become next to impossible to implement cleanly and you can no longer change anything without breaking other stuff in the process. Only in egregious cases do the consequences of bad decisions catch up with you before the first release - and in those cases you might never make it to release; I've seen this happen several times.

That is not to say that clean coding is something that you should pour infinite effort into, but generally speaking those who are new to clean coding tend to underestimate the sweet spot for clean coding effort, erring too much on the side of "I believe I can cut this corner because of [bla]".
When adopting clean coding practices, I strongly suggest you first spend some time doing as the Romans do, before you start arguing that living outside of (but close to) Rome is sufficient for you.

The underlying tone of wanting to skip what is considered a core essential of clean coding is that you think of clean coding as something best engaged with on a sparing and restrictive basis, only doing it when the benefits are immediate and concrete. That is not how it works. You need to shift your mindset towards clean coding being what you do from the ground up, not an optional extra you have to do on top of the "normal work".


We also don't write tests.

You really, really should. I know you're going to roll your eyes at this. It is a platitude to say that tests are essential. But that does not make it wrong.

This is the same underlying issue as what I addressed before: by skipping the testing stage, you are setting yourself up for failure down the line.

The main value of tests is not that they help you write a feature faster. The main value of tests is that they stop you from breaking already-built features while developing the next feature.
The tests you write on feature X help you deliver feature X+1 faster, because they will immediately let you know that the implementation of feature X+1 broke something in feature X, instead of the breakage hiding out of sight and waiting for you to chase up what broke and why.

The further you build your codebase, the higher the chance that a change in one thing affects a behavior in another (seemingly unrelated) thing. While you probably don't need tests to keep track of this in a codebase with 3 features, you will eventually reach an amount of features where you can no longer keep track. And when you reach that stage, are you going to retroactively write tests for all existing features? Let's be honest here, you won't. So write your tests when you build your feature, because you will get to a stage where you need all your features to be backed by tests.
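
As a tiny illustration of what that safety net looks like, here is a minimal xUnit sketch; DiscountCalculator is a made-up stand-in for any "feature X" you shipped earlier:

    using Xunit;

    // Stand-in for "feature X": behavior you shipped earlier and do not want the
    // next feature to break silently.
    public class DiscountCalculator
    {
        public decimal Apply(decimal orderTotal, decimal discountPercent)
            => orderTotal - orderTotal * discountPercent / 100m;
    }

    public class DiscountCalculatorTests
    {
        // If feature X+1 accidentally changes the discount rules, this fails immediately.
        [Fact]
        public void TenPercentDiscount_IsAppliedToOrderTotal()
        {
            var calculator = new DiscountCalculator();

            Assert.Equal(90m, calculator.Apply(orderTotal: 100m, discountPercent: 10m));
        }
    }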

That is not to say you should write infinitely many tests for everything. Much like with clean code, there is a balance to be struck as to how much effort you put into it, and that consideration is situational. If I'm building a blog site I don't care that something might break once in a while, but if I'm writing something for NASA or the medical sector, I will test everything within an inch of its life to ensure that there is no bug in an edge case when that can lead to a billion dollar project failure or harm to a human being.

Writing tests, just like writing clean code, is an act of paying it forward. Present you puts more effort in something today, so that future you does not have to spend as much effort doing their tasks.

It is inevitable that this means paying an upfront cost. No amount of advice or guidelines is going to change the fact that you spend more effort writing things cleanly. That's just a logical inevitability: if it took less effort, no one would skip writing clean code in the first place and we wouldn't have to make it a named concept that we explicitly tell people they should be doing.

The upfront cost is always there. But it pays itself back multiple times over in the long run. The added effort in writing the code is offset by the reduced effort required to maintain the code, and the maintenance phase is decidedly longer and more difficult than the writing process.


And then we get to the final argument that I am often met with when preaching good practice: "but it takes so much useless time and effort that we need to be spending on other things". There are multiple ways in which I can respond to that:

  1. Don't mistake the effort of doing something that is new to you for the ongoing required effort. Yes, it takes conscious effort to do something you're not used to and to set it all up the first time. But that is not due to the code; it's due to your unfamiliarity with it, which passes very soon after committing to the new way of doing things.

  2. Those other things you need to spend your time on are the consequences of failing to write clean code in the first place. It takes longer to clean something after the fact than it does to keep it change-friendly in the first place. Choosing to follow bad practice on the premise that you need to spend your effort fixing the consequences of previous bad-practice decisions is a Ponzi scheme for time and effort - you get so blinded by short-term gains that you fail to see you're slowing down in the long term.

  3. It does not take an appreciable amount of time to distill an interface, register the DI dependency, and inject it (see the sketch below). Using commonly available IDE tools, if you already have the concrete class developed, you can do each step in under 15 seconds. It takes longer to explain why you think it's a waste of effort than it does to implement it. And while you might argue that you only make this argument once but have to implement interfaces multiple times, I can guarantee that the same person will keep arguing about it over and over for as long as they're not given the green light to cut corners. Clearly, this is not a time-saving exercise.

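To put point 3 in concrete terms, here is a minimal sketch of the three steps, assuming the built-in Microsoft.Extensions.DependencyInjection container (IMailService, MailService and OrderController are illustrative names, not from the question):

    using Microsoft.Extensions.DependencyInjection;

    // Step 1: distill the interface from the existing concrete class (one IDE action).
    public interface IMailService
    {
        void Send(string to, string subject, string body);
    }

    public class MailService : IMailService
    {
        public void Send(string to, string subject, string body)
        {
            /* the existing SMTP logic stays exactly as it was */
        }
    }

    // Step 2: register the dependency once, in the composition root.
    public static class CompositionRoot
    {
        public static ServiceProvider Build() =>
            new ServiceCollection()
                .AddSingleton<IMailService, MailService>()
                .AddTransient<OrderController>()
                .BuildServiceProvider();
    }

    // Step 3: inject it wherever it is consumed.
    public class OrderController
    {
        private readonly IMailService _mail;

        public OrderController(IMailService mail) => _mail = mail;
    }
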
Is the "sweet spot" for dependency injections just where you would expect to have a different implementation at some point, including mocks for testing?

The short answer is no.

The more elaborate answer relates back to the bullet points above, notably 3 and 1. There is little point in pursuing what this question is trying to pursue, given how little effort it takes to just do it the right way compared to the effort of arguing why it could be skipped in this specific scenario.

Even if you're completely right with no possible mistake in your observation, it still takes more time to assess whether you are correct than it would to actually just build it the clean way to begin with, at which point this entire corner-cutting exercise is just delaying you and not saving you anything, even in the short term.

July 14, 2025 Score: 14 Rep: 139,666 Quality: Expert Completeness: 80%

In my experience, implementations of, say, MailService or SqliteRepository never change.

Semantically, those are not really abstractions, but implementations.

MailService is just one of the possible implementations of IMessageService, and IMessageService could have many different implementations, including:

  • The one that interacts with the corporate SMTP server.
  • The Microsoft Exchange connector.
  • The intermediary that would call Amazon SNS.
  • etc.

Similarly, SqliteRepository is one possible implementation of, say, IProductRepository for an e-commerce website. SQLite is a good start for a tiny one. As soon as it grows a little bit, chances are that you'll swap it for PostgresProductRepository, or MongoProductRepository, or whatever scalable database would be needed.
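
To make that distinction concrete, a minimal sketch (SmtpMailService and AmazonSnsMessageService are illustrative names): the interface is the abstraction, and each class is merely one way of fulfilling it.

    // The abstraction the rest of the code depends on...
    public interface IMessageService
    {
        void Send(string recipient, string message);
    }

    // ...and two of its many possible implementations.
    public class SmtpMailService : IMessageService
    {
        public void Send(string recipient, string message) { /* talk to the corporate SMTP server */ }
    }

    public class AmazonSnsMessageService : IMessageService
    {
        public void Send(string recipient, string message) { /* call Amazon SNS */ }
    }

    // The same shape applies to repositories: SqliteProductRepository and
    // PostgresProductRepository would both sit behind IProductRepository.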

We also don't write tests.

You definitely should try testing. It's fun, and it saves you a lot of the time you'd otherwise spend debugging. Debugging is not fun.

On the other hand, creating interfaces for just about anything without the expectation of having a different implementation in the future is considered bad practice.

Yes, those are the YAGNI and KISS principles you are talking about. However, they are about not making your life difficult for some hypothetical future need that may never materialize.

With dependency injection, it's different. You don't spend much time writing those interfaces compared to the time you spend writing the actual logic. You can even delegate the task to AI, if your IDE doesn't already have a way to generate those interfaces for you. As for the hypothetical aspect, well:

  1. Finding an actual concretion that never benefits from abstraction is a challenge for all but the simplest code and proofs of concept.

  2. On the other hand, there are plenty of cases where you don't use an interface, thinking that it will be okay, and later on you spend days painfully refactoring your code when it is time to introduce the abstraction.

    Even stuff that is often considered “too basic” suffers from that. In the .NET community, for instance, file access and methods that retrieve the current date and time were considered too basic: usually, you just call static methods all over your code and call it a day. In retrospect, this turned out to be a really bad idea, as it often makes the source code more difficult to test than it needs to be, especially legacy code (see the sketch after this list).

  3. Proofs of concept are exempt from this. If you work on a piece of code that just needs to show that something is feasible, and you know that you'll throw this code away once the project is finished, then yes, you don't need automated testing, and you absolutely don't need dependency injection.
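
As an illustration of point 2, the usual workaround is to put a thin abstraction in front of the static call (IClock and LicenceChecker are illustrative names; newer .NET versions ship a built-in TimeProvider for exactly this purpose):

    using System;

    // Without this, DateTime.UtcNow is baked into the logic and cannot be controlled in a test.
    public interface IClock
    {
        DateTime UtcNow { get; }
    }

    public class SystemClock : IClock
    {
        public DateTime UtcNow => DateTime.UtcNow;
    }

    public class LicenceChecker
    {
        private readonly IClock _clock;

        public LicenceChecker(IClock clock) => _clock = clock;

        // Testable: a test can inject a fake clock frozen at any date it likes.
        public bool IsExpired(DateTime expiresOn) => expiresOn < _clock.UtcNow;
    }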

July 14, 2025 Score: 7 Rep: 3,767 Quality: Medium Completeness: 20%

We also don't write tests. Making the use of interfaces obsolete

If you "Depend on abstractions, not on concretions" as the saying goes, you gain another benefit apart from injecting test mocks: the ability to inject wrapped dependencies.

Such interceptors allow you to apply cross-cutting concerns to your code base more easily, without sprinkling them all over the place.

So if you want to do logging, authentication, auditing, etc. it can be beneficial to do so through decorators of your actual implementations.
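
For example, a hypothetical logging decorator (names are illustrative; ILogger comes from Microsoft.Extensions.Logging): consumers keep depending on the interface and never know they are talking to the wrapper.

    using Microsoft.Extensions.Logging;

    public interface IMailService
    {
        void Send(string to, string subject, string body);
    }

    // Decorator: wraps any IMailService and adds logging as a cross-cutting concern.
    public class LoggingMailService : IMailService
    {
        private readonly IMailService _inner;
        private readonly ILogger<LoggingMailService> _logger;

        public LoggingMailService(IMailService inner, ILogger<LoggingMailService> logger)
        {
            _inner = inner;
            _logger = logger;
        }

        public void Send(string to, string subject, string body)
        {
            _logger.LogInformation("Sending mail to {To}: {Subject}", to, subject);
            _inner.Send(to, subject, body);
        }
    }

Authentication, auditing or caching decorators follow the same pattern, each wrapping the previous one.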

July 15, 2025 Score: 7 Rep: 1,379 Quality: Medium Completeness: 70%

Try to predict the cost of your decisions

You should usually only write code you actually need (YAGNI & KISS). Additional abstractions and lines of code are not free; they increase cognitive load. So ask yourself how expensive it will be to introduce dependency injection or an interface later on. If you are only using the service inside a single code base (code repository), and you make sure to only use clearly defined public methods of your services, then you can introduce interfaces later with a simple refactoring operation.

In this case I would not introduce interfaces/abstractions without benefits.

If the components reside in a library which can be used in different repositories - then refactoring can be really difficult and will likely break backwards compatibility. In this case it usually makes sense to introduce abstraction early on, because it will be expensive later.

Abstractions can be helpful to separate modules

Sometimes using interfaces can be helpful to make sure you only have a single, clearly defined interface between components. Without an interface you might easily start to access or depend on the inner workings of an implementation. This will entangle your code and increase cognitive load, because you need to know about the inner workings of the class to understand how a change might affect other code.

In this case abstractions like interfaces can be helpful to promote more disciplined coding and a clearly separated code base of loosely coupled modules with clearly defined contracts (interfaces) between each other. But this does not require the additional hassle of dependency injection. Again, dependency injection can usually be added later on in a single code base (given clear abstractions/interfaces) without major cost, as the sketch below illustrates.

  • YAGNI = You ain't gonna need it
  • KISS = Keep it simple stupid
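
A small sketch of that middle ground (all names illustrative): the two modules only know each other through the interface and are wired up by hand, with no DI container involved; a container can be introduced later without touching either module.

    // Contract between the ordering module and the billing module.
    public interface IPaymentGateway
    {
        bool Charge(decimal amount, string cardToken);
    }

    // Billing module: the only place that knows how payments really work.
    public class DefaultPaymentGateway : IPaymentGateway
    {
        public bool Charge(decimal amount, string cardToken) { /* call the payment provider */ return true; }
    }

    // Ordering module: depends only on the contract, not on the implementation.
    public class CheckoutService
    {
        private readonly IPaymentGateway _payments;

        public CheckoutService(IPaymentGateway payments) => _payments = payments;
    }

    // Manual wiring - no framework needed, and easy to replace with a container later.
    public static class Program
    {
        public static void Main() => _ = new CheckoutService(new DefaultPaymentGateway());
    }
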
July 14, 2025 Score: 5 Rep: 4,570 Quality: Low Completeness: 20%

MailService has an SMTP host and a TrustedCertificateStorage component; SqliteRepository has a file path. Even if the "source code" does not change, these classes still provide a useful abstraction over their parameter set and internal state. A "dependency" does not have to be code, and an interface is not necessarily a "Java interface" - a "dependency" is usually an abstraction over something.

So even if you do not have automated tests or alternative implementations, it is still beneficial to pass around instances of MailService and SqliteRepository instead of their full set of constructor arguments.
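
A hypothetical before/after of that point (names illustrative): the consumer receives one already-configured object instead of dragging the raw configuration around.

    public class MailService
    {
        public MailService(string smtpHost, int smtpPort, string certificateStorePath) { /* ... */ }

        public void Send(string to, string subject, string body) { /* ... */ }
    }

    // Without the abstraction, every consumer carries the full parameter set:
    public class ReportScheduler
    {
        public ReportScheduler(string smtpHost, int smtpPort, string certificateStorePath)
        {
            /* has to build and configure its own mail machinery */
        }
    }

    // With it, the consumer receives a single configured instance and nothing else:
    public class ReportSchedulerWithService
    {
        private readonly MailService _mail;

        public ReportSchedulerWithService(MailService mail) => _mail = mail;
    }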

July 19, 2025 Score: 2 Rep: 62,388 Quality: Medium Completeness: 60%

Dependency injection still has value, even if you will never have more than a single implementation. DI allows you to separate the initialization and lifetime of a class from its consumers. The MailService and SqliteRepository presumably need some configuration and initialization to work. The alternative to DI is to have each consumer initialize the class itself, which is likely to lead to duplicated initialization logic and further dependencies.

DI is often associated with using interfaces, but any reasonable DI framework should support depending directly on classes. From the perspective of the consumer, it shouldn't even matter if the injected type is a class or an interface.
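
For example, with Microsoft.Extensions.DependencyInjection (class names illustrative), a concrete class can be registered and injected directly; no interface is involved:

    using Microsoft.Extensions.DependencyInjection;

    public class SqliteRepository
    {
        public SqliteRepository(string databasePath) { /* open the database file */ }
    }

    public class ProductCatalog
    {
        // The consumer asks for the class itself; to this constructor it makes no
        // difference whether SqliteRepository is a class or an interface.
        public ProductCatalog(SqliteRepository repository) { /* ... */ }
    }

    public static class CompositionRoot
    {
        public static ServiceProvider Build() =>
            new ServiceCollection()
                .AddSingleton(new SqliteRepository("app.db"))
                .AddTransient<ProductCatalog>()
                .BuildServiceProvider();
    }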

You can always introduce an interface at a later time if the need arises. (Some answers seem to suggest that if you don't define interfaces up front, it will be difficult to introduce them later. This is incorrect. In fact it is much easier to design an interface at the point when the need arises, since it is easier to understand what abstraction is needed when you actually need it.)

July 16, 2025 Score: 1 Rep: 12,721 Quality: Low Completeness: 20%

Is the "sweet spot" for dependency injections just where you would expect to have a different implementation at some point, including mocks for testing?

I think the sweet spot for DI is where you actually have several different real implementations you need to swap, now, at runtime, which are compiled by independent teams.

Those who harp on about the flexibility to swap in new implementations that have not yet been conceived often overstate the extent to which implementations change in the future while interfaces don't.

It's also very overstated how often major components like SQL engines are actually changed, unless the application is written and tested from the outset to be simultaneously compatible with multiple specific brands of engine (not an abstract engine which has yet to be tried).

July 17, 2025 Score: 1 Rep: 49,602 Quality: Low Completeness: 10%

You may expect an implementation never to change. Unfortunately, the implementation doesn’t care what you expect.

And you often have at least two implementations, one for production, one “mock” implementation for testing.
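
That second implementation does not need a mocking framework; a hand-written fake is often enough (a minimal sketch with illustrative names):

    using System.Collections.Generic;

    public interface IMailService
    {
        void Send(string to, string subject, string body);
    }

    // The test-only "implementation": records what was sent instead of talking to SMTP.
    public class FakeMailService : IMailService
    {
        public List<(string To, string Subject, string Body)> Sent { get; } = new();

        public void Send(string to, string subject, string body)
            => Sent.Add((to, subject, body));
    }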

July 25, 2025 Score: 1 Rep: 569 Quality: Low Completeness: 100%

Is Dependency Injection useful when implementations are never expected to change?

Yes, DI is a mechanism that facilitates many good OO principles. If you have encapsulated logic into many discrete services, then DI provides a way to define those service implementations, the activation rules and the lifetime scopes in a centralized way.

DI is not actually about interfaces at all, although it complements the use of interfaces. It is about moving the initialization and activation boilerplate out of your business logic and facilitating overall efficiency by only initializing services and resources when and if you need them in a given context scope.

  • Most DI libraries do not constrain instance registration to interfaces alone; that would lead to a 1:1 relationship between class and interface definitions, create a lot of redundant interfaces, and conceptually tightly couple your implementation.
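
As a sketch of that centralization (again using Microsoft.Extensions.DependencyInjection; the service classes are placeholders): activation rules and lifetimes live in one place, and nothing is constructed until something actually asks for it.

    using Microsoft.Extensions.DependencyInjection;

    var services = new ServiceCollection();

    // One place decides how each service is built and how long it lives.
    services.AddSingleton<MailService>();       // one instance for the whole application
    services.AddScoped<SqliteRepository>();     // one instance per scope (e.g. per web request)
    services.AddTransient<ReportGenerator>();   // a fresh instance every time it is injected

    var provider = services.BuildServiceProvider();

    // Nothing above has been constructed yet; instances are created lazily, on first request.
    using (var scope = provider.CreateScope())
    {
        var repository = scope.ServiceProvider.GetRequiredService<SqliteRepository>();
    }

    // Placeholder services for the sketch.
    public class MailService { }
    public class SqliteRepository { }
    public class ReportGenerator { }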

MailService or SqliteRepository never change

This is covered in other answers; I like this explanation from Arseni Mourzenko. You might create an MsSqlRepository or some other type of repo in the future. It's not that these implementations are expected to change; it's that you might reasonably want to use a different underlying provider in the future, in parallel with or in place of the current ones. Communication and data providers are the most likely types of services to change when you hit scale issues ;)

We also don't write tests. Making the use of interfaces obsolete and therefore, to a large part, dependency injection.

This is the wrong assumption to make; interfaces aren't just for DI. Interfaces were an important OO concept long before DI and IoC. They help you compose complex polymorphic behaviours for classes that could not be achieved through inheritance alone, and they can reduce complexity and flatten class hierarchies.
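
As a sketch of that kind of composition (illustrative names): one class can satisfy several small contracts at once, which a single-inheritance hierarchy cannot express as cleanly.

    using System;

    public interface IExportable
    {
        string ToCsv();
    }

    public interface IAuditable
    {
        DateTime LastModified { get; }
    }

    // One class composes both behaviours; no deep base-class hierarchy is required.
    public class Invoice : IExportable, IAuditable
    {
        public DateTime LastModified { get; private set; } = DateTime.UtcNow;

        public string ToCsv() => $"invoice,{LastModified:O}";
    }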

We also don't write tests.

This is out of scope for this post, but you should be writing tests. Even for small projects, the benefit to your personal development alone from learning how to become proficient at writing tests far outweighs the time and effort it takes to write meaningful ones, especially compared to developing the bad cowboy habit of not writing tests. None of us are that good.

October 17, 2025 Score: 0 Rep: 49,602 Quality: Low Completeness: 20%

The point of an abstraction is not just that I can change the implementation. It also means that one developer who is great at creating user interfaces and knows basically nothing about databases can use an abstraction that leaves out all the implementation details he doesn't and shouldn't care about.

Just don’t fall into the trap of creating a fake abstraction that just repeats the database APIs. If you figure out that replacing the implementation would be hard work, that may be an indication that you didn’t use a good abstraction.