Question Details

No question body available.

Tags

microservices node.js laravel monolith

Answers (4)

April 18, 2025 Score: 7 Rep: 119,888 Quality: High Completeness: 70%

The client was very much inspired by microservices, so I had to convince him to go for a monolith which can later be migrated to microservices. Besides, to me, microservices for a 3-dev team is a joke.

There are definitely those who agree with you:

Microservices are not necessarily required to manage huge software, but rather to manage a huge number of people working on them.

The horror of microservices in small teams - medium.com

Microservices solve a few problems:

  • They allow large teams to break up work and have separate release cycles.

  • Scale: you can scale out different services independently.

  • They let you tackle a few large data problems more efficiently.

If you don't have these issues, then the added cost of microservices is not worth it.

You want a modular monolith

Are microservices worth it, when you have A SINGLE TEAM of 4 devs - reddit.com

Conversely,

They want a system in which modules can be subscribed to, independent of each other.

This means you have to maintain a hard isolation of these modules regardless of microservices.

Which means you're part way to microservices anyway.

Is this too much for a modular monolith system?

The danger of a modular monolith is it allows unwanted coupling between the modules. Since you have a requirement that forces you to decouple them that danger goes poof. You don't need the architecture to force this on you when the requirements do. Rather, you need an architecture that enables this decoupling.
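One way to make that enabling architecture concrete in PHP is to have each module expose only a narrow interface while its internals (including its database access) stay private to the module. A minimal sketch; all class and method names here are hypothetical:

```php
// Hypothetical sketch: hard module isolation inside a modular monolith.
// Other modules depend only on the interface, never on Billing's internals.

interface BillingApi
{
    /** @return array{invoiceId: string, total: float} */
    public function createInvoice(string $customerId, float $amount): array;
}

final class BillingModule implements BillingApi
{
    public function createInvoice(string $customerId, float $amount): array
    {
        // In reality this would persist to Billing's own database.
        return ['invoiceId' => uniqid('inv_'), 'total' => $amount];
    }
}

final class OrdersModule
{
    // The coupling point is explicit and limited to the interface.
    public function __construct(private BillingApi $billing) {}

    public function checkout(string $customerId, float $amount): string
    {
        $invoice = $this->billing->createInvoice($customerId, $amount);
        return $invoice['invoiceId'];
    }
}

$orders = new OrdersModule(new BillingModule());
echo $orders->checkout('cust_42', 99.95);
```

Because each cross-module call goes through an interface like this, extracting a module into a service later means swapping the implementation for an HTTP client without touching the callers.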

It may be worth analyzing how far you've been pushed towards microservices already.

Here are some microservice best practices. I'll contrast them with your modular monolith:

  1. Follow the Single-Responsibility Principle (SRP)

Still good advice in a modular monolith.

  2. Do Not Share Databases Between Services

You're already doing this.

  3. Clearly Define Data Transfer Object (DTO) Usage

Still good advice in a modular monolith.

  4. Use centralized observability tools

Still good advice in a modular monolith.

  5. Carefully consider your authorization options

May not be needed yet.

  6. Use an API gateway for HTTP

Still good advice in a modular monolith.

  7. Use the Right Communication Protocol Between Services

Still good advice in a modular monolith.

  8. Adopt a consistent authentication strategy

Still good advice in a modular monolith.

  9. Use containers and a container orchestration framework

May not be needed yet.

  10. Run health checks on your services

Still good advice in a modular monolith.

  11. Maintain consistent practices across your microservices

Still good advice in a modular monolith.

  12. Apply Resiliency and Reliability Patterns

Still good advice in a modular monolith.

  13. Ensure Idempotency of Microservices Operations

Still good advice in a modular monolith.

Evolving microservices - microservices.io

There are other things that go with microservices, like giving each service its own pipeline, that you can likely skip for now. But making your modules independently deployable has benefits even on a 3-person team.

Another issue is getting your modules to be the right size. I found some good advice on that here. Yes, making things as small as you can makes them simple. But that just pushes the complexity somewhere else. Find balance.

April 18, 2025 Score: 4 Rep: 47,298 Quality: Medium Completeness: 100%

You already built micro services.

  1. Modules communicate asynchronously via message queues.
  2. You have a micro service module that aggregates data from multiple services.
  3. This reporting module gets updates from other modules via messages in the message queue.

The only thing stopping you from calling these things micro services is making them independently deployable. You have a monolithic codebase, the added complexity of asynchronous programming, and maintenance of message queues while retaining the single deployment unit.

You sort of have the worst of both worlds.

Except that all depends.

Modularizing the Monolith by Jimmy Bogard has a great high-level conceptualization of that transition between a monolith and micro services. He does mention message queues as part of the solution, but this implies you have a concrete desire and plan to migrate from a monolith to micro services. Message queues are introduced in the later stages of the transition so you can deal with the change from synchronous programming and business processes to asynchronous ones.

I had to convince him to go for a monolith which can later be migrated to microservices.

I think you made the right call by not jumping right into micro services; however, this doesn't sound like a concrete plan to me. "It can be migrated later" also implies that you may never need to. Given this, I think message queues are an over-complication.

You built a modular monolith. You (hopefully) carved out the proper boundaries between modules. Your monolith is primed to be split into micro services by virtue of the fact you modularized it. Get rid of the message queues and just make in-process function calls. What do you gain with the asynchronous nature of message queues at this point, especially at this size?

I think you would benefit more from clearly defined interfaces between modules and the simpler nature of in-process function calls. The simpler nature of the system will allow you to add features and evolve it quicker, which is crucial at the early stages of developing a product. Clearly defined interfaces and in-process function calls lead to another simplification: no more chatter between the reporting module and the other two modules.

First, I propose a slight change in terminology to reset your frame of mind. The reporting module isn't doing "distributed joins"; it's aggregating data from multiple sources. This might be a niggling detail, but people see "joins" and immediately think "joining tables in a database" before immediately doing a face-plant into "there are three separate databases." I think this is the wrong frame of mind here. Aggregating data feels more appropriate, because it alleviates the mental overhead of trying to "join data". You just need to compile it; summarize it.
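As a sketch of that "aggregate, don't join" framing: the query module asks each module for its own pre-computed summary and merges the results, so no cross-database join ever happens. All class names below are made up for illustration:

```php
// Hypothetical sketch: each module summarizes its own data; the query
// module only compiles the summaries.

interface ReportSource
{
    /** @return array<string, int|float> summary figures owned by this module */
    public function summary(): array;
}

final class SalesModule implements ReportSource
{
    public function summary(): array
    {
        // In reality this would query the Sales module's own database.
        return ['orders' => 12, 'revenue' => 340.0];
    }
}

final class SupportModule implements ReportSource
{
    public function summary(): array
    {
        return ['tickets' => 3];
    }
}

final class QueryModule
{
    /** @param list<ReportSource> $sources */
    public function __construct(private array $sources) {}

    public function compile(): array
    {
        $report = [];
        foreach ($this->sources as $source) {
            // No cross-database join: merge each module's own figures.
            $report = array_merge($report, $source->summary());
        }
        return $report;
    }
}

$query = new QueryModule([new SalesModule(), new SupportModule()]);
print_r($query->compile());
```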

Stuff happens in Module A. The reporting module needs to respond. Stuff happens in Module B, and the reporting module needs to know about that, too. Here I see two simpler solutions than message queues:

  • Have Module A call a method on the Reporting Module at the appropriate time. Same for Module B. You introduce dependencies between modules, but these are pretty limited in scope.
  • Use a pub/sub pattern to promote looser coupling between modules. The application housing the modules should be wiring these things together (see also composition root).
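The first option can be as small as an interface plus a direct call. A minimal sketch, with all names made up for illustration:

```php
// Hypothetical sketch: Module A notifies the reporting module through a
// plain in-process call instead of publishing to a message queue.

interface ReportingApi
{
    public function recordEvent(string $type, array $data): void;
}

final class ReportingModule implements ReportingApi
{
    /** @var list<array{type: string, data: array}> */
    private array $events = [];

    public function recordEvent(string $type, array $data): void
    {
        $this->events[] = ['type' => $type, 'data' => $data];
    }

    public function count(): int
    {
        return count($this->events);
    }
}

final class ModuleA
{
    // Module A depends only on the narrow ReportingApi interface.
    public function __construct(private ReportingApi $reporting) {}

    public function doBusinessThing(): void
    {
        // ... Module A's domain logic ...
        $this->reporting->recordEvent('thing_done', ['at' => time()]);
    }
}

$reporting = new ReportingModule();
(new ModuleA($reporting))->doBusinessThing();
```

The dependency is limited to one interface, so swapping the in-process implementation for a queue publisher later is a one-class change.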

PHP doesn't seem to support events in the traditional sense like, say, C#, but a publish/subscribe pattern isn't too difficult to create from scratch. A generic event class, event names, and an interface to publish, subscribe, and unsubscribe would suffice. You can try searching php publish subscribe for some options. A cursory search landed me here: https://github.com/Superbalist/php-pubsub — note that I've never used it before, but it supports local events:

$adapter = new \Superbalist\PubSub\Adapters\LocalPubSubAdapter();
$adapter->subscribe('my_channel', function ($message) {
    var_dump($message);
});

$adapter->publish('my_channel', 'Hello World!');

Pass the same $adapter around to each module and you've made your own local, synchronous event bus. You get the ability to step through method calls line-by-line in an IDE — essential for quick turnaround times when squashing bugs on what is, presumably, a tight deadline.
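If you'd rather not add a dependency, the from-scratch version really is small. A minimal sketch of the same synchronous, in-process bus (the class name is made up):

```php
// Hypothetical sketch: a synchronous, in-process publish/subscribe bus.

final class EventBus
{
    /** @var array<string, list<callable>> handlers keyed by channel name */
    private array $subscribers = [];

    public function subscribe(string $channel, callable $handler): void
    {
        $this->subscribers[$channel][] = $handler;
    }

    public function publish(string $channel, mixed $message): void
    {
        foreach ($this->subscribers[$channel] ?? [] as $handler) {
            $handler($message); // synchronous: runs in the caller's stack
        }
    }
}

$bus = new EventBus();
$bus->subscribe('order.created', function ($message) {
    echo "reporting saw: {$message}\n";
});
$bus->publish('order.created', 'order #1001');
```

Because delivery is synchronous, a breakpoint inside a handler still shows the publishing module in the same call stack, which preserves the debuggability argument above.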

I think there are simpler ways to achieve modularity and prime your application to be split up later into micro services. It starts with ditching message queues. Only once you've reached a scale where micro services actually solve more problems than they introduce would I add message queues, as a temporary stop-gap between "monolith" and "micro services."

In your case, I would opt for faster and simpler development that facilitates refactoring. A monolith achieves this provided the code is in the same repository and you can trace through method calls in a debugger without running multiple processes and multiple debuggers. If you are early on in the development of this product, speed is of the essence, and a simpler architecture that still supports change will serve you better.

Either that or finish the transition to micro services now, because you're about 80% the way there. You just need to package and deploy these things separately. Don't sit on the fence. Choose a side and run with it.

April 18, 2025 Score: 3 Rep: 85,874 Quality: Medium Completeness: 40%

Obviously it's hard to say without seeing the code, but to simplify it further:

  1. You could drop RabbitMQ. Given that you have a monolith design, the messages can be sent with in-process function calls. No need for message queues.

  2. You could drop Redis. With only 25 users it seems unlikely that you will need a distributed cache. Just run the query service on one box.
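For the second point, a per-process cache with a TTL is often enough at that scale. A hypothetical sketch (class name made up; Laravel's built-in `Cache::remember` with the `array` or `file` driver offers similar behavior without Redis):

```php
// Hypothetical sketch: an in-process TTL cache replacing a distributed
// Redis cache for a query service running on a single box.

final class InMemoryCache
{
    /** @var array<string, array{value: mixed, expires: int}> */
    private array $store = [];

    public function remember(string $key, int $ttlSeconds, callable $compute): mixed
    {
        $now = time();
        if (isset($this->store[$key]) && $this->store[$key]['expires'] > $now) {
            return $this->store[$key]['value']; // cache hit: skip recomputation
        }
        $value = $compute();
        $this->store[$key] = ['value' => $value, 'expires' => $now + $ttlSeconds];
        return $value;
    }
}

$cache = new InMemoryCache();
$report = $cache->remember('daily_report', 300, function () {
    // The expensive aggregation query would go here.
    return ['total' => 42];
});
```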

My concern going forward would be the query service. It seems to couple the modules. If different users have different module sets, won't you end up having to store all permutations of every query?

Lastly: have you really avoided microservices? "query module reads it gathers the data from different services (calling their Rest end points)"

April 18, 2025 Score: 1 Quality: Low Completeness: 30%

My understanding is that the query module is a collection of database views bringing together information from the module A and module B databases. Probably each of those databases is exposed by its own application that reads and writes it.

query module reads it gathers the data from different services (calling their Rest end points)

The architecture is already a service-oriented architecture (SOA). It may look like a monolith because all the applications are packaged as a single package, but a useful detail to consider is: why call it a monolith when the communication between modules is performed over HTTP? To see where this can lead, consider that each method in the application's code could be exposed through its own REST endpoint, with its direct calls replaced by calls over HTTP/HTTPS. With that kind of setup, past a certain stage the development burden is replaced by an administrative one, which in turn shifts the burden onto the infrastructure; in the case of under-developed private clouds that can become unmanageable, a downside the popular public clouds have overcome. Therefore I think there are two questions: (1) for the short/medium term, "what is the time span over which development can remain a tolerable bottleneck for the business?" and (2) for the long term, "what resources can the business invest in the infrastructure?", or, rephrased, "what cost of infrastructure can the business support in the long term?"