Engineering

When Not to Use Microservices (And Why Everyone Wants Them Anyway)

Meghna Bharadwaj
September 9, 2025

Microservices are the shiny toy in software architecture. Netflix uses them, Amazon uses them, everyone on tech Twitter seems to be deploying “hundreds of services,” and suddenly everyone with a 5-person dev team thinks they need them too.

But here’s the problem: microservices are distributed systems. And distributed systems come with very real costs: latency, maintaining consistency, retries, monitoring headaches, and those fun late-night pager alerts when “Service A can’t talk to Service B.”

“Sometimes they’re like using a chainsaw to cut a stick of butter — powerful, but messy and dangerous if you don’t need it.”

Let’s go step by step, with some real-world analogies, and see where microservices actually hurt more than they help.


🍔 The Burger Joint (Small Product → Monolith)

You open a burger shop. Do you:
Build one kitchen where everything happens?
Or build 10 kitchens—one for buns, one for patties, one for lettuce, and so on?

If you picked #2, congrats—you’ve just reinvented microservices. And you’re also running between kitchens while your customers wait hungry.

[Image: burger joint]

👉 Early on, one kitchen (monolith) is faster, cheaper, and saner. When the line gets long, you add another identical kitchen (replica)—not 10 new kitchens.

Technical takeaway:
Small product → monolith. Add boundaries inside the codebase (modules) instead of network calls.
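
A minimal sketch of what “boundaries inside the codebase” means in practice. The `inventory` and `orders` modules are invented for illustration; the point is that crossing the boundary is a plain function call, not an HTTP request.

```python
# inventory module: one part of the codebase owns stock logic.
_stock = {"burger": 10}

def reserve(item: str) -> bool:
    """Reserve one unit. An in-process call, not a network hop."""
    if _stock.get(item, 0) > 0:
        _stock[item] -= 1
        return True
    return False

# orders module: another part of the codebase calls it directly.
def place_order(item: str) -> str:
    # A microservice split would need an HTTP client, retries, timeouts,
    # and serialization here. A module boundary needs one function call.
    return "confirmed" if reserve(item) else "out of stock"

print(place_order("burger"))  # confirmed
```

Same separation of concerns, none of the distributed-systems tax. When scale demands it later, the module boundary is already the seam you would cut along.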


👩‍💻 The HR App (Small Team → Monolith)

Picture this: a small HR department asks you to build an internal leave-tracking app. Your team? Just four developers.
Now imagine you split it into 10 microservices:

  • auth-service for logins
  • payroll-service for salaries
  • leave-service for vacation requests
  • notification-service for reminders
  • … and so on


Each service has its own repo, pipeline, deployment rules, observability, and maybe even its own database.

Result? You’ll spend more time fixing pipelines than adding features. Your HR manager will still be waiting for that “carry forward leave” feature while you’re debugging why the notification-service can’t talk to the auth-service.

Technical takeaway:
Small teams → modular monolith. One deployment, less ops overhead, faster delivery.
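
To make the takeaway concrete, here is a toy version of that HR app as a modular monolith: the same auth/leave split, but as classes wired together in one process and shipped as one deployment. All names and the in-memory stores are invented for illustration.

```python
class AuthModule:
    """Owns logins — the same responsibility auth-service would have."""
    def __init__(self):
        self._users = {"meg": "s3cret"}  # toy in-memory store

    def check(self, user: str, password: str) -> bool:
        return self._users.get(user) == password


class LeaveModule:
    """Owns vacation requests, calling auth in-process."""
    def __init__(self, auth: AuthModule):
        self.auth = auth              # direct reference, no service discovery
        self.balances = {"meg": 12}

    def request_leave(self, user: str, password: str, days: int) -> str:
        if not self.auth.check(user, password):   # plain method call
            return "unauthorized"
        if self.balances.get(user, 0) < days:
            return "insufficient balance"
        self.balances[user] -= days
        return "approved"


# One deployment: everything is constructed in one place and shipped together.
auth = AuthModule()
leave = LeaveModule(auth)
print(leave.request_leave("meg", "s3cret", 3))  # approved
```

One repo, one pipeline, one thing to observe. The leave module can never fail to “talk to” the auth module, because there is no network between them.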


🏦 Payroll and Banking (Consistency-Critical → Monolith)

Banking, payroll, compliance-heavy apps: these live and die on consistency.

Now imagine payroll is split into three services — attendance, payroll, and bank transfer. That’s three separate systems, three databases, and three places for partitions to creep in. One employee’s attendance syncs late, another’s payroll runs before the bank-transfer service catches up. Suddenly, someone’s salary doesn’t show up.

Try explaining “eventual consistency” to an employee who didn’t get paid.

[Image: ACID]

👉 In banking, consistency always wins over availability. If the system is under partition (say, the DB cluster loses connectivity), you’d rather block transactions than allow double-spends. But if you introduce unnecessary partitions by scattering core payroll logic across microservices, you’ve just multiplied the failure modes.

⚙️ Technical takeaway
Strong consistency needs → monolith with a single ACID database. Add microservices only for non-critical extensions (dashboards, notifications, reports).
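
Here is what “a single ACID database” buys you, sketched with SQLite from the standard library. The table and column names are made up; the point is that attendance, payroll, and the transfer ledger sit in one database, so a salary run is a single transaction that either fully happens or fully doesn’t.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE attendance (emp TEXT, days INTEGER);
    CREATE TABLE payroll    (emp TEXT, amount INTEGER);
    CREATE TABLE transfers  (emp TEXT, amount INTEGER);
    INSERT INTO attendance VALUES ('alice', 20);
""")

def run_payroll(emp: str, daily_rate: int) -> None:
    with db:  # one transaction: commit on success, rollback on any exception
        (days,) = db.execute(
            "SELECT days FROM attendance WHERE emp = ?", (emp,)
        ).fetchone()
        salary = days * daily_rate
        db.execute("INSERT INTO payroll VALUES (?, ?)", (emp, salary))
        db.execute("INSERT INTO transfers VALUES (?, ?)", (emp, salary))
        # If the transfer insert failed, the payroll row above would roll
        # back with it — no "payroll ran but the salary never arrived".

run_payroll("alice", 100)
print(db.execute("SELECT amount FROM transfers").fetchone())  # (2000,)
```

Split those three tables across three services with three databases, and this one `with db:` block becomes a saga with compensating transactions, retries, and reconciliation jobs.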


🎮 Multiplayer Games (Latency-Sensitive → Monolith)

You’re building a real-time battle royale.
If “Health Service” and “Position Service” are separate microservices, every move = a network call. Add retries, packet loss, and now your players see enemies teleporting (rubber-banding).

Anyone who has played online games knows the pain of rubber banding—you run forward, then suddenly snap back five steps because the server and client weren’t in sync. That’s distributed lag in action.

The same applies to high-frequency trading or IoT control loops. If your thermostat turns the heater off three seconds late because the decision-service was waiting for the sensor-service, you’ll notice.

Technical takeaway:
Latency-sensitive systems → monolith for the core loop. Use microservices only for non-critical extras (analytics, leaderboards, dashboards).
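
A rough illustration (not a benchmark) of what “every move = a network call” costs. The 5 ms round-trip is an invented stand-in for one service-to-service hop:

```python
import time

def update_position_local(state: dict, dx: int) -> None:
    state["x"] += dx  # in-process: effectively free

def update_position_rpc(state: dict, dx: int, rtt: float = 0.005) -> None:
    time.sleep(rtt)   # simulate one service-to-service round trip
    state["x"] += dx

state = {"x": 0}

t0 = time.perf_counter()
for _ in range(60):                  # one second's worth of 60-tick updates
    update_position_local(state, 1)
local = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(60):
    update_position_rpc(state, 1)    # same work, plus a network hop per tick
rpc = time.perf_counter() - t0

print(f"local: {local * 1000:.2f} ms, rpc: {rpc * 1000:.0f} ms")
# 60 hops at ~5 ms each burns roughly 300 ms per second of game time —
# and that's before retries and packet loss make players rubber-band.
```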

🏢 Enterprises With Legacy Systems (Integration First → Not Microservices)

If your enterprise already runs on a forest of 20-year-old ERP systems, your problem isn’t “we need microservices.” Your problem is integration.
Microservices won’t magically modernize COBOL. What works better here is:

  • SOA (Service-Oriented Architecture) for contracts
  • Hub-and-spoke ESB for data flow
  • Event-driven pub/sub for messaging

Technical takeaway:
Legacy-heavy enterprises → focus on integration patterns first. Microservices can come later, not as a first step.
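
The pub/sub pattern in that list can be sketched in a few lines. This is a toy in-memory version purely to show the decoupling; a real deployment would sit on a broker (Kafka, RabbitMQ, etc.), and the topic and handler names here are invented.

```python
from collections import defaultdict

class Bus:
    """Toy event bus: publishers never know who is listening."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

bus = Bus()
received = []

# The legacy ERP only has to publish one event. New systems subscribe
# later without the ERP changing at all — that's the integration win.
bus.subscribe("employee.hired", lambda e: received.append(("payroll", e)))
bus.subscribe("employee.hired", lambda e: received.append(("badge", e)))
bus.publish("employee.hired", {"id": 42})

print(received)  # [('payroll', {'id': 42}), ('badge', {'id': 42})]
```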


🛠️ Early-Stage Startups (Avoid Over-Engineering → Monolith)

Startups often fall into this trap: “We’ll build it microservices-first so we’re ready to scale to millions of users.”
Reality check: you have five customers, three of them are your friends, and you’re still pivoting the product every week. Splitting into microservices too early just means you’ll burn time on distributed tracing, monitoring, and DevOps pipelines—before you even have product-market fit.

Technical takeaway:
MVPs and early products → monolith. Split into microservices only if scale forces the issue.


🚦 When You Actually Need Microservices

Now picture Amazon on Prime Day.
Millions of users swarm the site, regional inventory checks fly in from warehouses, and hundreds of dev teams are pushing experiments at once.
Here, a single monolith would choke. Why? Because traffic distribution isn’t uniform:

  • Search → 10M queries per minute.
  • Payments → 1M transactions per minute.
  • Recommendations → 500k requests per minute.

👉 Scaling them together is wasteful. You don’t need 10M-ready infra for recommendations if search is the bottleneck.
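
A back-of-the-envelope version of that argument, using the traffic figures above. The per-replica capacities are invented numbers, purely to show the arithmetic:

```python
from math import ceil

traffic = {                  # requests per minute, from the figures above
    "search": 10_000_000,
    "payments": 1_000_000,
    "recommendations": 500_000,
}
capacity_per_replica = {     # what one instance can handle (assumed)
    "search": 50_000,
    "payments": 10_000,
    "recommendations": 25_000,
}

# Independent scaling: each service gets exactly the replicas its load needs.
replicas = {svc: ceil(traffic[svc] / capacity_per_replica[svc]) for svc in traffic}
print(replicas)  # {'search': 200, 'payments': 100, 'recommendations': 20}
# A monolith forces all three workloads to scale together at the
# search tier's size — paying for 10M-ready infra everywhere.
```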

That’s where microservices shine:

  • Single Responsibility Principle applied to scaling → each service (search, payments, recommendations) owns its workload and scales independently.
  • Variable distribution of traffic → hot paths get the most infra, cold paths don’t waste resources.
  • Team autonomy → search team can reindex with Solr/Elastic, payments team integrates new gateways, recommendations team tweaks ML models — all without blocking each other.

[Image: platform hub]

⚙️ Technical takeaway
Prime Day works only because the system is broken into bounded contexts with independent scaling. Without that, one overloaded module would take the entire platform down.


✈️ Airlines (Why Data Separation Makes Sense)

Now let’s switch to airline bookings.
Buying a ticket touches completely different domains:

  • Seat inventory → read-heavy, must be consistent across partner airlines.
  • Payments → write-heavy, regulatory compliance, retries.
  • Loyalty program → stateful, lower traffic, but strict consistency.
  • Partner APIs → slow, flaky, outside your control.

If all of this lived in one database and codebase, you’d constantly be stepping on your own toes.
A schema change in loyalty could lock up seat inventory.
A payments surge could starve API threads needed for partner sync.

👉 This is where data separation + bounded contexts are critical:

  • Inventory has its own datastore (fast reads, replication across partners).
  • Payments own their DB (ACID, compliance).
  • Loyalty manages its own records (points, redemptions).
  • Integrations isolate failures (partner API down ≠ bookings blocked).
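
The “partner API down ≠ bookings blocked” bullet is usually implemented with a circuit breaker. Here is a minimal sketch; the threshold and fallback are invented for illustration:

```python
class CircuitBreaker:
    """After repeated failures, stop calling the dependency and degrade."""
    def __init__(self, max_failures: int = 3):
        self.failures = 0
        self.max_failures = max_failures

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, fn, fallback):
        if self.open:                # stop hammering a dead partner API
            return fallback()
        try:
            result = fn()
            self.failures = 0        # healthy again: reset the count
            return result
        except Exception:
            self.failures += 1
            return fallback()

def flaky_partner_sync():
    raise TimeoutError("partner API down")

breaker = CircuitBreaker()
for _ in range(5):
    # The booking flow gets a degraded answer instead of blocking.
    status = breaker.call(flaky_partner_sync,
                          lambda: "booked (partner sync deferred)")

print(status, "| circuit open:", breaker.open)
```

The failure stays inside the integration boundary: partner sync degrades, the booking path keeps moving.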

⚙️ Technical takeaway
Airline systems prove why microservices + domain-driven boundaries make sense:

  • Data models are inherently different (inventory ≠ payments ≠ loyalty).
  • Scaling needs diverge (read-heavy vs. write-heavy vs. API-bound).
  • Fault isolation prevents one slow/flaky service from taking the whole system down.

Let’s sum it up

The debate isn’t really monolith vs. microservices. It’s about what failure you’re willing to tolerate.

  • In banking, failure means money lost → so you choose strong consistency and block until you’re sure.
  • In games, failure means a teleporting enemy → so you allow inconsistency and keep the game moving.
  • In airlines, failure means a booking system freeze → so you isolate services so one bad API doesn’t sink the whole plane.
  • In small teams or small products, the failure you can’t tolerate is wasting cycles on infra instead of shipping → so you keep it simple with a monolith until scale forces a split.

Microservices don’t make systems magically better.
They just give you levers: scale one service without touching the rest, evolve teams independently, isolate risks.
The cost? Extra complexity, more moving parts, and late-night pager duty when your services don’t talk.

A monolith gives you the opposite trade-off: simplicity, speed, fewer moving parts — but less flexibility when you do hit scale or complexity walls.

👉 The real skill isn’t “knowing microservices.” It’s knowing when to pay the cost of distribution, and when to say “no thanks, one kitchen is enough.”

In other words, architecture isn’t about copying Netflix’s blog posts.
It’s about asking — what’s the failure mode my system can live with, and which one will kill me?